Before we explain why matrices are interesting, let's start by saying that rendering an image by keeping all the 3D objects and the camera at the origin would be quite limited. In essence, matrices play an essential role in moving objects, lights and cameras around in the scene so that you can compose your image the way you want. Our basic renderer wouldn't produce very exciting images if we ignored them altogether, and you will realise as you develop your own 3D renderer that you won't be able to ignore them for very long. So let's study them without any more delay.
Introduction to Matrices: they Make Transformations Easy!
There is really nothing complicated about matrices; if some people fear them, it is mostly because they don't fully comprehend what they represent and how they work. They play an instrumental part in the graphics pipeline and you will see them used regularly in the code of 3D applications.
In the previous chapter we mentioned that it was possible to translate or rotate points by using linear operators. For example, we showed that we could translate a point by adding some values to its coordinates, and that we could rotate a vector by using trigonometric functions. In short (and this is not a mathematical definition of what matrices are), a matrix is just a way of combining any sequence of these linear transformations (scale, rotation, translation) into one single structure. Multiplying a point or a vector by this structure (the matrix) gives us the transformed point or vector. For example, we can create a matrix that will rotate a point by 90 degrees around the x-axis, scale it by 2 along the z-axis (the scale applied to the point is (1, 1, 2)) and then translate it by (-2, 3, 1). We could do this by performing a succession of linear transformations on the point, but that would potentially mean writing a lot of code:
Vec3f translate(Vec3f P, Vec3f translateValue) { ... }
Vec3f scale(Vec3f P, Vec3f scaleValue) { ... }
Vec3f rotate(Vec3f P, Vec3f axis, float angle) { ... }
...
Vec3f P = Vec3f(1, 1, 1);
Vec3f translateVal(-1, 2, 4);
Vec3f scaleVal(1, 1, 2);
Vec3f axis(1, 0, 0);
float angle = 90;
Vec3f Pt;
Pt = translate(P, translateVal);  // translate P
Pt = scale(Pt, scaleVal);  // then scale the result
Pt = rotate(Pt, axis, angle);  // finally rotate the point
As you can see this code is not very compact. But if we use a matrix we can simply write:
Matrix44f M(...);  // set the matrix for translation, rotation, scale
Vec3f P = Vec3f(1, 1, 1);
Vec3f Ptransformed = P * M;  // do everything at once: translate, rotate, scale
Transforming P to achieve a similar effect is simply done by multiplying the point by a matrix (M). We are just showing here what matrices are used for in the graphics pipeline and what advantages they offer. In this particular example, they combine any of the three basic geometric transformations we can perform on points and vectors (scale, translation, rotation) in a very easy, fast and compact way. What we have to do now is explain to you how and why that works (it will take us a few chapters though).
Matrices, What Are They?
Figure 1: a [4x4] matrix.
What are matrices, really? Instead of answering with an abstract mathematical definition, we will start with concrete matrix examples; once we have seen a couple of them, extending the concept to its generic/mathematical form will be easier. If you have already read a few CG books, you may have seen matrices mentioned in quite a few places, and they often appear as two-dimensional arrays of numbers. To define a two-dimensional array of numbers we use the standard notation m x n, where m and n are two numbers that represent the size of this array. As you may have guessed, m and n respectively represent the number of rows and columns of the matrix. Rows are the horizontal lines of numbers in the 2D array and columns are the vertical ones. Here is an example of a [3x5] matrix:
$$\begin{bmatrix} 1&3&7&9&0\\ 3&3&0&8&3\\ 9&1&0&0&1 \end{bmatrix}$$
We call the numbers of the matrix the matrix coefficients (you might come across the terms entry or element, but coefficient is the one most often used in CG), and we usually use the subscripts i, j to point to a particular coefficient in the matrix. Matrices themselves are most of the time written with capital letters (M, A, B, etc.).
\(\scriptsize M_{ij}\) where \(\scriptsize i\) is the row and \(\scriptsize j\) is the column.
We will make a lot of simplifications about matrices for now. One of them is that in CG we mostly use matrices that are said to be square: matrices whose numbers of rows and columns, m and n, are equal. Typically, in CG, we are interested in [3x3] or [4x4] matrices, and we will tell you in the following chapters what they are and how to use them. This is a simplification, as we said, because in reality m and n can take any value and don't have to be equal. You can create a [3x1] matrix, a [6x6] matrix or a [4x2] matrix; they are all valid matrices. But as we said, in CG we will mainly be using [3x3] and [4x4] matrices.
A [3x3] \(\begin{bmatrix} 7&4&3\\ 2&0&3\\ 3&9&1\\ \end{bmatrix}\) and [4x4] \(\begin{bmatrix} 7&1&4&3\\ 2&0&0&3\\ 3&1&9&1\\ 6&6&5&4\\ \end{bmatrix}\) matrix.
Here is an example of how we can implement a [4x4] matrix class in C++ (note that we use the template mechanism in case we need the matrix to use float or double precision):
#include <cstdint>

template<typename T>
class Matrix44
{
public:
    Matrix44() {}
    const T* operator [] (uint8_t i) const { return m[i]; }
    T* operator [] (uint8_t i) { return m[i]; }
    // initialize the coefficients of the matrix with the coefficients of the identity matrix
    T m[4][4] = {{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}};
};

typedef Matrix44<float> Matrix44f;
These operators in the Matrix44 class:
const T* operator [] (uint8_t i) const { return m[i]; }
T* operator [] (uint8_t i) { return m[i]; }
are sometimes called access operators or accessors. They are used to access the coefficients of the matrix without having to explicitly access the member variable m[4][4]. Typically, you would access the coefficients this way:
Matrix44f mat;
mat.m[0][3] = 1.f;
But with the access operators, you can write:
Matrix44f mat;
mat[0][3] = 1.f;
Matrices can be multiplied with each other, and this operation is at the heart of the point- or vector-matrix transformation process. The result of a matrix multiplication (the technical term is matrix product, the product of two matrices) is another matrix:
$$M_3=M_1 * M_2$$
Figure 2: a matrix that transforms A to C can be obtained by multiplying a matrix M1 that transforms A to B with a matrix M2 that transforms B to C. The multiplication of any combination of matrices that transform A to C in successive steps will give the same matrix M3.
If you remember what we briefly mentioned in the introduction, a matrix defines, in a concise way, a combination of the linear transformations that can be applied to points and vectors (scale, rotation, translation). How that works is something we haven't explained yet, but we will address it very soon. What's important to understand now is that a matrix multiplication is a way of combining in one matrix the effect of two other matrices. In other words, the transformation that each of the matrices M1 and M2 would apply to a point or a vector can be combined into one single matrix M3. Imagine you need to transform a point from A to B using matrix M1 and then transform B to C using matrix M2. Multiplying M1 by M2 gives a matrix M3 which directly transforms A to C. A matrix obtained by multiplying two matrices is no different in nature from the two matrices it combines. What's important to note here is that if you have two other matrices M4 and M5 that respectively transform A to D and D to C, then the multiplication of M4 by M5 will give you M3 again (there is a unique matrix for each particular transformation).
Now, there is a rule about matrix multiplication which is not that important to know when you only deal with [4x4] matrices (and you will understand why soon), but for your general knowledge of the subject, we will explain it here (it will become particularly important to remember when we deal with point- and vector-matrix multiplication). Two matrices M1 and M2 can only be multiplied if the number of columns in M1 is equal to the number of rows in M2. In other words, if two matrices can be written as m x p and p x n, they can be multiplied, and the result is a matrix of size m x n. Two matrices of size p x m and n x p cannot be multiplied if m and n are not equal. A [4x2] matrix and a [2x3] matrix can be multiplied and will give a [4x3] matrix. The multiplication of two [4x4] matrices gives a [4x4] matrix (this rule isn't so important for us because we will almost always use [4x4] matrices, so we generally won't need to worry about whether two matrices can be multiplied or not).
$$[M \times P] * [P \times N] = [M \times N]$$
Let's now see how we multiply two matrices together, which turns out to be a mathematical operation on the coefficients of the two input matrices. In other words, what we are interested in is how we compute the coefficients of the resulting matrix. It is quite simple as long as you remember the rule. We said previously that the coefficients of a matrix are defined by their row and column indices; notation-wise, we use the subscripts i and j to denote these row and column indices. So imagine that we want to find out the value of the coefficient M3(i,j) in the matrix M3. Let's say that i=1 and j=2 (note that index 0 indicates either the first row or the first column of the matrix and index 3 indicates the last row or column; arrays start at index 0 in C++). To compute M3(1,2) we select all the coefficients of the second row in M1 (where M1 is a [4x4] matrix) and all the coefficients of the third column in M2 (where M2 is also a [4x4] matrix). That gives us two sequences of four numbers that we multiply with each other and sum up in the following way:
$$M1= \begin{bmatrix} c_{00}&c_{01}&c_{02}&c_{03}\\ \color{red}{c_{10}}&\color{red}{c_{11}}&\color{red}{c_{12}}&\color{red}{c_{13}}\\ c_{20}&c_{21}&c_{22}&c_{23}\\ c_{30}&c_{31}&c_{32}&c_{33}\\ \end{bmatrix} \text{ } M2= \begin{bmatrix} c_{00}&c_{01}&\color{red}{c_{02}}&c_{03}\\ c_{10}&c_{11}&\color{red}{c_{12}}&c_{13}\\ c_{20}&c_{21}&\color{red}{c_{22}}&c_{23}\\ c_{30}&c_{31}&\color{red}{c_{32}}&c_{33}\\ \end{bmatrix} $$ $$M3_{12}= \begin{array}{l} M1_{10}*M2_{02} + \\ M1_{11}*M2_{12} + \\ M1_{12}*M2_{22} + \\ M1_{13}*M2_{32} \end{array} $$
We can use this process for all the coefficients of M3: take the row and column indices of the coefficient we want to compute, use these indices to select the coefficients of the corresponding row in M1 (M1(i,0), M1(i,1), M1(i,2), M1(i,3)) and the coefficients of the corresponding column in M2 (M2(0,j), M2(1,j), M2(2,j), M2(3,j)). Once we have these numbers, we combine them using the formula shown above: multiply the coefficients that share the same index with each other and sum up the results:
$$M3_{ij}= \begin{array}{l} M1_{i0}*M2_{0j} + \\ M1_{i1}*M2_{1j} + \\ M1_{i2}*M2_{2j} + \\ M1_{i3}*M2_{3j} \end{array} $$
Let's see how we could code this operation in C++. Let's define a matrix, as a two-dimensional array of 4 by 4 floats. Here is the function that can be used to multiply two matrices together:
Matrix44 operator * (const Matrix44& rhs) const
{
    Matrix44 mult;
    for (uint8_t i = 0; i < 4; ++i) {
        for (uint8_t j = 0; j < 4; ++j) {
            mult[i][j] = m[i][0] * rhs[0][j] +
                         m[i][1] * rhs[1][j] +
                         m[i][2] * rhs[2][j] +
                         m[i][3] * rhs[3][j];
        }
    }
    return mult;
}
Once you know how the multiplication of two matrices is computed, it is not hard to observe that multiplying M1 by M2 doesn't give the same result as multiplying M2 by M1: matrix multiplication is not commutative.
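Here is a quick sanity check you can run with the Matrix44f class and the operator * defined above. The way the scale and translation coefficients are laid out below follows the row-major, point-times-matrix convention used in this lesson (something we only justify in the next chapters), so treat the layout as an assumption made for the sake of the example:

Matrix44f S; // identity by default; turn it into a scale of 2 along the x-axis
S[0][0] = 2;
Matrix44f T; // identity by default; store a translation of 3 along the x-axis in the fourth row
T[3][0] = 3;
Matrix44f A = S * T; // scale first, then translate: A[3][0] is 3
Matrix44f B = T * S; // translate first, then scale: B[3][0] is 6
// A and B are different matrices: matrix multiplication is not commutative

Intuitively, scaling a point that has already been translated also scales the translation, which is why the order in which the matrices are multiplied matters.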
We haven't explained how and why matrices work yet, but don't worry: all these important things will be explained in the next chapter. From this chapter you need to remember that a matrix is a two-dimensional array of numbers. The size of the matrix is denoted m x n, where m is the number of rows and n the number of columns. You have learned that two matrices can be multiplied only if the matrix on the left side of the multiplication has a number of columns equal to the number of rows of the matrix on the right side of the multiplication. For instance, two matrices whose sizes are m x p and p x n can be multiplied with each other. The resulting matrix combines the transformations of the two matrices used in the multiplication: if M1 transforms a point from A to B and M2 transforms a point from B to C, then M3, the result of M1 multiplied by M2, transforms this point from A to C. Finally, we have learned how to compute the coefficients of a matrix resulting from a matrix multiplication. It is also important to remember that matrix multiplication is not commutative. Practically, it means that we need to pay attention to the order in which we multiply matrices with each other. This order matters, and if your code doesn't work, you may want to check the order in which matrices are multiplied with each other.
Hadamard Semidifferential, Oriented Distance Function, and some Applications
Michel C. Delfour
Département de mathématiques et de statistique and Centre de recherches mathématiques, Université de Montréal, CP 6128, succ. Centre-ville, Montréal (Qc), Canada H3C 3J7
Received: December 2020; Revised: March 2021; Early access: April 2021
Fund Project: This research was supported by the Natural Sciences and Engineering Research Council of Canada through Discovery Grant RGPIN-05279-2017 and a grant from the Collaborative Research and Training Experience (CREATE) program in Simulation-based Engineering Science.
The Hadamard semidifferential calculus preserves all the operations of the classical differential calculus including the chain rule for a large family of non-differentiable functions including the continuous convex functions. It naturally extends from the $ n $-dimensional Euclidean space $ \operatorname{\mathbb R}^n $ to subsets of topological vector spaces. This includes most function spaces used in Optimization and the Calculus of Variations, the metric groups used in Shape and Topological Optimization, and functions defined on submanifolds.
Certain set-parametrized functions such as the characteristic function $ \chi_A $ of a set $ A $, the distance function $ d_A $ to $ A $, and the oriented (signed) distance function $ b_A = d_A-d_{ \operatorname{\mathbb R}^n\backslash A} $ can be used to identify a space of subsets of $ \operatorname{\mathbb R}^n $ with a metric space of set-parametrized functions. Many geometrical properties of domains (convexity, outward unit normal, curvatures, tangent space, smoothness of boundaries) can be expressed in terms of the analytical properties of $ b_A $, and a simple intrinsic differential calculus is available for functions defined on hypersurfaces without appealing to local bases or Christoffel symbols.
The object of this paper is to extend the use of the Hadamard semidifferential and of the oriented distance function from finite to infinite dimensional spaces with some selected illustrative applications from shapes and geometries, plasma physics, and optimization.
Keywords: Differentiation theory, differentiable maps, calculus of functions between topological vector spaces, optimization of shapes, sensitivity analysis, calculus of variations, plasma physics.
Mathematics Subject Classification: Primary: 58C20, 58C25, 46G05, 46T20, 26E15, 26E20, 49Q10, 49Q12.
Citation: Michel C. Delfour. Hadamard Semidifferential, Oriented Distance Function, and some Applications. Communications on Pure & Applied Analysis, doi: 10.3934/cpaa.2021076
Review of World Economics
May 2014, Volume 150, Issue 2, pp 241–275
Does importing more inputs raise exports? Firm-level evidence from France
Maria Bas
Vanessa Strauss-Kahn
Does an increase in imported inputs raise exports? We provide empirical evidence on the direct and indirect channels via which importing more varieties of intermediate inputs increases export scope: (1) imported inputs may enhance productivity and thereby help the firm to overcome export fixed costs (the indirect productivity channel); (2) low-priced imported inputs may boost expected export revenue (the direct-cost channel); and (3) importing intermediate inputs may reduce export fixed costs by providing the quality/technology required in demanding export markets (the quality/technology channel). We use firm-level data on imports at the product (HS6) level provided by French Customs for the 1996–2005 period, and distinguish the origin of imported inputs (developing vs. developed countries) in order to disentangle the different productivity channels above. Regarding the indirect effect, imported inputs raise productivity, and thereby exports, both through greater complementarity of inputs and technology/quality transfer. Controlling for productivity, imports of intermediate inputs from developed and developing countries also have a direct impact on the number of exported varieties. Both quality/technology and price channels are at play. These findings are robust to specifications that explicitly deal with potential reverse causality between imported inputs and export scope.
Keywords: Firm heterogeneity · Imported inputs · TFP · Export scope · Varieties · Firm-level data
We have benefited from discussions with Matthieu Crozet, Sandra Poncet, Andrew Bernard and Tibor Besedes.
We also thank seminar participants at LSE, NYU, CEPII, EEA (Oslo), EITI (Tokyo), and ETSG (Lausanne) for useful comments. We are responsible for any remaining errors.
Appendix 1: A simple model
In this Appendix, we present a simple partial equilibrium model which sheds light on the mechanisms via which imported inputs affect firm TFP and export scope. There is a continuum of domestic firms in the economy that supply differentiated final goods under monopolistic competition. Firms differ in their initial productivity draws (\(\varphi\)) which are introduced as in Melitz (2003). In order to produce a variety of final good y, the firm combines three factors of production: labor (L), capital (K) and a range of differentiated intermediate goods (\(M_{ij}\)) produced by industry i, that can be purchased in the domestic or foreign markets. If the firm sources its inputs internationally, it may import intermediate goods from two different sets of countries distinguished by their levels of development. As is traditionally assumed, countries in the North have higher GDP per capita than those in the South. The technology is represented by a Cobb–Douglas production function with factor shares \(\eta +\beta+\sum\nolimits_{i=1}^{I}\alpha_{i}=1\) (for simplicity, we omit the firm subscript):
$$ y=\varphi L^{\eta }K^{\beta }\prod\limits_{j\in \{D,N,S\}}\prod\limits_{i=1}^{I}\left( M_{ij}\right)^{\alpha_{i}} $$
where \(M_{ij}=\left(\sum_{v\in I_{ij}}\chi_{ij}m_{iv}^{\frac{\sigma_{i}-1}{\sigma_{i}}}\right)^{\frac{\sigma_{i}}{\sigma_{i}-1}}\).
The range of domestic and imported varieties of intermediate goods in industry i is aggregated by the CES functions \(M_{ij}\), where i is the industry, j the country region (i.e., domestic, North or South), \(I_{j}=\{1,\ldots,M_{j}\}\) and \(\sigma_{i}>1\) is the elasticity of substitution across the varieties in industry i. The technology/quality parameter, \(\chi_{ij}\), captures the fact that imported inputs may enhance firm efficiency differently depending on their origin. We assume that \(\chi_{ij}\) is greater than one for inputs sourced from the most developed countries, i.e., \(j=N\), and equal to one otherwise. In this setup, each foreign country may produce one variety of inputs per industry, so that we match our empirical framework where a variety is defined as a product-country pair.
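To make the next step explicit (this is standard CES algebra spelled out for the reader, not text from the paper), note that when every one of the \(N_{ij}\) varieties in industry i and region j is used at a common level \(\overline{m}_{j}\)—the symmetry assumption introduced just below—the aggregator collapses to

$$ M_{ij}=\left(\sum_{v\in I_{ij}}\chi_{ij}\overline{m}_{j}^{\frac{\sigma_{i}-1}{\sigma_{i}}}\right)^{\frac{\sigma_{i}}{\sigma_{i}-1}}=\left(N_{ij}\chi_{ij}\overline{m}_{j}^{\frac{\sigma_{i}-1}{\sigma_{i}}}\right)^{\frac{\sigma_{i}}{\sigma_{i}-1}}=\left(N_{ij}\chi_{ij}\right)^{\frac{\sigma_{i}}{\sigma_{i}-1}}\overline{m}_{j} $$

with \(\chi_{ij}=1\) for \(j\in \{D,S\}\). Since \(\frac{\sigma_{i}}{\sigma_{i}-1}>1\), a larger number of varieties raises the effective input bundle more than proportionally (the love-of-variety effect exploited below).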
As is common in the literature (e.g., Ethier 1982 and Markusen 1989), we consider that intermediate inputs are symmetrically produced at a level \(\overline{m}\). This yields \(M_{iD}=N_{iD}^{\frac{\sigma_{i}}{\sigma_{i}-1}}\overline{m}_{D},\quad M_{iS}=N_{iS}^{\frac{\sigma_{i}}{\sigma_{i}-1}}\overline{m}_{S}\hbox{ and }M_{iN}=\left(N_{iN}\chi_{i}\right)^{\frac{\sigma_{i}}{\sigma_{i}-1}}\overline{m}_{N}\), where \(N_{iD}\), \(N_{iS}\) and \(N_{iN}\) are the numbers of domestic and imported (from the South or the North) varieties of intermediate goods. The production function for a variety of final good, equation (1), can thus be rewritten as:
$$ y=\varphi L^{\eta }K^{\beta }\prod\limits_{j\in \{D,N,S\}}\prod\limits_{i=1}^{I}\overline{M}_{ij}^{\alpha_{i}} \left(N_{ij}\chi_{ij}\right)^{\frac{\alpha_{i}}{\sigma_{i}-1}} $$
where \(\overline{M}_{ij}=N_{ij}\overline{m_{j}}\). Following Kasahara and Rodrigue (2008), we make the simplifying assumption that firms either source their inputs domestically or internationally (from both the North and the South). Intermediate goods imported from the North have a higher technological content, whereas inputs sourced from the South have a lower price, as input prices reflect the assumed relatively lower cost of factors of production in the South. As is standard, the first-order condition is such that prices reflect a constant mark-up, \(\rho =\frac{\phi -1}{\phi }\), over marginal costs, \(p=\frac{MC}{\rho }\), where the marginal cost of production is given by \(MC_{D}\) if the firm sources its inputs domestically and by \(MC_{F}\) if it does so on foreign markets.
$$ MC_{D}=\frac{p_{k}^{\beta}w^{\eta}\prod_{i=1}^{I}p_{iD}^{\alpha_{i}}}{\varphi\prod_{i=1}^{I}N_{iD}^{\frac{\alpha_{i}}{\sigma_{i}-1}}} $$
$$ MC_{F}=\frac{p_{k}^{\beta}w^{\eta}\prod_{j\in \{N,S\}}\prod_{i=1}^{I}p_{ij}^{\alpha_{i}}}{\varphi \prod_{i=1}^{I}\left(N_{iN}\chi_{iN}\right)^{\frac{\alpha_{i}}{\sigma_{i}-1}} \left(N_{iS}\right)^{\frac{\alpha_{i}}{\sigma_{i}-1}}} $$
where w is the wage, \(p_{k}\) is the price of capital goods and \(p_{ij}\) is the price of inputs from industry i and region j. Combining the demand faced by each firm, \(q_{j}(\varphi)=\left(\frac{P}{p_{j}(\varphi)}\right)^{\phi}C\)—where P is the aggregate final goods price index and C is aggregate expenditure on varieties of final goods—and the price function, \(p_{j}(\varphi)=\frac{MC_{j}}{\rho}\), revenues are given by \(r_{j}(\varphi)=q_{j}(\varphi)p_{j}(\varphi)\): \(r_{j}(\varphi)=\left(\frac{P}{p_{j}}\right)^{\phi-1}R\), where R = PC is the aggregate revenue of the industry, which is considered exogenous to the firm. Firm domestic profits thus simplify to \(\pi_{j}=\frac{r_{j}}{\phi}-F\), where F is the fixed production cost. Firms that import also incur a fixed import cost, \(F_{m}\). Firm export profits are given by \(\pi_{x}=\frac{r_{x}}{\phi}-F_{x}\), where \(F_{x}\) includes the production fixed costs (which include the import fixed cost for importing firms), F, as well as the export fixed costs which, as explained below, decrease in the technology/quality parameter, i.e., \(F_{x}=g\left(F, \frac{f_{x}}{\chi_{ij}}\right)\).
Using the price and revenue functions defined in the previous section, we derive the following expression for firms' export revenues:
$$ r_{x}=\Uppsi\left(\frac{\varphi \prod_{i=1}^{I} \left(N_{iN}\chi_{iN}\right)^{\frac{\alpha_{i}}{\sigma_{i}-1}} \left(N_{iS}\right)^{\frac{\alpha_{i}}{\sigma_{i}-1}}}{\prod_{j\in \{N,S\}}\prod_{i=1}^{I} p_{ij}^{\alpha_{i}}}\right)^{\phi-1} $$
where \(\Uppsi=P^{\phi-1}R\left(\rho^{-1}\left(1+\tau\right) p_{k}^{\beta}w^{\eta}\right)^{1-\phi}\), with τ being the variable export cost, P the aggregate price index of final goods and R aggregate industry revenue, all of which are exogenous to the firm. The corresponding profit can thus be written as
$$ \pi_{x}=\frac{\Uppsi}{\phi}\left(\frac{A}{\prod_{j\in \{N,S\}}\prod_{i=1}^{I}p_{ij}^{\alpha_{i}}}\right)^{\phi -1}-F_{x} $$
The tradability condition for export is given by \(\pi_{x}\left(\varphi_{x}^{\ast}\right)=0\), where \(\varphi_{x}^{\ast }\) is the Melitz (2003) productivity draw of the marginal firm serving the export market.
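For completeness, the export revenue expression above follows from a one-line substitution (standard algebra spelled out here for the reader, not text from the paper): plugging the export price \(p_{x}=\rho^{-1}(1+\tau)MC_{F}\) into the revenue function \(r_{x}=\left(P/p_{x}\right)^{\phi-1}R\) gives

$$ r_{x}=P^{\phi-1}R\left(\rho^{-1}(1+\tau)MC_{F}\right)^{1-\phi}=P^{\phi-1}R\left(\rho^{-1}(1+\tau)p_{k}^{\beta}w^{\eta}\right)^{1-\phi}\left(\frac{\varphi\prod_{i=1}^{I}\left(N_{iN}\chi_{iN}\right)^{\frac{\alpha_{i}}{\sigma_{i}-1}}\left(N_{iS}\right)^{\frac{\alpha_{i}}{\sigma_{i}-1}}}{\prod_{j\in\{N,S\}}\prod_{i=1}^{I}p_{ij}^{\alpha_{i}}}\right)^{\phi-1} $$

which is the stated expression for \(r_{x}\) once the first two factors are collected into \(\Uppsi\); the shorthand A used in the profit expression stands for the numerator \(\varphi\prod_{i=1}^{I}\left(N_{iN}\chi_{iN}\right)^{\frac{\alpha_{i}}{\sigma_{i}-1}}\left(N_{iS}\right)^{\frac{\alpha_{i}}{\sigma_{i}-1}}\).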
Appendix 2: Empirical results
(See Tables 10, 11, 12, 13, 14 and 15)
Table 10 gives examples of imported intermediate inputs and of final products, by firm sector. Intermediate inputs: bodies for specific motor vehicles; footwear outer soles of rubber, plastic or leather; other polyethers (390720); woven labels, badges and similar articles, not embroidered (580710); other articles of vulcanized rubber (401699); other textile products and articles for technical use (591190); other hollow profiles of aluminium alloys (760429); other articles of leather or of composition leather (420500); other parts and accessories of bodies for motor vehicles (870829); woven fabrics of metal thread, of metallized yarn (580900); coniferous-air (470421); other whole skins, tanned or dressed (430219); textile fabrics impregnated, coated or covered with polyurethane (590320). Final products: other bodies for other motor vehicles (870790); other footwear with uppers of leather or composition leather (640510); other footwear with uppers of leather (640399); outer soles and heels of rubber or plastics (640620); other footwear with uppers of textile materials (640520).
Table 11 (Distribution of imported inputs by sector) reports the average number of imported varieties from non-EU countries, from developed countries (DC) and from less developed countries (LDC), by sector.
Table 12 (First stage of the IV estimations of Tables 6 and 7) regresses the number of imported inputs, imported inputs DC and imported inputs LDC in t − 1 on imported inputs and input tariffs lagged three periods (for example, imported inputs (t − 3): 0.134***; input tariffs (t − 3): −0.453***), with firm and year fixed effects. The instrumental variables include lagged input tariffs and imported inputs. Dependent variables are in t − 1 and instrumental variables are lagged 3 periods. Input tariffs are computed at the firm level as an average of all tariffs on HS6 intermediate products that each firm imports. Robust standard errors clustered at the firm level are in parentheses. *** Significant at the 1 % level.
Table 13 (Alternative productivity measure) uses the log of the number of exported varieties to the EU of firm (f) in year (t) as the dependent variable, with labor productivity (t − 1) and size (t − 1) as regressors and the p-value of the Hansen test reported. All specifications include firm and year fixed effects. Robust standard errors clustered at the firm level are in parentheses. *** Significant at the 1 % level.
Table 14 (Export status and imported inputs) uses the export status of firm (f) in year (t) as the dependent variable. All specifications include firm and year fixed effects. The number of imported inputs, DC and LDC (t − 1), Size (t − 1) and TFP (t − 1) are expressed in logarithmic form. Columns (1), (2), (5) and (6) show the effect of the number of imported inputs on the probability of becoming an exporter; all the other columns present the IV estimations of the probability that importing firms become exporters. In the IV estimation, the instrumental variables include lagged input tariffs and imported inputs; variables are in t − 1 and instrumental variables are lagged 3 periods. Input tariffs are computed at the firm level as an average of all tariffs on HS6 intermediate products that each firm imports. Robust standard errors clustered at the firm level are in parentheses. ***, **, * Significant at the 1, 5 and 10 % level respectively.
Table 15 (Intensive margin and destination of export and imported inputs) uses the log of the export value and the log of the number of export destinations of firm (f) in year (t) as dependent variables. All specifications include firm and year fixed effects. The number of imported inputs, DC and LDC (t − 1), Size (t − 1) and TFP (t − 1) are expressed in logarithmic form. In columns (1) to (4) the estimation is run on the subsample of firms importing products from non-EU countries and exporting within EU countries, while in columns (5) to (8) the estimation is run on the subsample of firms exporting toward all destination countries. In the IV estimation, the instrumental variables include lagged input tariffs and imported inputs; variables are in t − 1 and instrumental variables are lagged 3 periods. Input tariffs are computed at the firm level as an average of all tariffs on HS6 intermediate products that each firm imports. Robust standard errors clustered at the firm level are in parentheses. ***, **, * Significant at the 1, 5 and 10 % level respectively.
Ackerberg, D., Caves, K., & Frazer, G. (2007). Structural identification of production functions. MPRA working Paper 38349. Germany: University Library of Munich.Google Scholar
Amiti, M., & Konings, J. (2007). Trade liberalization, intermediate inputs and productivity: Evidence from Indonesia. American Economic Review, 97(5), 1611–1638.CrossRefGoogle Scholar
Andersson, M., Johansson, S., & Lööf, H. (2008). Productivity and international trade: Firm level evidence from a small open economy. Review of World Economics/Weltwirtschaftliches Archiv, 144(4), 774–801.CrossRefGoogle Scholar
Arellano, M. (2003). Panel data econometrics. Oxford: Oxford University Press.CrossRefGoogle Scholar
Aristei, D., Castellani, D., & Franco, C. (2013). Firms' exporting and importing activities: Is there a two-way relationship?. Review of World Economics/Weltwirtschaftliches Archiv, 149(1), 55–84.CrossRefGoogle Scholar
Augier, P., Cadot, O., & Dovis, M. (2013). Imports and TFP at the firm level: The role of absorptive capacity. Canadian Journal of Economics, 46(3), 956–981.CrossRefGoogle Scholar
Aw, B. Y., Chung, S., Roberts, M. (2000). Productivity and turnover in the export market: Micro evidence from Taiwan and South Korea. The World Bank Economic Review, 14(1), 65–90.CrossRefGoogle Scholar
Bas, M. (2012). Input-trade liberalization and firm export decisions: Evidence from Argentina. Journal of Development Economics, 97(2), 481–493.CrossRefGoogle Scholar
Bas, M., & Ledezma, I. (2010). Trade integration and within-plant productivity evolution in Chile. The Review of World Economics / Weltwirtschaftliches Archiv, 146(1), 113–146.Google Scholar
Bernard, A., Blanchard, E., Van Beveren, I., & Vandenbussche, H. (2012). Carry-Along Trade. NBER Working Papers 18246. Cambridge, MA: National Bureau of Economic Research.Google Scholar
Bernard, A., Eaton, J., Jensen, J., & Kortum, S. (2003). Plants and productivity in international trade. American Economic Review, 93(4), 1268–1290.CrossRefGoogle Scholar
Bernard, A., & Jensen, J. (1995). Exporters, jobs, and wages in U.S. manufacturing: 1976–1987. Brookings Papers on Economic Activity: Microeconomics, 54–70.Google Scholar
Bernard, A., & Jensen, J. (1999). Exceptional exporter performance: Cause, effect, or both? Journal of International Economics, 47(1), 1–25.CrossRefGoogle Scholar
Bernard, A., Jensen, J., Redding, S., & Schott, P. (2007). Firms in international trade. Journal of Economic Perspectives, 21(3), 105–130.CrossRefGoogle Scholar
Biscourp, P., & Kramarz, F. (2007). Employment, skill structure, and international trade. Journal of International Economics, 72(1), 22–51.CrossRefGoogle Scholar
Broda, C., & Weinstein, D. (2006). Globalization and the gains from variety. Quarterly Journal of Economics, 121(2), 541–585.CrossRefGoogle Scholar
Castellani, D., Serti, F., & Tomasi, C. (2010). Firms in international trade: Importers' and Exporters' Heterogeneity in Italian manufacturing industry. The World Economy, 33(3), 424–457.CrossRefGoogle Scholar
Clerides, S., Lach, S., & Tybout, J. (1998). Is learning by exporting important? Micro-dynamic evidence from Colombia, Mexico, and Morocco. Quarterly Journal of Economics, 113(3), 903–947.CrossRefGoogle Scholar
Coe, D., & Helpman, E. (1995). International R&D spillovers. European Economic Review, 39(5), 859–887.CrossRefGoogle Scholar
Coe, D., Helpman, E., & Hoffmaister, A. (1997). North-South R&D spillovers. Economic Journal, 107(440), 134–149.CrossRefGoogle Scholar
Colantone, I., & Crino, R. (2011). New imported inputs, new domestic products. Development Working Papers 312, Centro Studi Luca d'Agliano: University of Milano.Google Scholar
Damijan, J., Konings, J., & Polanec, S. (2012). Import churning and export performance of multi-product firms. Open Access publications from Katholieke Universiteit Leuven urn:hdl:123456789/349304, Katholieke Universiteit Leuven.Google Scholar
Delgado, M.A., Farinas, J.C., & Ruano, S. (2002). Firm productivity and export markets: A non-parametric approach. Journal of International Economics, 57(2), 397–422.CrossRefGoogle Scholar
De Loecker, J. (2007). Do Exports Generate Higher Productivity? Evidence from Slovania. Journal of International Economics, 73(1), 69–98.CrossRefGoogle Scholar
Eaton, J., Kortum, S., & Kramarz, F. (2004). Dissecting trade: Firms, industries, and export destinations. American Economic Review, Papers and Proceedings, 93, 150–154.CrossRefGoogle Scholar
Erdem, E., & Tybout, J. (2003). Trade policy and industrial sector responses: Using evolutionary models to interpret the evidence. NBER Working Papers 9947. Cambridge, MA: National Bureau of Economic Research.Google Scholar
Ethier, W. (1982). National and international returns to scale in the modern theory of international trade. American Economic Review, 72(3), 389–405.Google Scholar
Feenstra, R. (1994). New product varieties and the measurement of international prices. American Economic Review, 84, 157–177.Google Scholar
Feenstra, R., & Hanson, G. H. (1996). Foreign investment, outsourcing and relative wages. In: R.C. Feenstra, G.M. Grossman, D.A. Irwin (Eds.), The political economy of trade policy: Papers in honor of Jagdish Bhagwati (pp. 89–127). Cambridge, MA: MIT Press.Google Scholar
Fernandes, A. (2007). Trade policy, trade volumes and plant-level productivity in Colombian manufacturing industries. Journal of International Economics, 71(1), 52–71.CrossRefGoogle Scholar
Goldberg, P., Khandelwal, A., Pavcnik, N., & Topalova, P. (2010). Imported intermediate inputs and domestic product growth: Evidence from India. Quarterly Journal of Economics, 125(4), 1727–1767.CrossRefGoogle Scholar
Gopinath, G., & Neiman, B. (2011). Trade adjustment and productivity in large crises. NBER Working Papers 16858. Cambridge, MA: National Bureau of Economic Research.Google Scholar
Grossman, G., & Helpman, E. (1991). Innovation and Growth in the Global Economy, Cambridge, MA: MIT Press.Google Scholar
Halpern, L., Koren, M., & Szeidl, A. (2011). Imported inputs and Productivity. CeFiG Working Papers 8, Center for Firms in the Global Economy.Google Scholar
Hallak, J. C., & Sivadasan, J. (2009). Firms' exporting behavior under quality constraints. NBER Working Paper 14928. Cambridge, MA: National Bureau of Economic Research.Google Scholar
Hausman, J. (2001). Mismeasured variables in econometric analysis: Problems from the right and problems from the left. Journal of Economic perspectives, 15(4), 57–67.CrossRefGoogle Scholar
Head, K., & Mayer, T. (2004). Market potential and the location of Japanese firms in the European Union. Review of Economics and Statistics, 86(4), 860–878.Google Scholar
Hijzen, A., Gorg, H., & Hine, R. C. (2005). International outsourcing and the skill structure of labour demand in the United Kingdom. Economic Journal, 115(506), 860–878.CrossRefGoogle Scholar
Hummels, D., Ishii, J., & Yi, K. M. (2001). The nature and growth of vertical specialization in world trade. Journal of International Economics, 54(1), 959–972.CrossRefGoogle Scholar
Kasahara H., & Lapham, B. (2013). Productivity and the decision to import and export: Theory and evidence. Journal of International Economics, 89(2), 297–316.CrossRefGoogle Scholar
Kasahara H., & Rodrigue, J. (2008). Does the use of imported intermediates increase productivity? Plant-Level evidence. Journal of Development Economics, 87(1), 106–118.CrossRefGoogle Scholar
Katayama, H., Lu, S., & Tybout, J. (2009). Firm-level productivity studies: Illusions and a solution. International Journal of Industrial Organization, 27(3), 403–413.CrossRefGoogle Scholar
Keller, W. (2002). Trade and the transmission of technology. Journal of Economic Growth, 7(1), 5–24.CrossRefGoogle Scholar
Kugler, M., & Verhoogen, E. (2012). Prices, plant size and product quality. Review of Economic Studies, 79(1), 307–339.CrossRefGoogle Scholar
Levinsohn, J., & Petrin, A. (2003). Estimating production functions using inputs to control for unobservables. Review of Economic Studies, 70(243), 317–341.CrossRefGoogle Scholar
Lööf, H., & Anderson, M. (2010). Imports, productivity and origin markets: The role of knowledge-intensive economies. World Economy, 33(3), 458–481.CrossRefGoogle Scholar
Markusen, J. (1989). Trade in producer services and in other specialized intermediate inputs. American Economic Review, 79(1), 85–95.Google Scholar
Matsuyama, J. (2007). Beyond icebergs: Towards a theory of biased globalization. Review of Economic Studies, 74(1), 237–253.CrossRefGoogle Scholar
Melitz, M. (2003). The impact of trade on intra-industry reallocations and aggregate industry productivity. Econometrica, 71(6), 1695–1725.CrossRefGoogle Scholar
Muûls, M., & Pisu, M. (2007). Imports and exports at the level of the firm: Evidence from Belgium. CEP Discussion Paper 0801, London School of Economics.Google Scholar
Olley, G. S., & Pakes, A. (1996). The dynamics of productivity in the telecommunications equipment industry. Econometrica, 64(6), 1263–1297.CrossRefGoogle Scholar
Pavcnik, N. (2002). Trade liberalization, exit, and productivity improvement: Evidence from Chilean plants. Review of Economic Studies, 69(1), 245–276.CrossRefGoogle Scholar
Rivera-Batiz, L., & Romer, P. (1991). International trade with endogenous technological change. European Economic Review, 35(4), 971–1001.CrossRefGoogle Scholar
Romer, P. (1987). Growth based on increasing returns due to specialization. American Economic Review, 77(2), 56–62.Google Scholar
Romer, P. (1990). Endogenous technological change. Journal of Political Economy, 98(5), 71–102.CrossRefGoogle Scholar
Schor, A. (2004). Heterogeneous productivity response to Tariff Reduction: Evidence from Brazilian manufacturing firms. Journal of Development Economics, 75(2), 373–396.CrossRefGoogle Scholar
Smeets, V., & Warzynski, F. (2010). Learning by exporting, importing or both? Estimating productivity with multi-product firms, pricing heterogeneity and the role of international trade. Department of Economics Working Papers 10-13. University of Aarhus, Aarhus School of Business.Google Scholar
Strauss-Kahn, V. (2004). The role of globalization in the within-industry shift away from unskilled workers in France. In: R. Baldwin, A. Winters (Eds.), Challenges to Globalization, Chicago: University of Chicago Press.Google Scholar
Sutton, J. (2007). Quality, trade and the moving window: The globalization process. Economic Journal, 117(524), F469–F498.CrossRefGoogle Scholar
Topalova, P., & Khandelwal, A. (2011). Trade liberalization and firm productivity: The case of India. Review of Economics and Statistics, 93(3), 995–1009.CrossRefGoogle Scholar
Vogel, A., & Wagner, J. (2010). Higher productivity in importing German manufacturing firms: Selfselection, learning from importing, or both?. Review of World Economics/Weltwirtschaftliches Archiv, 145(4), 641–665.CrossRefGoogle Scholar
Wagner, J. (2012). International trade and firm performance: A survey of empirical studies since 2006. Review of World Economics/Weltwirtschaftliches Archiv, 148(2), 235–267.CrossRefGoogle Scholar
Yi, K. M. (2003). Can vertical specialization explain the growth of world trade?. Journal of Political Economy, 111(1), 52–102.CrossRefGoogle Scholar
© Kiel Institute 2013
1. Sciences Po and Centre d'Etudes Prospectives et d'Informations Internationales (CEPII), Paris, France
2. ESCP-Europe, Paris, France
Bas, M. & Strauss-Kahn, V. Rev World Econ (2014) 150: 241. https://doi.org/10.1007/s10290-013-0175-0 | CommonCrawl |
A deep learning-aided temporal spectral ChannelNet for IEEE 802.11p-based channel estimation in vehicular communications
Xuchen Zhu1,
Zhichao Sheng1,
Yong Fang1 (ORCID: orcid.org/0000-0003-0120-7284) &
Denghong Guo1
In vehicular communications using IEEE 802.11p, estimating the channel frequency response (CFR) is a remarkably challenging task. The challenge for channel estimation (CE) lies in tracking the variations of the CFR caused by the extremely fast time-varying characteristic of the channel and the low pilot density. To tackle this problem, inspired by image super-resolution (ISR) techniques, a deep learning-based temporal spectral channel network (TS-ChannelNet) is proposed. Following the process of ISR, an average decision-directed estimation with time truncation (ADD-TT) is first presented to extend the pilot values into a tentative CFR, thus coarsely tracking its variations. Then, to make the tentative CFR values accurate, a super resolution convolutional long short-term memory (SR-ConvLSTM) network is used to track extreme channel variations by sufficiently extracting the temporal spectral correlation of the data symbols. Three representative vehicular environments are investigated to demonstrate the performance of our proposed TS-ChannelNet in terms of normalized mean square error (NMSE) and bit error rate (BER). The proposed method has an evident performance gain over existing methods, reaching about 84.5% improvement in some high signal-to-noise ratio (SNR) regions.
Vehicular communications, which form a network to support vehicle-to-vehicle (VTV) and vehicle-to-infrastructure (VTI) communications, are an essential technique of intelligent transportation systems (ITS). In recent years, a lot of attention has been devoted to developing applications in vehicular communications such as the automatic selection of routing protocols [1]. To realize such high-speed mobile communications, the IEEE 802.11p standard [2], which defines the physical (PHY) and medium-access (MAC) layers, was officially adopted in 2010. IEEE 802.11p is a modified version of 802.11a [3]; the main difference between them is that 802.11p uses half the frequency bandwidth of 802.11a, thus making signals more robust against fading and multipath propagation effects in vehicular environments [4]. What is more, it can support lower latency, achieve a higher data rate, and enhance security compared to other standards [5].
Channel estimation (CE) schemes play a crucial role in the performance of vehicular communication using 802.11p. The estimated channel response (CR) significantly affects the subsequent equalization, demodulation, and decoding; in general, the accuracy of CE determines the performance of the whole system. However, in the PHY layer, the 802.11p protocol uses only four pilot subcarriers per OFDM symbol. The pilot positions are too sparse to adequately track the variations of the channel frequency response (CFR). In addition, because the CR varies greatly in vehicular communications and 802.11p places no restriction on the packet length, the channel estimate easily becomes outdated over the entire packet.
A lot of work has been proposed to track channel variations over the frame duration for vehicular communication under the IEEE 802.11p standard. Current methods focus on data pilot-aided successive (DPAS) estimation, whose key idea is to use the demapped data signals as additional pilots [6–9]. The performance gain, however, is not evident, especially in the high signal-to-noise ratio (SNR) region, because of the error propagation caused by accumulated noise in the iterative process. Recently, deep learning (DL) has shown impressively promising prospects: DL can extract the inherent characteristics of signals and has been applied to channel estimation [10–14]. However, due to the deep fading caused by the high Doppler shift in vehicular environments, the above DL-based approaches degrade in CE accuracy. The main objective of this paper is to estimate the CFR precisely by integrating DPAS with DL.
In this paper, a deep learning-aided temporal spectral channel network (TS-ChannelNet) for 802.11p-based channel estimation under high-speed vehicular scenarios is proposed to track the variations of the CFR. In general, the pilot, taken as a low-resolution (LR) version of the CR, is used to recover a high-resolution (HR) version of the CR via TS-ChannelNet. Our presented TS-ChannelNet consists of two phases. Initially, a coarse CR is restored from the pilot by leveraging averaging decision-directed estimation with time truncation (ADD-TT). By averaging in both the time and frequency domains, ADD-TT mitigates part of the error propagation through time truncation based on decision feedback. Afterwards, a neural network (NN) architecture named super resolution convolutional long short-term memory (SR-ConvLSTM) is introduced to make the estimated CR more accurate. SR-ConvLSTM leverages the power of the convolutional long short-term memory (ConvLSTM), which exploits temporal spectral correlation to combat deep fast fading in extremely time-varying environments. The obtained CR is tailored for vehicular communications. Simulation results demonstrate that our method outperforms previous methods under representative vehicular scenarios. Our contributions are summarized as follows.
The CFR is modeled as an image: the pilot is considered as an LR version of the image and the estimated CFR is viewed as its HR version. Then, TS-ChannelNet, which includes pilot-based interpolation and DL-based restoration, is presented to obtain the HR version of the CFR.
An improved interpolation based on DD-TT, called ADD-TT, is used to extend the pilot values into a reasonable initial coarse CFR. ADD-TT mitigates part of the error propagation through time truncation based on decision feedback and further improves the performance of the follow-up SR-ConvLSTM.
A new super resolution technique-based architecture named SR-ConvLSTM is designed. It restores the HR version of the CFR by capturing the strong variations of the channel.
An extensive ablation experiment is conducted to verify that SR-ConvLSTM powerfully extracts the temporal spectral correlation of the signal to track the variations of the channel.
The rest of this paper is organized as follows. Section 2 illustrates related work in detail. Section 3 introduces the system model, the channel model, and the benchmark algorithm. Section 4 presents our temporal spectral deep learning-based channel estimation scheme. Section 5 verifies the advantages of TS-ChannelNet through simulation results. Section 6 concludes the paper.
In this section, existing work on CE for vehicular communications using the 802.11p standard is first elaborated, and the shortcomings of present approaches are discussed. Furthermore, DL applied to the communication field, as a promising direction, is surveyed.
In recent years, mobile ad hoc networks (MANETs) have been successfully applied in many fields, such as health care [15, 16], broadcast encryption [17], vehicular streaming services [18], and urban management [19]. CE has been investigated actively because it determines the performance of the system at the PHY layer [7]. Current CE work focuses on DPAS methods, such as STA [6] and CDP [7]. The key part of these algorithms is to consider the demapped data signals as additional pilots; the estimated CR is then iteratively used to construct data pilots for the following orthogonal frequency division multiplexing (OFDM) symbol. Mehrabi [9] introduced decoded data bits into DPAS to suppress the noise caused by demodulation, but the performance gain is still marginal in the high SNR region. To further improve the accuracy of the CFR, Awad [8] transformed the CFR into the time domain and performed a truncation operation, thus removing demodulation errors. However, because the iteratively accumulated noise is not eliminated completely, these schemes still suffer from error propagation, especially in rapidly time-varying vehicular channels.
Compared to conventional schemes, DL has been shown to powerfully extract the inherent characteristics of signals [20] and has thus proved well suited to a variety of problems in the wireless communications field [21–25]. An FCNN was used for channel estimation and pilot design in [11, 12], which initially demonstrated the ability of DL to improve the accuracy of CE. However, the scheme is not suited to vehicular communications using 802.11p, because the unlimited packet length in 802.11p leads to a rapid increase in the number of neurons and the FCNN thus tends to overfit. Neumann [13] modeled the channel as a conditionally Gaussian distribution given a set of random hyperparameters, which are learned via a convolutional neural network (CNN). Soltani [14] viewed channel estimation as an image super resolution problem where the pilot is a low-resolution sampled version of the channel and the time-frequency CR is the image to be recovered. But the performance of this method still degrades in fast time-varying environments. The goal of this paper is to integrate DPAS with DL to track the variations of the channel, thus estimating the CFR precisely.
System model
In this section, the structure of IEEE 802.11p under vehicular communications is first presented. Then, the channel model for the vehicular wireless environment employed in this paper is briefly introduced. Subsequently, ChannelNet, used as the benchmark algorithm, is elaborated.
Structure of IEEE 802.11p
The IEEE 802.11p physical layer is based on OFDM, which boosts spectrum utilization by turning a large serial data stream into parallel data streams on orthogonal subcarriers.
In 802.11p, the received signal is converted into parallel data and fed to the fast Fourier transform (FFT), thus obtaining the following output in the frequency domain.
$$\begin{array}{@{}rcl@{}} Y(t,k) = H(t,k)X(t,k) + Z(t,k). \end{array} $$
where Y(t,k) and X(t,k) represent the received and transmitted OFDM data symbols in the frequency domain, respectively, H(t,k) represents the CFR of the wireless channel, and Z(t,k) is additive white Gaussian noise (AWGN). t represents the OFDM symbol index within a frame, with 1 ≤ t ≤ T, where T is the number of symbols per frame; k denotes the subcarrier index, with 1 ≤ k ≤ K, where K is the number of subcarriers per frame. How to estimate H more accurately is the goal of this paper.
IEEE 802.11p defines a 75 MHz band at 5.9 GHz, divided into 7 channels including one control channel (CCH) and six service channels. Safety messages are transmitted through the CCH when emergency events happen [26]. The IEEE 802.11p standard defines a comb structure of pilot tones for channel estimation, located on subcarriers -21, -7, 7, and 21, as shown in Fig. 1. The initial channel estimate is obtained from the known training symbols transmitted in the preamble. Due to the highly time-varying channel in vehicular environments and the fact that the frame length is unlimited in the IEEE 802.11p standard, the channel estimate for each packet easily becomes outdated over the entire packet duration. Therefore, how to design a channel estimation scheme that tracks the variations of the vehicular channel is a challenging problem.
The structure of TS-ChannelNet
Channel model for vehicular communications
Due to the relative motion of the transmitter and receiver, Doppler spectral spread or broadening appears in vehicular communication, and the relatively high velocity causes a fast time-varying CR. To capture the joint Doppler-delay characteristics of vehicular communications, the tapped-delay line (TDL) model is adopted with the parameters of [27]. In [27], the taps are characterized by a Doppler power spectral density due to Rayleigh fading. The channel impulse response is given by (2)
$$\begin{array}{@{}rcl@{}} h(t,\tau) = \sum\limits_{l = 1}^{L} {{\phi_{l}}(t)\delta (\tau - {\tau_{l}}(t))}. \end{array} $$
where ϕl(t) represents the fading coefficient of the lth path, L denotes the number of resolvable multipath components, δ is the impulse function, and τl(t) denotes the time delay of the lth path.
In this paper, three representative models from [27] are considered, i.e., VTV Expressway Oncoming (VTVEO), VTV Urban Canyon Oncoming (VTVUC), and RTV Expressway (RTVE). In the VTV Expressway Oncoming scenario, the speeds of the receiver and the transmitter are the highest of the three scenarios: the speed is 100 km/h and the Doppler shift is about 1200 Hz. The VTV Urban Canyon Oncoming scenario is of medium difficulty for channel estimation, with a Doppler shift of 400–500 Hz at a moving velocity of about 32 km/h. In summary, the models cover typical standard vehicular environments with different velocities (low/high) and Doppler shifts ranging from 400 to 1200 Hz.
Benchmark algorithm: ChannelNet
In [14], a deep learning-based channel estimation scheme named ChannelNet was implemented for short frames in slowly time-varying environments. By viewing the CR as an image, the pilot values are used to restore (estimate) the CR via image super resolution techniques.
ChannelNet consists of two phases. First, the isolated pilot values are extended to an initial CR via Gaussian interpolation. Second, these CR values are fed into a super resolution convolutional neural network (SRCNN) [28] followed by a denoising convolutional neural network (DnCNN) [29]; the NN generates the estimated CR. The authors investigated the performance of ChannelNet in relatively slowly time-varying environments. In our experiments, ChannelNet degrades further for high-velocity mobile communications. This is due to the unreliability of the initial interpolation method, coupled with the fact that a CNN does not have enough capacity to uncover the temporal spectral correlation of the CR, so the CR becomes outdated over the frame duration.
Proposed method
In this section, we first describe the pre-processing stage of TS-ChannelNet, which applies an ADD-TT-based interpolation scheme to the pilot values. Then, the NN architecture named SR-ConvLSTM is presented to track the variations of the vehicular channel. Afterwards, the training process of TS-ChannelNet, which consists of ADD-TT followed by SR-ConvLSTM, is illustrated.
Interpolation based on ADD-TT
In this subsection, pilot-based interpolation via ADD-TT is implemented to obtain a coarse CR. It extends the few pilot values to initial CR values that are treated as low-resolution images.
Usually, least squares (LS) estimation uses the two identical preambles sent at the beginning of each received IEEE 802.11p packet to estimate a tentative CR. Y(1,k) and Y(2,k) are the first two received long training symbols, and X(1,k), X(2,k) are the two identical, predefined long training symbols transmitted in the frequency domain. To obtain the CFR for all subcarriers, the received Y(1,k) and Y(2,k) are averaged and divided by X(1,k) as
$$\begin{array}{@{}rcl@{}} {\hat{H}_{LS}}(1,k) = \frac{{Y(1,k) + Y(2,k)}}{{2X(1,k)}}, \end{array} $$
where \(\hat{H}_{LS}(1,k)\) represents the LS channel estimate at the first time slot on the kth subcarrier. LS estimation assumes that the channel is stationary. However, vehicular channels vary rapidly, so the performance of LS estimation degrades significantly.
Next, decision-directed channel estimation is presented. It is based on the correlation of adjacent symbols. The symbols are equalized by the previous channel estimate as follows:
$$\begin{array}{@{}rcl@{}} \hat{S}(t,k) = \frac{{Y(t,k)}}{{\hat{H}(t - 1,k)}}, \end{array} $$
where \(\hat{S}(t,k)\) denotes the equalized symbol at the tth time slot on the kth subcarrier and \(\hat{H}(t-1,k)\) is the previous channel estimate. Based on the high correlation between adjacent data symbols, the current (tth) CFR is assumed to be unchanged with respect to the previous one; the errors caused by this assumption are alleviated by the subsequent demodulation. Hence, the previous estimate \(\hat{H}(t-1,k)\) is used for equalization. Note that the first CR estimate is the LS channel estimate from (3). The decision feedback is then used to update the channel estimate according to (5)
$$\begin{array}{@{}rcl@{}} \hat{H}(t,k) = \frac{{Y(t,k)}}{{\hat{X}(t,k)}}, \end{array} $$
where \(\hat{X}(t,k)\) represents the demodulated OFDM data symbol obtained from \(\hat{S}(t,k)\). The errors in the estimated CFR are alleviated by demapping \(\hat{S}(t,k)\) to the corresponding constellation point \(\hat{X}(t,k)\). Thus, the data symbols can provide useful channel information to construct data pilots.
However, \(\hat{H}(t,k)\) still contains noise and errors accumulated in the iterative process through error propagation, especially in the low SNR region. Error propagation occurs because data symbols may be incorrectly demapped, so the error gradually accumulates over the iterations. To reduce this negative impact on decision-directed channel estimation, an averaging method based on time-domain truncation is applied. The scaled version of the FFT matrix, V, is first calculated as
$$\begin{array}{@{}rcl@{}} V = \sqrt M {F_{M}}(:,1:L + 1), \end{array} $$
where M is the FFT size (the dimension of the FFT matrix), FM is the FFT matrix, and L is the number of retained time-domain taps. \(\hat{H}(t,k)\) is converted to \(\hat{h}(t,k)\) in the time domain using the inverse fast Fourier transform (IFFT), and time truncation is applied to curb noise. V is the scaled matrix used to convert the truncated time-domain response back to the frequency domain. That is, \(\hat{H}(t,k)\) is transformed into the time domain to remove the taps containing most of the noise, as follows
$$\begin{array}{@{}rcl@{}} \hat{H}_{f}(t,k) = V\hat{h}(t,1:L), \end{array} $$
where \(\hat{h}(t,1:L)\) represents the retained time-domain CR at the tth time slot and \(\hat{H}_{f}(t,k)\) denotes the scaled frequency-domain CR. The demodulation errors are equivalent to adding noise to \(\hat{h}\). After conversion from the frequency domain to the time domain, the noise is uniformly distributed across the taps, so \(\hat{h}\) is truncated in the time domain to alleviate the effect of noise caused by demodulation errors. Even with time truncation, some cumulative errors remain in \(\hat{H}_{f}(t,k)\) due to residual noise. Thus, \(\hat{H}_{f}(t,k)\) is averaged in the time and frequency domains to smooth the CR according to (8) and (9):
$$\begin{array}{@{}rcl@{}} \hat{H}_{s}(t,k) = \sum\limits_{\lambda = - \beta }^{\lambda = \beta} {\frac{1}{{2\beta + 1}}} {\hat{H}_{f}}{(t,k + \lambda)}, \end{array} $$
where 2β+1 is the number of averaged subcarriers. The high correlation between adjacent subcarriers \(\hat{H}_{f}(t, k + \lambda)\) is exploited to further improve the accuracy of the estimates. Averaging in the time domain is then calculated as follows:
$$\begin{array}{@{}rcl@{}} \hat{H}_{f}(t,k) = (1 - \alpha)\hat{H}_{f}(t - 1,k) + \alpha \hat{H}_{s}(t,k), \end{array} $$
where \(\hat{H}_{f}(t,k)\) denotes the output of the ADD-TT scheme at the tth time slot on the kth subcarrier, and α is the coefficient used to update the CR. Owing to the high correlation across successive OFDM symbols, the weighted sum of the previous and current estimated CFR improves performance. α and β are parameters that depend on knowledge of the vehicular environment, which is generally unavailable in practice. It is observed in [6] that the best performance of time- and frequency-domain averaging is achieved with α = 0.5 and β = 2; thus, α is fixed to 0.5 and β is set to 2 in this paper.
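To make the ADD-TT recursion concrete, the following is a minimal NumPy sketch of the steps described by Eqs. (3)–(9). It assumes 4-QAM hard-decision demapping, that the first two rows of Y hold the two identical long training symbols, and a unitary-FFT reading of the scaling in Eq. (6); all function and variable names are ours and purely illustrative, not the authors' implementation.

```python
import numpy as np

def qam4_demap(s):
    # Hard-decision demapping of equalized symbols to the nearest
    # 4-QAM (QPSK) constellation point with unit average power.
    return (np.sign(s.real) + 1j * np.sign(s.imag)) / np.sqrt(2)

def add_tt(Y, X_preamble, L=16, alpha=0.5, beta=2):
    """Average decision-directed channel estimation with time truncation.

    Y          : (T, K) received frequency-domain symbols; rows 0 and 1
                 are assumed to be the two identical long training symbols.
    X_preamble : (K,) known transmitted long training symbol.
    L          : number of time-domain taps kept after truncation.
    alpha, beta: time / frequency averaging parameters of Eqs. (8)-(9).
    """
    T, K = Y.shape
    H = np.zeros((T, K), dtype=complex)
    # LS estimate from the two identical preambles, Eq. (3)
    H_prev = (Y[0] + Y[1]) / (2.0 * X_preamble)
    H[0] = H[1] = H_prev
    # V of Eq. (6): sqrt(K) times the unitary DFT matrix, first L columns
    V = np.fft.fft(np.eye(K))[:, :L]
    for t in range(2, T):
        S_eq = Y[t] / H_prev                    # equalization, Eq. (4)
        X_hat = qam4_demap(S_eq)                # demapping to data pilots
        H_dd = Y[t] / X_hat                     # decision-directed update, Eq. (5)
        h_time = np.fft.ifft(H_dd)              # go to the time domain
        H_f = V @ h_time[:L]                    # keep L taps and go back, Eq. (7)
        # frequency-domain smoothing over 2*beta+1 subcarriers, Eq. (8)
        w = np.ones(2 * beta + 1) / (2 * beta + 1)
        H_s = np.convolve(H_f, w, mode='same')
        # time-domain weighted average with the previous estimate, Eq. (9)
        H[t] = (1 - alpha) * H_prev + alpha * H_s
        H_prev = H[t]
    return H
```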
The architecture of SR-ConvLSTM
Because CNN-based ChannelNet cannot adequately uncover the inherent temporal spectral correlation, an NN architecture named SR-ConvLSTM, based on ConvLSTM, is proposed. It models the temporal spectral correlation of adjacent symbols to estimate the CR and is suitable for non-stationary scenarios.
Channel estimation for vehicular communications using IEEE 802.11p is viewed as a super resolution problem. Considering the time-varying channel, the LSTM, which can extract the time correlation of a series, is introduced to tackle this problem. In [30], the authors show that the LSTM successfully handles channel state information (CSI) feedback for time-varying communications. Adding a convolution operation to the LSTM yields the ConvLSTM, which is more effective for feature extraction when the time series data are images. The ConvLSTM [31] originates from the LSTM; the difference is that the added convolution operation not only captures the temporal relationship but also extracts features as convolutional layers do. In this way, the temporal spectral characteristics are obtained via SR-ConvLSTM, which is built on ConvLSTM.
The details of the proposed SR-ConvLSTM are presented next. SR-ConvLSTM is composed of five layers, including ConvLSTM and batch normalization (BN) layers. Since this paper views channel estimation as an image super resolution problem, and inspired by the architecture of [28], a ConvLSTM layer followed by BN is chosen as the basic structure and repeated to track the strong variations of the CFR. The ConvLSTM captures the temporal spectral correlation between adjacent data symbols, and BN enables SR-ConvLSTM to converge. The specific structure is shown in Table 1. The first layer is a ConvLSTM with 64 filters of size 9 × 9 followed by the rectified linear unit (ReLU) activation
$$\begin{array}{@{}rcl@{}} R(x) = \max (0,x), \end{array} $$
Table 1 Architecture of the SR-ConvLSTM
where x is the input of the ConvLSTM. When the activation value of a neuron falls into the negative half region, the gradient is 0, which encourages sparsity during training. The second layer is BN, which addresses slow convergence and exploding gradients during training. In fact, we find that if BN is removed from SR-ConvLSTM, the network does not converge; the reason may lie in the complex distribution of the channel, which requires the BN operation. In addition, BN speeds up training and improves the accuracy of the model. The third layer is a ConvLSTM with 32 filters of size 1 × 1 followed by ReLU activation. The fourth layer is BN. The last layer applies a single filter of size 5 × 5 × 5 to reconstruct the output. Notably, to strike a balance between performance and complexity, TS-ChannelNet omits the DnCNN used in ChannelNet.
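As a concrete reading of Table 1, the following tf.keras sketch stacks the five layers described above (ConvLSTM → BN → ConvLSTM → BN → Conv3D). The tensor layout — blocks of n symbols × K subcarriers × stacked real/imaginary parts — together with the 'same' padding and the MSE loss are our own assumptions; only the filter counts, kernel sizes, ReLU activations, and the Adam learning rate of 0.001 come from the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_sr_convlstm(n_symbols, n_subcarriers, lr=1e-3):
    # Input block: n adjacent OFDM symbols, each a (subcarriers x 2) map
    # holding the real and imaginary parts of the coarse ADD-TT estimate.
    inp = layers.Input(shape=(n_symbols, n_subcarriers, 2, 1))
    x = layers.ConvLSTM2D(64, (9, 9), padding='same', activation='relu',
                          return_sequences=True)(inp)           # layer 1
    x = layers.BatchNormalization()(x)                           # layer 2
    x = layers.ConvLSTM2D(32, (1, 1), padding='same', activation='relu',
                          return_sequences=True)(x)              # layer 3
    x = layers.BatchNormalization()(x)                           # layer 4
    out = layers.Conv3D(1, (5, 5, 5), padding='same')(x)         # layer 5
    model = models.Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss='mse')
    return model
```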
The relationship between the input and output of the proposed SR-ConvLSTM is represented as
$$\begin{array}{@{}rcl@{}} \hat{H} = f\left(\theta ;{\hat{H}_{\text{seq}}}\right). \end{array} $$
where θ denotes the parameters of SR-ConvLSTM, \(\hat{H}\) is the final estimated CR, \(\hat{H}_{\text{seq}}\) is the input sequence of coarse CR blocks, and f is the nonlinear function determined by θ.
The architecture of ChannelNet must be revised whenever the frame length changes; worse, the whole ChannelNet must then be retrained from scratch, which is non-trivial in practice. In SR-ConvLSTM, the CR is divided into blocks containing n data symbols. Hence, SR-ConvLSTM handles arbitrary frame lengths without changing the input shape of the NN. In conclusion, SR-ConvLSTM is more robust than the SRCNN that forms the building block of ChannelNet.
Training of TS-ChannelNet
In this paper, estimating the CFR at the receiver is viewed as a super resolution problem that includes pilot-based interpolation and DL-based restoration [14]. Thus, the proposed TS-ChannelNet is composed of ADD-TT and SR-ConvLSTM. In the first phase, the pilot values hp are extended into a coarse CFR whose dimension is identical to that of the estimate \(\hat{H}\). In the second phase, SR-ConvLSTM, parameterized by θ, turns the coarse CFR into its high-resolution version via DL. The relationship between the input and output of TS-ChannelNet can be represented as:
$$\begin{array}{@{}rcl@{}} \hat{H} = f(\theta ;{h_{p}}) = {f_{\theta} }({f_{ADD - TT}}({h_{p}});\theta) \end{array} $$
where fθ and fADD−TT denote the network function and the interpolation function, respectively.
ADD-TT in the first phase comprises decision direction, time truncation, and weighted averaging. First, decision direction assumes that the tth CFR is highly correlated with the previous one, so \(\hat{H}(t-1,k)\) is used as a pseudo \(\hat{H}(t,k)\) to calculate the data pilots; the errors caused by this iterative operation are alleviated by demapping the data pilots to constellation points. Second, errors accumulated through wrong demodulation are equivalent to added noise; this noise is uniformly distributed across the taps when moving from the frequency domain to the time domain [8], so truncation is applied to curb it. Third, to make full use of the pilots, \(\hat{H}(t,k)\) is averaged in the frequency and time domains. In general, ADD-TT uses average decision-directed estimation with time truncation to turn the pilots into a coarse estimate \(\hat{H}\).
SR-ConvLSTM in the second phase is introduced to restore the high-resolution version of \(\hat{H}\). First, training SR-ConvLSTM requires extracting the real and imaginary parts of \(\hat{H}\) and stacking them. The stacked \(\hat{H}\) is then divided into several blocks so that SR-ConvLSTM can reveal the temporal spectral correlation; SR-ConvLSTM has impressive power to capture the intrinsic correlation of the signal in an end-to-end manner. Finally, the outputs of SR-ConvLSTM are concatenated to obtain the final estimated CFR. In addition, the Adam optimization algorithm [11] is chosen to make SR-ConvLSTM converge. To measure the accuracy of the estimates, the normalized mean square error (NMSE) between \(\hat{H}\) and H is used:
$$\begin{array}{@{}rcl@{}} \text{NMSE} = \frac{1}{N}\sum\limits_{i = 1}^{N} {\frac{{E\left[{{\left| {H - \hat{H}} \right|}^{2}}\right]}}{{E\left[{{\left| H \right|}^{2}}\right]}}} \end{array} $$
where N is the frame length. In addition, the bit error rate (BER) is also used to demonstrate the performance of TS-ChannelNet. The TS-ChannelNet algorithm is summarized in Algorithm 1.
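For reference, a small NumPy helper that evaluates the NMSE above, reading the expectations as averages over the subcarriers of each OFDM symbol (our interpretation of the formula):

```python
import numpy as np

def nmse(H_true, H_est):
    # H_true, H_est: (N, K) complex CFR over N symbols and K subcarriers.
    num = np.sum(np.abs(H_true - H_est) ** 2, axis=1)
    den = np.sum(np.abs(H_true) ** 2, axis=1)
    return np.mean(num / den)
```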
Simulation and results
In this section, we first introduce the simulation settings, including the parameters of IEEE 802.11p and the DL-based model. The simulation results then demonstrate the strength of the proposed TS-ChannelNet.
Simulation setup and parameters
The IEEE 802.11p end-to-end PHY is implemented for simulation. The NMSE and BER are taken as the performance measures of the scheme. The SNR range for simulation is from 0 to 30 dB with 4-QAM (quadrature amplitude modulation). The velocities range from 32 to 104 km/h, and a frame length of 60 OFDM symbols is chosen.
TensorFlow with a graphics processing unit (GPU) is employed for our approach. The learning rate is 0.001 and the dropout rate is 0.2. The batch size is 128 and the number of epochs is 60. The training, validation, and test set sizes are 32,000, 8000, and 4000, respectively. The two models are trained at an SNR of 22 dB with the above hyperparameters for the three different environments. The specific simulation parameters are listed in Table 2.
Table 2 The parameters of simulation
Figures 2, 3, and 4 compare the performance of TS-ChannelNet and the other schemes with maximum Doppler shifts ranging from 300 to 1200 Hz. DD-TT outperforms ChannelNet in the high SNR region, and the presented scheme consistently performs better than the other approaches. This is because the proposed scheme estimates the CR by integrating pilot knowledge, data knowledge, and the correlation of adjacent symbols. TS-ChannelNet remains competent under high-velocity communication, which is challenging in real vehicular settings.
NMSE with 4QAM modulation, maximum Doppler = 1200 Hz, and 60 OFDM symbols (VTV Expressway Oncoming)
NMSE with 4QAM modulation, maximum Doppler = 300 Hz, and 60 OFDM symbols (RTV Expressway)
NMSE with 4QAM modulation, maximum Doppler = 500 Hz, and 60 OFDM symbols (VTV Urban Canyon)
Figure 5 illustrates the ideal BER, obtained with perfect knowledge of the CR and no noise. The performance of our method approaches this ideal case, which means that TS-ChannelNet can recover the CR almost exactly. TS-ChannelNet shows an even greater advantage as the deep fading in vehicular communications becomes more severe. Through the performance under representative vehicular models, we demonstrate that TS-ChannelNet is robust and achieves evident gains in terms of both BER and NMSE.
BER with 4QAM modulation, maximum Doppler = 500 Hz, and 60 OFDM symbols (VTV Urban Canyon)
To further investigate the proposed method, an ablation analysis for the fast time-varying environment is introduced. Because Gaussian interpolation (GI) is used in ChannelNet, we take GI, DD-TT, and ADD-TT in turn as the interpolation method in the first phase of TS-ChannelNet while keeping SR-ConvLSTM fixed. We refer to these approaches as GI-(SR-ConvLSTM), DD-TT-(SR-ConvLSTM), and ADD-TT-(SR-ConvLSTM). In addition, ChannelNet is taken as the benchmark algorithm.
Figure 6 plots the NMSE of TS-ChannelNet with different interpolation methods in the high mobility scenario, with ChannelNet as a reference. TS-ChannelNet with GI clearly outperforms ChannelNet with GI, which suggests that the proposed SR-ConvLSTM has a better capacity to extract the temporal spectral correlation of the data symbols than the NN structure of ChannelNet. It is also observed that the interpolation method affects the performance of the subsequent SR-ConvLSTM, and that the proposed ADD-TT outperforms DD-TT, especially at high SNR values.
NMSE for ablation analysis with 4QAM modulation, maximum Doppler = 1200 Hz and 60 OFDM symbols (VTV Expressway Oncoming)
The percentage improvement of our method over the compared methods is also presented in Table 3. For the three representative channel models, this percentage is computed in terms of NMSE at SNR = 30 dB. The three models are RTV Expressway (RTVE), VTV Expressway Oncoming (VTVEO), and VTV Urban Canyon (VTVUC), as mentioned before. The proposed method clearly delivers a considerable performance gain, and the gain increases as the maximum Doppler shift grows. This demonstrates that the proposed method tracks the variations of the CFR more adequately than the compared methods.
Table 3 The improved percentage of TS-ChannelNet with respect to the compared methods
Because the CFR in vehicular communications varies strongly, it is difficult to track channel variations, and current DPAS methods suffer from error propagation caused by accumulated noise. In this paper, a TS-ChannelNet-based channel estimation method is proposed for the fast time-varying scenario using IEEE 802.11p. In this scheme, the CR is treated as an image and TS-ChannelNet estimates the CR by leveraging the pilots. TS-ChannelNet consists of two phases. Pilot values are first extended to a coarse tentative CR via ADD-TT-based interpolation; the estimated CR is then divided into sequences containing n adjacent symbols. Afterwards, SR-ConvLSTM takes the divided CR as input and generates the recovered CR. Simulation results demonstrate that the proposed method achieves prominent performance gains over previous schemes in high-speed scenarios, and further experiments verify that both building blocks of TS-ChannelNet contribute clearly to the channel estimation accuracy. The proposed TS-ChannelNet sheds light on how DL can be successfully applied to CE in high-velocity environments.
In this paper, the NN is trained separately for each of the corresponding representative environments; hence, the generalization ability of the network needs to be further improved. How to use transfer learning to overcome this problem will be our future work.
Abbreviations
IEEE: Institute of Electrical and Electronics Engineers
ISR: Image super resolution
CR: Channel response
CFR: Channel frequency response
SNR: Signal-to-noise ratio
VTV: Vehicle-to-vehicle
VTI: Vehicle-to-infrastructure
PHY: Physical layer
CE: Channel estimation
STA: Spectral temporal averaging
CDP: Constructed data pilot
DL: Deep learning
NN: Neural network
FCNN: Fully connected neural network
CNN: Convolutional neural network
LSTM: Long short-term memory
ITS: Intelligent transportation system
OFDM: Orthogonal frequency division multiplexing
AWGN: Additive white Gaussian noise
NMSE: Normalized mean square error
CSI: Channel state information
ReLU: Rectified linear unit
BN: Batch normalization
BER: Bit error rate
QAM: Quadrature amplitude modulation
GPU: Graphics processing unit
SRCNN: Super resolution convolutional neural network
DnCNN: Denoising convolutional neural network
DD-TT: Decision-directed estimation with time truncation
SR-ConvLSTM: Super resolution convolutional long short-term memory
ADD-TT: Average decision-directed estimation with time truncation
TS-ChannelNet: Temporal spectral channel network
I. Wahid, A. U. A. Ikram, M. Ahmad, F. Ullah, An improved supervisory protocol for automatic selection of routing protocols in environment-aware vehicular ad hoc networks. Int. J. Distrib. Sensor Netw.14(11) (2018).
IEEE Standard for Information technology– Local and metropolitan area networks– Specific requirements– Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications Amendment 6: Wireless Access in Vehicular Environments. IEEE Std 802.11p-2010 (Amendment to IEEE Std 802.11-2007 as amended by IEEE Std 802.11k-2008, IEEE Std 802.11r-2008, IEEE Std 802.11y-2008, IEEE Std 802.11n-2009, and IEEE Std 802.11w-2009), 1–51 (2010). IEEE Xplore.
IEEE Standard for Information technology– Local and metropolitan area networks– Specific requirements– Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications: Further Higher Data Rate Extension in the 2.4 GHz Band. IEEE Std 802.11g-2003 (Amendment to IEEE Std 802.11, 1999 Edn. (Reaff 2003) as amended by IEEE Stds 802.11a-1999, 802.11b-1999, 802.11b-1999/Cor 1-2001, and 802.11d-2001), 1–104 (2003). IEEE Xplore.
W. Lin, M. Li, K. Lan, C. Hsu, A comparison of 802.11a and 802.11p for V-to-I communication: a measurement study. Int. Conf. Heterog. Netw. Qual. Reliab. Secur. Robustness, 559–570 (2010).
S. Benkirane, M. Benaziz, in 2018 IEEE 5th International Congress on Information Science and Technology (CiSt). Performance evaluation of IEEE 802.11p and IEEE 802.16e for vehicular ad hoc networks using simulation tools (Marrakech, 2018), pp. 573–577.
J. A. Fernandez, K. Borries, L. Cheng, B. V. K. Vijaya Kumar, D. D. Stancil, F. Bai, Performance of the 802.11p physical layer in vehicle-to-vehicle environments. IEEE Trans. Veh. Tech.61(1), 3–14 (2012).
Z. Zhao, X. Cheng, M. Wen, L. Yang, B. Jiao, Constructed data pilot-assisted channel estimators for mobile environments. IEEE Trans. Intell. Transp. Syst.16(2), 947–957 (2015).
M. M. Awad, K. G. Seddik, A. Elezabi, Low-complexity semi-blind channel estimation algorithms for vehicular communications using the IEEE 802.11p standard. IEEE Trans. Intell. Transp. Syst.20(5), 1739–1748 (2019).
S. Baek, I. Lee, C. Song, A new data pilot-aided channel estimation scheme for fast time-varying channels in IEEE 802.11p systems. IEEE Trans. Veh. Tech.68(5), 5169–5172 (2019).
D. Gündüz, P. de Kerret, N. D. Sidiropoulos, D. Gesbert, C. R. Murthy, M. van der Schaar, Machine learning in the air. IEEE J. Sel. Areas Commun.37(10), 2184–2199 (2019).
Y. Yang, F. Gao, X. Ma, S. Zhang, Deep learning-based channel estimation for doubly selective fading channels. IEEE Access. 7:, 36579–36589 (2019).
H. Ye, G. Y. Li, B. Juang, Power of deep learning for channel estimation and signal detection in OFDM systems. IEEE Wirel. Commun. Lett.7(1), 114–117 (2018).
D. Neumann, T. Wiese, W. Utschick, Learning the MMSE channel estimator. IEEE Trans. Signal Process.66(11), 2905–2917 (2018).
M. Soltani, V. Pourahmadi, A. Mirzaei, H. Sheikhzadeh, Deep learning-based channel estimation. IEEE Commun. Lett.23(4), 652–655 (2019).
F. Ullah, A. H. Abdullah, O. Kaiwartya, J. Lloret, M. M. Arshad, EETP-MAC: energy efficient traffic prioritization for medium access control in wireless body area networks. Telecommun. Syst. (2017).
F. Ullah, Z. Ullah, S. Ahmad, I. U. Islam, S. U. Rehman, J. Iqbal, Traffic priority based delay-aware and energy efficient path allocation routing protocol for wireless body area network. J. Ambient. Intell. Humanized Comput. (2019).
Y. Yang, Broadcast encryption based non-interactive key distribution in MANETs. J. Comput. Syst. Sci.80(3), 533–545 (2014).
H. Kung, C. Chen, M. Lin, T. Wu, Traffic priority based delay-aware and energy efficient path allocation routing protocol for wireless body area network. J. Int. Tech.20(7), 2083–2097 (2019).
Y. Chen, S. Weng, W. Guo, N. Xiong, A game theory algorithm for intra-cluster data aggregation in a vehicular ad hoc network. Sensors (Basel). 16(2), 245 (2016).
X. Ke, J. Zou, Y. Niu, End-to-end automatic image annotation based on deep CNN and multi-label data augmentation. IEEE Trans. Multimedia. 21(8), 2093–2106 (2019).
H. Cheng, Z. Xie, L. Wu, Z. Yu, R. Li, Data prediction model in wireless sensor networks based on bidirectional LSTM. EURASIP J. Wirel. Commun. Netw.2019:, 203 (2019).
S. Zhong, C. Jia, K. Chen, P. Dai, A novel steganalysis method with deep learning for different texture complexity images. Multimedia Tools Appl.78:, 8017–8039 (2019).
L. Liang, H. Ye, G. Yu, G. Y. Li, Deep-learning-based wireless resource allocation with application to vehicular networks. Proc. IEEE. 108(2), 341–356 (2020).
T. Fu, C. Wang, N. Cheng, Deep learning based joint optimization of renewable energy storage and routing in vehicular energy network. IEEE Internet Things J., 1–1 (2020).
S. Khan Tayyaba, H. A. Khattak, A. Almogren, M. A. Shah, I. Ud Din, I. Alkhalifa, M. Guizani, 5G vehicular network resource management for improving radio access through machine learning. IEEE Access. 8:, 6792–6800 (2020).
I. Chu, P. Chen, W. Chen, in 2012 IEEE 75th Vehicular Technology Conference (VTC Spring). An IEEE 802.11p based distributed channel assignment scheme considering emergency message dissemination (Yokohama, 2012), pp. 1–5.
G. Acosta-Marum, M. A. Ingram, Six time- and frequency- selective empirical channel models for vehicular wireless lans. IEEE Veh. Tech. Mag.2(4), 4–11 (2007).
C. Dong, C. C. Loy, K. He, X. Tang, Image super-resolution using deep convolutional networks. IEEE Trans. Patt. Anal. Mach. Intell.38(2), 295–307 (2016).
K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Trans. Image Process.26(7), 3142–3155 (2017).
T. Wang, C. Wen, S. Jin, G. Y. Li, Deep learning-based CSI feedback approach for time-varying massive MIMO channels. IEEE Wirel. Commun. Lett.8(2), 416–419 (2019).
X. Shi, Z. Chen, H. Wang, D. -Y. Yeung, W. -K. Wong, W. -c. Woo, Convolutional LSTM network: a machine learning approach for precipitation nowcasting. ArXiv abs/1506.04214 (2015).
This work is supported by the National Natural Science Foundation of China (No.61673253, 61901254).
Shanghai Institute for Advanced Communication and Data Science, Key laboratory of Specialty Fiber Optics and Optical Access Networks, Shanghai University, Shanghai, 200444, China
Xuchen Zhu, Zhichao Sheng, Yong Fang & Denghong Guo
The study was conceived and designed by ZX, who also performed the experiments. The manuscript was reviewed and revised by FY, SZ, and GD. All authors read and approved the manuscript.
Correspondence to Yong Fang.
Zhu, X., Sheng, Z., Fang, Y. et al. A deep learning-aided temporal spectral ChannelNet for IEEE 802.11p-based channel estimation in vehicular communications. J Wireless Com Network 2020, 94 (2020). https://doi.org/10.1186/s13638-020-01714-4
Vehicular communications
IEEE 802.11p | CommonCrawl |
What are the mathematical models for force, acceleration and velocity?
In mechanics, the space can be described as a Riemann manifold. Forces, then, can be defined as vector fields of this manifold. Accelerations are linear functions of forces, so they are covector fields. But what about velocities and many other kinds of vectors?
Of course velocities are not forces, so I don't think it is right to reuse vector fields of this manifold. But does this mean that this manifold has many different tangent spaces at each point?
This sounds very strange to me. I think the problem is that mathematical models have no physical units; maybe we can somehow create a many-sorted manifold to accommodate units?
classical-mechanics differential-geometry dimensional-analysis vector-fields
Qmechanic♦
elflyao
$\begingroup$ "In mechanics, the space can be described as a Riemann manifold."...well, that depends. Hamiltonian mechanics usually describes physical system as symplectic manifolds. Also, related question: Is force a co- or contravariant vector? $\endgroup$ – ACuriousMind♦ Mar 31 '15 at 15:58
$\begingroup$ Answering this cleanly will require a bit more than what you're providing. Are we talking about a Newtonian mechanics in a not-necessarily Euclidean space? Are we allowing for a relativistic dynamics here? Are we doing relativistic dynamics, but assuming that the metric can be decomposed into some sort of $-dt^{2} + f(t)g_{ij}dx^{i}dx^{j}$ ? Is this even a metric space? This sounds like a bunch of technical complaining, but the answer is actually different in all of these cases. $\endgroup$ – Jerry Schirmer Mar 31 '15 at 21:48
Velocities and Spatial Accelerations are twists and Forces and Momenta are wrenches. Both are screws (two-vectors) with one vector free and the other a spatial field. All of them transform with the same laws and their interactions have many dual properties.
NOTE: See "A treatise on the theory of screws", Stawell R Ball, https://archive.org/details/theoryscrews00ballrich
The proportionality tensor transforming twists to wrenches is the 6×6 spatial mass matrix converting motion into momentum and acceleration into forces.
For example below I am composing a velocity twist and a momentum wrench. Do you spot the similarities?
$$\begin{aligned} {\hat v} &= \begin{pmatrix} {\bf \omega} \\{\bf r} \times {\bf \omega} \end{pmatrix} & {\hat p} &= \begin{pmatrix} {\bf p} \\{\bf r} \times {\bf p} \end{pmatrix}\end{aligned} $$
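A tiny NumPy illustration of the two screws above; the numerical values are arbitrary and only meant to show the shared structure.

```python
import numpy as np

def velocity_twist(omega, r):
    # Twist of a body rotating with angular velocity omega about a point r.
    return np.concatenate([omega, np.cross(r, omega)])

def momentum_wrench(p, r):
    # Wrench of a linear momentum p acting along a line through r.
    return np.concatenate([p, np.cross(r, p)])

omega = np.array([0.0, 0.0, 2.0])   # rad/s
p = np.array([1.0, 0.0, 0.0])       # kg m/s
r = np.array([0.0, 1.0, 0.0])       # m
print(velocity_twist(omega, r))     # [0. 0. 2. 2. 0. 0.]
print(momentum_wrench(p, r))        # [1. 0. 0. 0. 0. -1.]
```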
ja72
In classical mechanics a system is described by a Lagrangian $\mathscr{L}\colon TQ\to \mathbb{R}$, with $Q$ being the configuration space and $TQ$ its tangent bundle, namely the union over $q\in Q$ of all tangent spaces $T_qQ$: $TQ = \cup_q T_qQ$. A local chart on $Q$ looks like $(q_1, \ldots, q_n)$, the $q_k$ being the degrees of freedom of the system. The Lagrangian is then $\mathscr{L}\equiv\mathscr{L}\big(q(t), v(t)\big)$ and the equations of motion are: $$ \frac{d}{dt}\frac{\partial \mathscr{L}}{\partial v^{\mu}} - \frac{\partial \mathscr{L}}{\partial q^{\mu}}=0. $$ The solution is a collection of $\big(q^{\mu}(t), v^{\mu}(t)\big)$ that live on $TQ$; if we make the further requirement that, on those solutions, $v=\dot{q}$, then the path on $TQ$ projects uniquely onto a path on $Q$, whose flow is given by the velocity fields.
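As a purely illustrative sketch of the formalism above, here is the Euler–Lagrange machinery applied in SymPy to the simplest one-degree-of-freedom Lagrangian (a harmonic oscillator on $Q=\mathbb{R}$); nothing here is specific to the original question.

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')(t)     # coordinate on Q
v = q.diff(t)               # velocity coordinate on T_q Q

# Lagrangian L: TQ -> R for a 1-D harmonic oscillator
L = sp.Rational(1, 2) * m * v**2 - sp.Rational(1, 2) * k * q**2

# d/dt (dL/dv) - dL/dq = 0
eom = sp.Eq(sp.diff(L, v).diff(t) - sp.diff(L, q), 0)
print(sp.simplify(eom))     # m*q''(t) + k*q(t) = 0
```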
To directly answer your questions:
Forces, then, can be defined as vector fields of this manifold. Accelerations are linear functions of forces, so they are covector fields. But what about velocities and many other kinds of vectors?
Wrong. Positions and velocities are coordinates of local charts $\phi$ from the tangent bundle $\phi\colon U\subset TQ\to\mathbb{R}$: as such, they transform contra-variantly. Forces, in the above formalism, are related to the conjugate momenta $p_{\mu}=\partial\mathscr{L}/\partial{v^{\mu}}$ and hence transform co-variantly, with the inverse matrix.
See above. Also, manifolds just have one tangent space at each point, defined as the set of all directional derivatives calculated in that point.
I think the problem is that math models have no physical units, maybe somehow we can create a many-sorted manifold to accommodate units?
That has absolutely nothing to do with units.
gented
TR21-001 | 1st January 2021 21:37
Computation Over the Noisy Broadcast Channel with Malicious Parties
TR21-001 Authors: Klim Efremenko, Gillat Kol, Dmitry Paramonov, Raghuvansh Saxena
Publication: 3rd January 2021 23:04
Broadcast Network, Communication complexity, Malicious Parties
We study the $n$-party noisy broadcast channel with a constant fraction of malicious parties. Specifically, we assume that each non-malicious party holds an input bit, and communicates with the others in order to learn the input bits of all non-malicious parties. In each communication round, one of the parties broadcasts a bit to all other parties, and the bit received by each party is flipped with a fixed constant probability (independently for each recipient). How many rounds are needed?
Assuming there are no malicious parties, Gallager gave an $\mathcal{O}(n \log \log n)$-round protocol for the above problem, which was later shown to be optimal. This protocol, however, inherently breaks down in the presence of malicious parties.
We present a novel $n \cdot \tilde{\mathcal{O}}\left(\sqrt{\log n}\right)$-round protocol, that solves this problem even when almost half of the parties are malicious. Our protocol uses a new type of error correcting code, which we call a locality sensitive code and which may be of independent interest. Roughly speaking, these codes map "close" messages to "close" codewords, while messages that are not close are mapped to codewords that are very far apart.
We view our result as a first step towards a theory of property preserving interactive coding, i.e., interactive codes that preserve useful properties of the protocol being encoded. In our case, the naive protocol over the noiseless broadcast channel, where all the parties broadcast their input bit and output all the bits received, works even in the presence of malicious parties. Our simulation of this protocol, unlike Gallager's, preserves this property of the original protocol. | CommonCrawl |
December 2016, 21(10): 3709-3722. doi: 10.3934/dcdsb.2016117
The steady state solutions to thermohaline circulation equations
Chao Xing, Ping Zhou and Hong Luo
College of Mathematics and Software Science, Sichuan Normal University, Chengdu, Sichuan 610066, China
Received: December 2015; Revised: May 2016; Published: November 2016
In this article, we study the existence and regularity of steady state solutions to the thermohaline circulation equations. First, we obtain a sufficient condition for the existence of weak solutions to the equations by the acute angle theory of weakly continuous operators. Second, we prove the existence of strong solutions to the equations by ADN theory and an iteration procedure. Furthermore, we study the generic properties of the solutions by the Sard-Smale theorem and the existence of classical solutions by the ADN theorem.
Keywords: thermohaline circulation equations, steady state solutions, existence, regularity.
Mathematics Subject Classification: Primary: 35Q35, 35A01; Secondary: 34K2.
Citation: Chao Xing, Ping Zhou, Hong Luo. The steady state solutions to thermohaline circulation equations. Discrete & Continuous Dynamical Systems - B, 2016, 21 (10) : 3709-3722. doi: 10.3934/dcdsb.2016117
Stressed portfolio optimization with semiparametric method
Chuan-Hsiang Han1 &
Kun Wang ORCID: orcid.org/0000-0003-4258-17232
Tail risk is a classic topic in stressed portfolio optimization to treat unprecedented risks, while the traditional mean–variance approach may fail to perform well. This study proposes an innovative semiparametric method consisting of two modeling components: the nonparametric estimation and copula method for each marginal distribution of the portfolio and their joint distribution, respectively. We then focus on the optimal weights of the stressed portfolio and its optimal scale beyond the Gaussian restriction. Empirical studies include statistical estimation for the semiparametric method, risk measure minimization for optimal weights, and value measure maximization for the optimal scale to enlarge the investment. From the outputs of short-term and long-term data analysis, optimal stressed portfolios demonstrate the advantages of model flexibility to account for tail risk over the traditional mean–variance method.
Several historical episodes, such as the financial crisis and COVID-19, have posed new challenges for investment management in the face of unknown and unprecedented tail risks. A large body of econometric research examines the validation of various financial models and risk measures, such as value-at-risk (VaR) and conditional value-at-risk (CVaR), for risk management (Jorion 2007). We extend the use of these risk measures (Artzner et al. 1999) to portfolio optimization using a novel semiparametric modeling method under stressed scenarios. The scaling effect of stressed portfolios is also a concern; risk-sensitive value measures (Miyahara 2010) are adopted to maximize the optimal scale for a given portfolio strategy.
The proposed semiparametric modeling method is constructive and consists of two estimation procedures: the nonparametric kernel method for marginal distributions and a parametric copula method for their joint distribution. This semiparametric method builds up a more complex dependence between portfolio constituents than traditional Gaussian models that can be used to exploit tail risks.
From both experimental and theoretical perspectives, we find that the proposed optimal stressed portfolio and the semiparametric method perform better than Markowitz's mean–variance method (Markowitz 1952). From an experimental perspective, our implementation of the stressed portfolio optimization relies on a rolling window approach and checks its robustness. In addition, from a theoretical perspective, the risk-sensitive value measure (RSVM) is equipped with more properties for general heavy-tail distribution than Markowitz's mean–variance model, thus making mean–variance a special case in the risk-sensitive value measure.
The remainder of this paper is organized as follows: the "Literature review" section provides a literature review, particularly on the nonparametric kernel method and the parametric copula method. "The semiparametric method" section constructs non-Gaussian distributed portfolios using the proposed semiparametric method, which has two parts. First, we construct the marginal distribution of each constituent asset by nonparametric estimation, using cross-validation to obtain the optimal bandwidth of the kernel function, together with its perturbation analysis. The second part estimates the parameters of the copula functions by full maximum likelihood estimation (MLE). The "Stressed portfolio optimization and its scaling effect" section solves for the optimal weights of the portfolio under the semiparametric method by minimizing risk measures such as VaR and CVaR; the scaling effect is then optimized by maximizing risk-sensitive value measures. The "Empirical studies and data analysis" section presents the data set, extensive empirical results, and a comparison between the stressed portfolio and the traditional mean–variance method. We conclude the paper in the "Conclusion" section.
There are two major directions for tail risk estimation: modeling the return distribution and capturing the volatility process. For the former direction, various techniques are employed for modeling the entire return distribution or just the tail areas, including known parametric distribution, kernel density approximation, and extreme value theory (Tsay 2010). The latter direction mostly relies on discrete-time volatility models, such as the exponentially weighted moving average model (EWMA) and autoregressive general conditional heteroskedasticity (GARCH) model to capture the volatility process. See Jondeau et al. (2007) for further details.
Traditional modeling methods in financial management often rely on the Gaussian distribution by virtue of closed-form solutions for mean–variance analysis (Fu et al. 2021), optimal risk measures, and so on. There are also other risk measures, such as the entropic value-at-risk (Mills et al. 2017). However, stylized facts such as heavy tails and asymmetry in empirical distributions expose extra risk when the initial distributional assumptions fail. In contrast, we relax the Gaussian assumption using a semiparametric method, which yields flexible distributions that describe more details and properties of unknown tail risks.
Distinct from previous studies on financial modeling, the aim of this study is to build the joint distribution of portfolios in high dimensions without distributional assumptions on each underlying asset. This construction of the joint distribution is based on nonparametric estimation (Robinson 1983) and the copula method (Cherubini et al. 2004, 2011). Nonparametric estimation with a kernel function is adopted to estimate the probability density function of each underlying asset, and the parametric copula method is used to describe the joint distribution of the portfolio's assets. Among nonparametric estimation studies, several investigate the optimal kernel functions and bandwidth (Robinson 1983). There is no universally accepted approach to selecting the optimal kernel function, which in any case has little influence on the estimation results; we therefore concentrate on the selection of the optimal bandwidth using cross-validation theory (Horová et al. 2012), and a bias estimate for the perturbed optimal bandwidth is derived. Regarding the parametric copula method, there are two primitive families of copula functions: elliptical and Archimedean copulas (Nelsen 1999). The multivariate copula method builds up the dependence among portfolio constituents.
Notably, the proposed semiparametric modeling method is static, in contrast to dynamic multivariate models such as the GARCH-DCC model (Engle 2002) in discrete time or stochastic volatility matrix models in continuous time (Mancino et al. 2017; Han 2018). The static model can be quite complex in its structure, whereas the dynamic models improve prediction capability. The stressed portfolio optimization problem under the static model is the focus of this study. Owing to the complexity of financial modeling, computational schemes such as optimization solvers and Monte Carlo estimation by simulation are used. There are several techniques for solving portfolio optimization models (Esfahanipour and Khodaee 2021), including particle swarm optimization (PSO) and others. Motivated by Babazadeh and Esfahanipour (2019), the genetic algorithm (GA) is used here to solve the risk measure minimization problems using MATLAB's optimization package. The dynamic counterpart, in contrast, requires solving high-dimensional nonlinear HJB-type partial differential equations in continuous time (Fleming and Soner 2006).
Marginal distribution: nonparametric kernel method
Nonparametric estimation uses kernel functions to smooth the shape of the distribution obtained from discrete raw data into a continuous one. The degree of smoothness depends only weakly on the choice of kernel function, whereas it depends strongly on the bandwidth of the kernel.
Kernel function
There are many choices of kernel functions, such as the Gaussian, exponential, and Cauchy kernels. However, the Gaussian kernel,
$$K\left(x\right)=\frac{1}{\sqrt{2\pi }}{\mathrm{e}}^{-\frac{{x}^{2}}{2}},$$
is commonly used in practice because the choice of kernel does not influence the asymptotics of the estimation as significantly as the bandwidth does (Horová et al. 2012).
Definition 2.1
Suppose that there are \(n\) observed values (or returns) denoted by vector \(X\). The kernel estimator (Rosenblatt-Parzen) \(\widehat{f}\) at point \(x\in R\) is defined as:
$$\widehat{f}\left(x;\mathrm{h}\right)=\frac{1}{nh}\sum_{i=1}^{n}K\left(\frac{x-{X}_{i}}{h}\right)=\frac{1}{n}\sum_{i=1}^{n}{K}_{h}\left(x-{X}_{i}\right),$$
where \({K}_{h}\left(t\right)=\frac{1}{h}K\left(\frac{t}{h}\right), h>0.\) The positive number \(h\) is a smoothing parameter called the bandwidth of the kernel function.
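A minimal NumPy sketch of the Rosenblatt–Parzen estimator in Definition 2.1 with a Gaussian kernel follows. The paper selects h by cross-validation; the leave-one-out likelihood criterion used below is one standard choice and is our assumption, not necessarily the exact criterion used by the authors.

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def kde(x_grid, X, h):
    """Rosenblatt-Parzen estimate of f at the points x_grid."""
    u = (x_grid[:, None] - X[None, :]) / h
    return gaussian_kernel(u).mean(axis=1) / h

def loo_cv_bandwidth(X, h_grid):
    """Pick h maximizing the leave-one-out log-likelihood (one common CV rule)."""
    n = len(X)
    scores = []
    for h in h_grid:
        K = gaussian_kernel((X[:, None] - X[None, :]) / h) / h
        np.fill_diagonal(K, 0.0)              # drop the i = j term
        f_loo = K.sum(axis=1) / (n - 1)
        scores.append(np.sum(np.log(f_loo + 1e-300)))
    return h_grid[int(np.argmax(scores))]
```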
Joint distribution: copula method
The copula method (Nelsen 1999, Cherubini et al. 2004) provides a useful tool for describing the dependence between variables. Two families of copula functions are often considered: elliptical and Archimedean copulas. Unlike the nonparametric kernel function, the copula method is parametric and contributes to the joint distribution of the portfolio from its multiple marginal distributions (Bouyé et al. 2000; Cambanis et al. 1981; Cherubini and Luciano 2001).
An m-dimensional copula is a distribution function on \({[\mathrm{0,1}]}^{m}\) with standard uniform marginal distributions.
$$C\left({\varvec{u}}\right)= C \left({u}_{1} {,u}_{2} ,\dots ,{ u}_{m}\right),$$
where \(C\) is called a copula function.
The copula function \(C\) is a mapping of the form \(C{: [\mathrm{0,1}]}^{m}\to \left[\mathrm{0,1}\right].\) There are two major types of elliptical copulas: the Gaussian and Student's t copulas. Both are associated with the class of elliptical distributions.
The multivariate dispersion copula
The m-dimensional normal or Gaussian copula is derived from the m-dimensional Gaussian distribution. The Gaussian copula is generated from a set of correlated normally distributed variates \({v}_{1},{v}_{2}\)…\({v}_{m}\) using the Cholesky decomposition; these are then transformed to uniform variables \({u}_{1}=\Phi \left({v}_{1}\right), {u}_{2}=\Phi ({v}_{2})\)…\({u}_{m}=\Phi ({v}_{m})\), where \(\Phi\) is the cumulative standard normal. The vector \(({u}_{1},{u}_{2},\dots ,{u}_{m})\) is therefore a draw from the Gaussian copula.
The marginal distribution of each variable is standard normal, and the joint normal distribution can be defined as
$${C}_{R}^{Gaussian}\left({u}_{1} {,u}_{2} ,\dots ,{ u}_{m}\right)\equiv {\Phi }_{m}\left({\Phi }^{-1}\left({u}_{1}\right),\dots ,{\Phi }^{-1}\left({u}_{m}\right);R\right),$$
where \(R\) is the m-dimensional correlation matrix, and \({\Phi }_{m}\) is the cumulative multivariate normal distribution function in dimension \(m\).
For the multivariate Gaussian copula (MGC), let \(R\) be a symmetric, positive definite matrix with \(\mathrm{diag}\left(R\right)={(\mathrm{1,1}\dots 1)}^{T},\) and the corresponding density function of (2.4) is,
$${c}_{R}^{Gaussian}\left(\Phi \left({x}_{1}\right),\dots ,\Phi \left({x}_{m}\right)\right)=\frac{\frac{1}{{\left(2\pi \right)}^\frac{m}{2}{\left|R\right|}^\frac{1}{2}}{exp}\left({{-\frac{1}{2}X}^{T}R}^{-1}X\right)}{\prod_{j=1}^{m}\left(\frac{1}{\sqrt{2\pi }}\mathrm{exp}\left(-\frac{1}{2}{x}_{j}^{2}\right)\right)}$$
where \(R\) is the covariance matrix of vector \(X,\) and \(\left|\mathrm{R}\right|\) is the determinant of \(\Sigma .\) Let \({u}_{j}={\Phi }\left({x}_{j}\right);\) therefore, \({x}_{j}={\Phi }^{-1}\left({u}_{j}\right).\) This copula density function can be rewritten as given below:
$${c}_{R}^{Gaussian}\left({u}_{1} {,u}_{2} ,\dots ,{ u}_{m}\right)=\frac{1}{{\left|R\right|}^\frac{1}{2}}exp\left({{-\frac{1}{2}\varsigma^{T}(R}^{-1}-I)}\varsigma\right)$$
where \(\varsigma=({\Phi }^{-1}\left({u}_{1}\right),\dots ,{\Phi }^{-1}\left({u}_{m}\right))\).
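The Gaussian copula density above is straightforward to evaluate numerically; the following sketch computes its logarithm row-wise for a matrix U of pseudo-observations in (0,1)^m (function and variable names are ours):

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_logpdf(U, R):
    """Row-wise log-density of the Gaussian copula with correlation matrix R."""
    z = norm.ppf(U)                                   # ς = Φ^{-1}(u)
    m = R.shape[0]
    sign, logdet = np.linalg.slogdet(R)
    A = np.linalg.inv(R) - np.eye(m)
    quad = np.einsum('ij,jk,ik->i', z, A, z)          # ς^T (R^{-1} - I) ς
    return -0.5 * logdet - 0.5 * quad
```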
Let \({\varvec{\mu}}={({\mu }_{1},{\mu }_{2}\dots {\mu }_{m})}^{T}\) be a position parameter, \({\varvec{\upsigma}}={({\upsigma }_{1},{\upsigma }_{2}\dots {\upsigma }_{m})}^{T}\) be a dispersion parameter, and \(\mathrm{R}\) be a correlation matrix. The multivariate dispersion copula (MDC) density is as given below:
$$f\left(X;{\varvec{\mu}},{\varvec{\sigma}},R\right)=\frac{1}{{\left|R\right|}^\frac{1}{2}}exp({{-\frac{1}{2}\varsigma}^{T}(R}^{-1}-I)\varsigma)\prod_{j=1}^{m}{{f}_{j}\left({x}_{j};{\mu }_{j},{\sigma }_{j}\right)},$$
where \({\varsigma}_{j}={\Phi }^{-1}\left({F}_{j}\left({x}_{j};{\mu }_{j},{\upsigma }_{j}\right)\right)\), and \({f}_{j}\left({x}_{j};{\mu }_{j},{\upsigma }_{j}\right)=\frac{\partial {F}_{j}\left({x}_{j};{\mu }_{j},{\upsigma }_{j}\right)}{\partial {x}_{j}}\) for every set of c.d.f. \({F}_{j}\left({x}_{j};{\mu }_{j},{\upsigma }_{j}\right)\).
The multivariate student's t copula
Similarly, the m-dimensional Student's t copula is derived from the m-dimensional Student's t distribution. Student's t copulas have heavier tails than Gaussian copulas. Let \({\mathbf{T}}_{m}\)(\({\epsilon }_{1},\dots ,{\epsilon }_{m};{\varvec{R}},v\)) denote the joint Student's t distribution and \(\mathbf{T}(x)\) the univariate Student's t distribution. The Student's t copula is defined as
$${C}_{R,v}\left({u}_{1} {,u}_{2} ,\dots ,{ u}_{m}\right)\equiv {\mathbf{T}}_{m}\left({\mathbf{T}}^{-1}\left({u}_{1}\right),\dots ,{\mathbf{T}}^{-1}\left({u}_{m}\right);{\varvec{R}},v\right),$$
and the multivariate Student's t copula (MTC) can be written explicitly as
$$\begin{aligned}{C}_{R,v}\left({u}_{1},{u}_{2},\dots ,{u}_{m}\right)&={\mathbf{T}}_{m}\left({t}_{v}^{-1}\left({u}_{1}\right),{t}_{v}^{-1}\left({u}_{2}\right),\dots ,{t}_{v}^{-1}\left({u}_{m}\right);{\varvec{R}},v\right)\\ &=\int\limits_{-\infty}^{t_v^{-1}({u}_{1})}\cdots\int\limits_{-\infty}^{t_v^{-1}({u}_{m})} \frac{\Gamma\left[\frac{v+m}{2}\right]}{\Gamma\left[\frac{v}{2}\right]{\left(v\pi\right)}^{m/2}{\left|R\right|}^{1/2}} {\left(1+\frac{1}{v}{X}^{T}{R}^{-1}X\right)}^{-\frac{v+m}{2}}d{x}_{1}\dots d{x}_{m},\end{aligned}$$
where \({t}_{v}^{-1}\) is the inverse of the univariate cumulative distribution function of Student's t with \(v\) degrees of freedom. Using the standard representation, the copula density for multivariate Student's t copula (Cherubini et al. 2004) is:
$${c}_{R,v}\left({u}_{1},{u}_{2},\dots ,{u}_{m}\right)={\left|R\right|}^{-\frac{1}{2}}\frac{\Gamma \left[\frac{v+m}{2}\right]}{\Gamma \left[\frac{v}{2}\right]}{\left(\frac{\Gamma \left[\frac{v}{2}\right]}{\Gamma \left[\frac{v+1}{2}\right]}\right)}^{m}\frac{{\left(1+\frac{1}{v}{\varsigma}^{T}{R}^{-1}\varsigma\right)}^{-\frac{v+m}{2}}}{\prod_{j=1}^{m}{\left(1+\frac{{\varsigma}_{j}^{2}}{v}\right)}^{-\frac{v+1}{2}}},$$
where \({\varsigma}_{j}={t}_{v}^{-1}\left({u}_{j}\right).\)
The Archimedean copula
In contrast to elliptical copulas, Archimedean copulas make it easy to deduce parameterized multivariate distributions from the same class of marginal distributions. Given a function \(\phi (x)\) as the generator of the Archimedean copula, an Archimedean copula is induced by
$$C \left({u}_{1},{u}_{2},\dots ,{u}_{m}\right)\equiv \phi \left({\phi }^{-1}\left({u}_{1}\right)+\dots +{\phi }^{-1}\left({u}_{m}\right)\right).$$
Three well-known Archimedean copulas are illustrated below with the following density functions (Table 1).
Table 1 The Archimedean copulas
Although the Archimedean copula requires only one parameter in the estimation, the partial distribution function is not easy to calculate in high dimensions for the joint density function. Thus, we choose the MGC to build up the joint distribution in "Stressed portfolio optimization and its scaling effect" section for ease of computation.
The semiparametric method
The semiparametric method combines the nonparametric kernel and the parametric copula methods to describe the marginal distribution of each underlying asset and the joint distribution of the portfolio, respectively. Details about the formulation of the nonparametric and parametric components are discussed in the preceding subsections. We focus on the estimation procedures described below, including a bias estimation for the optimal bandwidth.
Optimal bandwidth choice
As mentioned in "Kernel function" section, the choice of bandwidth is not only pivotal as it determines the smoothness of the estimation but also plays a significant role in the weight function on a kernel. In addition, bandwidth choice is a crucial problem in kernel smoothing because no universally accepted approach exists to this problem yet.
One approach, based on cross-validation theory, aims to minimize the mean square error (MSE) between the estimated and true densities. Thus, an appropriate \(h\) should determine the degree of smoothness and control the MSE between the kernel estimated density \({f}_{\widehat{p}}\left(x\right)\) and its true density \({f}_{p}\left(x\right)\).
The variance, bias, and MSE of the estimator are defined as
$$\begin{aligned}{\mathrm{Var}}_{p}({f}_{\widehat{p}}(X))&={E}_{p}{[{f}_{\widehat{p}}\left(X\right)-{E}_{p}[{f}_{\widehat{p}}\left(X\right)]]}^{2},\\{\mathrm{Bias}}_{p}({f}_{\widehat{p}}(X))&={E}_{p}\left[{f}_{\widehat{p}}\left(X\right)\right]-{f}_{p}\left(x\right),\\ {\mathrm{MSE}}_{p}\left({f}_{\widehat{p}}\left(X\right)\right)&={E}_{p}{\left[{f}_{\widehat{p}}\left(X\right)-{f}_{p}\left(X\right)\right]}^{2}.\end{aligned}$$
The MSE decomposes into the variance and the squared bias:
$${\mathrm{MSE}}_{p}\left({f}_{\widehat{p}}(X)\right)={\mathrm{Var}}_{p}\left({f}_{\widehat{p}}(X)\right)+{{\mathrm{Bias}}_{p}}^{2}\left({f}_{\widehat{p}}(X)\right).$$
Assume the density estimator \({f}_{\widehat{p}}(X)\) has a bounded second derivative \({f}_{\widehat{p}}^{\prime\prime}\left(X\right)\); a Taylor expansion then yields
$${\mathrm{Bias}}_{p}\left({f}_{\widehat{p}}\left(X\right)\right)=\frac{1}{2}{h}^{2}{f}_{\widehat{p}}^{\prime\prime}\left(x\right){k}_{2}+o\left({h}^{2}\right),$$
$${\mathrm{Var}}_{p}\left({f}_{\widehat{p}}\left(X\right)\right)=\frac{{f}_{\widehat{p}}\left(X\right){k}_{1}}{nh}+o\left(\frac{1}{nh}\right),$$
where \({k}_{1}=\int {K}^{2}\left(u\right)du,\) \({k}_{2}=\int {u}^{2}K\left(u\right)du.\) See Horová et al. (2012) for details.
From (3.2) and (3.3), we derive the MSE of the kernel density estimators as
$${\mathrm{MSE}}_{p}\left(x\right)=\frac{s\left({f}_{\widehat{p}}(x)\right)\left[4\int {K}^{2}\left(u\right)du+n{h}^{5}T\left({f}_{\widehat{p}}(x)\right){[\int {u}^{2}K\left(u\right)du]}^{2}\right]}{4nh}+o\left(\frac{1}{nh}\right)+o\left({h}^{4}\right),$$
where \(\mathrm{T}\left({f}_{\widehat{p}}(x)\right)=\int \frac{{{f}_{\widehat{p}}^{\prime\prime}}^{2}\left(x\right)}{{f}_{\widehat{p}}\left(x\right)}dx,\) \({\mathrm{s}}_{p}\left({f}_{\widehat{p}}(x)\right)={E}_{p}\left[{f}_{\widehat{p}}^{2}\left(x\right)\right].\)
The optimal bandwidth is defined from the truncated \({\mathrm{MSE}}_{p}\left(x\right)\) taking only the first leading order term as,
$${h}_{opt}=\mathrm{arg}\,min\, MSE\left(h\right).$$
In this approach, the optimal bandwidth can be obtained by some straightforward calculations:
$${h}_{opt}\approx {n}^{-\frac{1}{5}}{{k}_{1}}^\frac{1}{5}{{k}_{2}}^{-\frac{2}{5}}{\left[\mathrm{T}\left({f}_{\widehat{p}}\left(x\right)\right)\right]}^{-\frac{1}{5}},$$
where \({k}_{1}=\int {K}^{2}\left(u\right)du,\) \({k}_{2}=\int {u}^{2}K\left(u\right)du.\) For Gaussian kernel \({k}_{1}=\sqrt{\frac{1}{4\pi }} ,{k}_{2}=1\); therefore,
$${h}_{opt}={n}^{-\frac{1}{5}}{\left(4\pi \right)}^{-\frac{1}{10}}{\left[\mathrm{T}\left({f}_{\widehat{p}}\left(x\right)\right)\right]}^{-\frac{1}{5}}.$$
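The following Python sketch turns this formula into a plug-in computation of \({h}_{opt}\) for the Gaussian kernel, approximating \(T({f}_{\widehat{p}})\) numerically from pilot estimates of the density and its second derivative. The pilot bandwidth and the numerical integration scheme are our own simplifying assumptions, not part of the original procedure.

```python
import numpy as np
from scipy.stats import norm

def t_functional(f, f2, grid):
    """Numerically approximate T(f) = integral of f''(x)^2 / f(x) dx
    from pilot estimates of the density f and its second derivative f''."""
    mask = f > 1e-12                      # avoid division by ~0 in the tails
    return np.trapz(f2[mask]**2 / f[mask], grid[mask])

def optimal_bandwidth(samples, grid, pilot_h):
    """Plug-in estimate of h_opt = n^(-1/5) (4*pi)^(-1/10) T(f)^(-1/5)
    for the Gaussian kernel, using a pilot bandwidth for f and f''."""
    samples = np.asarray(samples, dtype=float)
    grid = np.asarray(grid, dtype=float)
    n = samples.size
    t = (grid[:, None] - samples[None, :]) / pilot_h
    phi = norm.pdf(t)
    f = phi.sum(axis=1) / (n * pilot_h)                        # pilot density
    f2 = ((t**2 - 1.0) * phi).sum(axis=1) / (n * pilot_h**3)   # pilot f''
    T = t_functional(f, f2, grid)
    return n**(-0.2) * (4.0 * np.pi)**(-0.1) * T**(-0.2)
```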
Bias estimation for the perturbed optimal bandwidth
Here, we provide a perturbation analysis and show that the error of the Gaussian kernel function deviating from the optimal bandwidth is uniformly bounded.
Lemma 3.2
Given the Gaussian kernel function \({K}_{{h}_{opt}}\left(t\right)=\frac{1}{{h}_{opt}}K\left(\frac{t}{{h}_{opt}}\right)\) with the optimal bandwidth choice \({h}_{opt}>0,\) for any estimation error \(\upvarepsilon >0\), there exists an independent constant \(M,\) such that \(\left|{K}_{{h}_{opt}}\left(t\right)- {K}_{{h}_{opt}+\upvarepsilon }\left(t\right)\right|<M\upvarepsilon\), for \(t\in R\).
This means that the bias between the optimal kernel and its perturbed density is uniformly bounded.
The proof uses a Taylor expansion and the uniform boundedness of the normal density.
Introducing the telescope expression, we obtain
$$\begin{aligned}\left|{K}_{{h}_{opt}}\left(t\right)- {K}_{{h}_{opt}+\upvarepsilon }\left(t\right)\right| &=\left|\frac{1}{{h}_{opt}}K\left(\frac{t}{{h}_{opt}}\right)-\frac{1}{{h}_{opt}+\upvarepsilon }K\left(\frac{t}{{h}_{opt}}\right)+\frac{1}{{h}_{opt}+\upvarepsilon }K\left(\frac{t}{{h}_{opt}}\right)-\frac{1}{{h}_{opt}+\upvarepsilon }K\left(\frac{t}{{h}_{opt}+\upvarepsilon }\right)\right|\\ &\le \left|\left(\frac{1}{{h}_{opt}}-\frac{1}{{h}_{opt}+\upvarepsilon }\right)K\left(\frac{t}{{h}_{opt}}\right)\right|+\left|\frac{1}{{h}_{opt}+\upvarepsilon }\left(K\left(\frac{t}{{h}_{opt}}\right)-K\left(\frac{t}{{h}_{opt}+\upvarepsilon }\right)\right)\right|.\end{aligned}$$
The first term is bounded by \({M}_{1}\upvarepsilon\) regardless of the variable \(t\) for some independent constant \({M}_{1}\). Because the Gaussian kernel function is a normal density function, by the mean-value theorem the second term is bounded above by \({M}_{2}\upvarepsilon\) for some independent constant \({M}_{2}.\) Therefore, \(\left|{K}_{{h}_{opt}}\left(t\right)- {K}_{{h}_{opt}+\upvarepsilon }\left(t\right)\right|\le \left({M}_{1}+{M}_{2}\right)\upvarepsilon\) for arbitrary \(t\). □
The joint distribution of portfolio
The semiparametric estimation has a nonparametric and a parametric component. The kernel method provides the marginal distribution of each asset (nonparametric component), and the copula method, the common choice for the parametric component, builds up the joint distribution from these marginal distributions. Combining the two components yields the joint distribution of the portfolio.
The joint distribution of assets in our portfolio is as given below:
$$f\left({x}_{1},{x}_{2},\dots {x}_{n}\right)=c\left({F}_{1}({x}_{1}),{{F}_{2}(x}_{2}),\dots {{F}_{n}(x}_{n})\right)\prod_{i=1}^{n}{f}_{i}\left({x}_{i}\right),$$
where \(c\left({x}_{1},{x}_{2},\dots {x}_{n}\right)\) is the copula density estimated parametrically, and \({f}_{i}\left({x}_{i}\right)\) are the marginal densities estimated nonparametrically.
Once the joint distribution for the multivariate \(\left({X}_{1},\dots ,{X}_{n}\right)\) is estimated, its portfolio \(P\) with different weights \(({w}_{1},..,{w}_{n}\)) is defined by,
$$P\left({X}_{1},\dots ,{X}_{n,}{w}_{1},..,{w}_{n}\right)=\sum_{i=1}^{n}{w}_{i}{X}_{i},$$
where \({w}_{i}\) and \({X}_{i}\) are the weight and value of the \(i\)-th asset, respectively, and the weights sum to one, \(\sum_{i=1}^{n}{w}_{i}=1\). When the weights are restricted to be nonnegative, short selling is not allowed.
Maximum likelihood estimation (MLE) was employed to estimate model parameters. Based on the joint density function,
$$f\left({x}_{1},{x}_{2}\dots ,{x}_{n}\right)=c\left({F}_{1}\left({x}_{1}\right),{F}_{2}\left({x}_{2}\right)\dots {F}_{n}\left({x}_{n}\right)\right)\prod_{i=1}^{n}{f}_{i}\left({x}_{i}\right),$$
where \(c\left({x}_{1},{x}_{2}\dots {x}_{n}\right)=\frac{{\partial }^{n}C({x}_{1},{x}_{2}\dots {x}_{n})}{\partial {x}_{1}\partial {x}_{2}\dots \partial {x}_{n}}\) is the density of the \(n\) dimensional copula \(C({x}_{1},{x}_{2}\dots {x}_{n};\theta )\). The log-likelihood function is defined as follows:
$$L=\sum_{j=1}^{N}logf\left({{x}_{1}}^{\left(j\right)},{{x}_{2}}^{\left(j\right)}\dots {{x}_{n}}^{\left(j\right)}\right)={L}_{C}+\sum_{i=1}^{n}{L}_{i}$$
where \({\mathrm{L}}_{C}=\sum_{j=1}^{N}log\, c({F}_{1}^{\left(j\right)},{F}_{2}^{\left(j\right)}\dots {F}_{n}^{\left(j\right)})\) is the log-likelihood contribution of the copula function \(C\), which captures the dependence structure, and \({L}_{i}=\sum_{j=1}^{N}log{f}_{i}({{x}_{i}}^{(j)}), i=\mathrm{1,2}\dots n\) are the log-likelihood contributions of the marginals, whose parameters need not be estimated because the marginals are obtained with the nonparametric kernel method; \(log\) denotes the natural logarithm. Thus, only the parameters in \({\mathrm{L}}_{C}\) need to be estimated. Let \(\theta\) denote the parameter set of the copula \(C\). It can be estimated by the following full MLE:
$$\widehat{\theta }={arg}\,{max}_{\theta }{L}_{C}\left(\theta \right)={arg}\,{max}_{\theta }\sum_{j=1}^{N}logc\left({F}_{1}\left({x}_{1}^{\left(j\right)}\right),{F}_{2}\left({x}_{2}^{\left(j\right)}\right)\dots {F}_{n}\left({x}_{n}^{\left(j\right)}\right);\theta \right).$$
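For illustration, the Python sketch below evaluates \({L}_{C}\) for the Gaussian copula and estimates \(R\) from pseudo-observations. Using the sample correlation of the normal scores \({\Phi }^{-1}({u}_{j})\) as the estimator of \(R\) is a common shortcut and an assumption on our part; it is not necessarily the exact optimization routine used here.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_loglik(U, R):
    """Log-likelihood L_C = sum_j log c_R(u^(j)) of the Gaussian copula,
    with log c_R(u) = -0.5*log|R| - 0.5 * zeta^T (R^-1 - I) zeta,
    evaluated at pseudo-observations U (rows = observations)."""
    zeta = norm.ppf(U)                          # zeta_j = Phi^{-1}(u_j)
    m = R.shape[0]
    sign, logdet = np.linalg.slogdet(R)
    M = np.linalg.inv(R) - np.eye(m)
    quad = np.einsum('ij,jk,ik->i', zeta, M, zeta)
    return -0.5 * (U.shape[0] * logdet + quad.sum())

def fit_gaussian_copula(U):
    """Estimate the copula correlation matrix R from pseudo-observations;
    the sample correlation of the normal scores is a standard estimator."""
    zeta = norm.ppf(U)
    return np.corrcoef(zeta, rowvar=False)
```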
Stressed portfolio optimization and its scaling effect
This section introduces the methodology for stressed portfolio optimization, which includes specific procedures for constructing an optimal portfolio under tail risk and its scaling effect. We extend the use of risk measures (Artzner et al. 1999) for portfolio optimization using the previously mentioned semiparametric method. The optimal scales of such stressed portfolios are studied by maximizing risk-sensitive value measures (Miyahara 2010).
Risk measure minimization for stressed portfolio
As a regulatory standard or internal control for financial institutions, risk measures provide information about extreme potential losses of value. Owing to its simplicity and clarity in risk management, VaR is the most conventional measure for estimating the loss of asset value at a given confidence level; an adequate capital amount can therefore be gauged to prevent negative impacts.
\(V{aR}_{\alpha }\) is defined as a quantile in statistics:
$$V{aR}_{\alpha }\left(X\right)={inf}\left\{l\in R:P\left(X>l\right)\le 1-\alpha \right\}$$
where \(\alpha\) is the confidence level, and \(X\) denotes either the loss of asset value or its loss return.
Conditional value-at-risk (CVaR), also known as expected shortfall, is a stringent risk assessment used to estimate the average losses exceeding VaR.
\({CVaR}_{\mathrm{\alpha }}\) is defined as a conditional expectation:
$${CVaR}_{\mathrm{\alpha }}(X)=E\left(X|X\ge {VaR}_{\alpha }(X)\right),$$
where \(\alpha\) is the confidence level, the variable \(X\) represents the loss value or its return, and \({VaR}_{\alpha }(X)\) is defined above.
Note that the values of \({VaR}_{\mathrm{\alpha }}\) and \({CVaR}_{\mathrm{\alpha }}\) depend on the variable \(X\); they are not constant even when the value of \(\alpha\) is fixed. When the variable \(X\) is a portfolio, such as \(P\) defined in Eq. (3.7), minimizing nonlinear risk measures such as \({VaR}_{\mathrm{\alpha }}\) and \({CVaR}_{\mathrm{\alpha }}\) over the feasible set of portfolio weights, possibly in high dimensions, must be done numerically. Discussions of data analysis and computational schemes are presented in the "Statistical estimation for semiparametric method" section.
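This weight optimization is solved later in the paper with a genetic algorithm in MATLAB. As a hedged illustration only, the Python sketch below computes empirical VaR and CVaR from simulated losses and minimizes portfolio VaR over long-only weights, using SciPy's differential evolution as a stand-in global optimizer; the confidence level, the bounds, and the weight-normalization trick are assumptions made for the sketch.

```python
import numpy as np
from scipy.optimize import differential_evolution

def var_cvar(losses, alpha=0.95):
    """Empirical VaR_alpha and CVaR_alpha of a sample of losses."""
    var = np.quantile(losses, alpha)
    cvar = losses[losses >= var].mean()
    return var, cvar

def optimal_weights(asset_returns, alpha=0.95):
    """Minimize portfolio VaR over long-only weights that sum to one.
    asset_returns: array (n_samples, n_assets) drawn from the fitted
    joint distribution (copula + kernel marginals)."""
    n_assets = asset_returns.shape[1]

    def objective(w):
        w = np.abs(w)
        w = w / (w.sum() + 1e-12)            # enforce w_i >= 0 and sum = 1
        losses = -asset_returns @ w          # loss = negative portfolio return
        return var_cvar(losses, alpha)[0]    # portfolio VaR

    res = differential_evolution(objective, bounds=[(0.0, 1.0)] * n_assets,
                                 seed=0, tol=1e-6)
    w = np.abs(res.x)
    return w / w.sum()
```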
Value measure maximization for the scaling effect
The evaluation of a risk-sensitive portfolio is essential for finance. This section aims to revisit the optimal scale using the risk-sensitive value measures proposed by Miyahara (2010) and discuss some computational issues given stressed portfolios.
Let \(X\) belong to a linear space of portfolio returns; the risk-sensitive value measure is then the following functional defined on that space:
$${U}^{\left(\alpha \right)}\left(X\right)=-\frac{1}{\alpha }logE\left({e}^{-\alpha X}\right),$$
where \(\alpha\) is the risk aversion parameter and \(\alpha \in [\mathrm{0,1}]\).
For a Gaussian multivariate \(X\), from its moment generating function
$$E\left({e}^{-\alpha X}\right)={e}^{E\left(-\alpha X\right)+\frac{1}{2}Var\left(-\alpha X\right)}={e}^{-\alpha E\left(X\right)+\frac{{\alpha }^{2}}{2}Var\left(X\right)},$$
the utility function (4.2) is explicitly obtained
$${U}^{\left(\alpha \right)}\left(X\right)=-\frac{1}{\alpha }logE\left({e}^{-\alpha X}\right)=E\left(X\right)-\frac{\alpha }{2}Var\left(X\right):=MV\left(X\right).$$
The mean–variance (MV) value measure is defined above. The optimal scale for this MV value measure,
$${\lambda }_{opt}=\frac{E\left(X\right)}{\alpha Var\left(X\right)},$$
is obtained by maximizing the following quadratic function of the portfolio scale \(\lambda\):
$$MV\left(\lambda X\right)=E\left(\lambda X\right)-\frac{1}{2}\alpha Var\left(\lambda X\right)=\lambda E\left(X\right)-\frac{1}{2}{\lambda }^{2}\alpha Var\left(X\right).$$
However, when the distribution of \(X\) is non-Gaussian, the mean–variance model corresponds only to the first two leading terms of the risk-sensitive value measure. This can be easily deduced by substituting the Taylor expansion
$${e}^{-\alpha X}=1-\alpha X+\frac{{\alpha }^{2}{X}^{2}}{2}+H.O.T (higher\,order\,terms)$$
into (4.2) and obtain
$$\begin{aligned}{U}^{\left(\alpha \right)}\left(X\right)&=-\frac{1}{\alpha }logE\left({e}^{-\alpha X}\right)\\&=-\frac{1}{\alpha }\mathrm{log}[\mathrm{E}\left(1-\alpha X+\frac{{\alpha }^{2}{X}^{2}}{2}\right)]+H.O.T\\& \approx -\frac{1}{\alpha }E\left(-\alpha X+\frac{{\alpha }^{2}{X}^{2}}{2}\right)+H.O.T\\&=\mathrm{E}\left(\mathrm{X}\right)-\frac{\alpha }{2}E\left({X}^{2}\right)+H.O.T.\end{aligned}$$
If \(\mathrm{X}\) is centered at 0, i.e., \(\mathrm{E}\left(\mathrm{X}\right)=0\),
$${U}^{\left(\alpha \right)}\left(X\right)\approx MV\left(X\right)+H.O.T.$$
As \({U}^{(\alpha )}\left(\lambda X\right)\) is a concave function of \(\lambda\) (Miyahara 2010), the optimal scale of the portfolio can be obtained by maximizing this scaled value measure:
$${U}^{(\alpha )}\left(\lambda X\right)=-\frac{1}{\alpha }logE\left({e}^{-\alpha \lambda X}\right),$$
such that \({\lambda }_{opt}=\frac{{C}_{X}}{\alpha }\), where \({C}_{X}\) is a solution of \(E\left({Xe}^{-{C}_{X}X}\right)=0.\)
Because our portfolio variable \(X\) has a complex structure from the proposed semiparametric method, we adopt the following Monte Carlo estimator to solve the optimal scale as an approximation:
$${U}^{\left(\alpha \right)}\left(\lambda X\right)\approx -\frac{1}{\alpha }\mathrm{log}\left(\frac{1}{n}\sum_{i=1}^{n}{e}^{-\alpha \lambda {X}^{\left(i\right)}}\right),$$
where \(\lambda\) is the scale of the portfolio, \(\alpha\) is the risk aversion, \(n\) is the sample size, and the \({X}^{\left(i\right)}\) are random samples from historical simulations.
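A minimal sketch of this Monte Carlo estimator and of the one-dimensional search for the optimal scale is given below; the log-sum-exp evaluation and the bounded search interval are numerical conveniences we assume, not part of the original formulation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rsvm(samples, lam, alpha):
    """Monte Carlo estimate of U^(alpha)(lambda*X) =
    -(1/alpha) * log( mean(exp(-alpha*lambda*X_i)) ),
    evaluated in a log-sum-exp fashion for numerical stability."""
    x = -alpha * lam * np.asarray(samples, dtype=float)
    m = x.max()
    return -(m + np.log(np.exp(x - m).mean())) / alpha

def optimal_scale(samples, alpha, lam_max=50.0):
    """lambda_opt = argmax_lambda U^(alpha)(lambda*X); the objective is
    concave in lambda, so a bounded one-dimensional search suffices."""
    res = minimize_scalar(lambda lam: -rsvm(samples, lam, alpha),
                          bounds=(0.0, lam_max), method='bounded')
    return res.x
```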
We comment on the strict concavity of the approximate estimator in (4.4). This can be inherently derived from the concavity of the utility function defined in Eq. (4.2) by taking the random variable \(X\) as discrete and uniformly distributed on the set of fixed outcomes \(\left\{{X}^{\left(1\right)}, {X}^{\left(2\right)}, \dots ,{X}^{\left(n\right)} \right\}.\) Since the graph of the risk-sensitive value measure over the scale is concave, the peak of this graph is identified as the optimal scale for its associated portfolio.
For investors with different levels of sensitivity to the same risk, we use different values of aversion to calculate the optimal scale. The risk-seeker (\(0<\alpha <0.5\)), risk-neutral (\(\alpha =0.5\)), and risk-averter (\(0.5<\alpha <1\)) correspond to aversion values of 0.2, 0.5, and 0.8, respectively.
Empirical studies and data analysis
According to the framework depicted in the "Literature review" and "Stressed portfolio optimization and its scaling effect" sections, we designed the following experiments for stressed portfolio optimization using the semiparametric method. First, we build the marginal distribution for each constituent of the portfolio, given daily data from 2016 to 2020. We then describe the joint distribution of the portfolio with a Gaussian copula, which captures the dependence between these constituents. Second, we solve for the optimal weights from risk measure minimization using the genetic algorithm (GA) in MATLAB. Finally, the optimal scale of the stressed VaR portfolio is solved numerically using an approximate Monte Carlo estimator. The intensive computation, which includes modeling by semiparametric estimation and portfolio optimization under tail risk, is executed on a server cluster equipped with four Intel Xeon 5220R CPUs, each running at 2.2 GHz with 24 cores.
Statistical estimation for semiparametric method
To implement our methodology on real data, we construct a diversified portfolio with five ETFs: Vanguard S&P 500 ETF (VOO), iShares 20+ Year Treasury Bond ETF (TLT), iShares iBoxx Investment Grade Corporate Bond ETF (LQD), iShares Gold Trust ETF (IAU), and Vanguard Real Estate Index Fund ETF Shares (VNQ). Daily price data spanning 2016 to 2020 were retrieved from the Bloomberg database. Daily returns were calculated as the difference between two consecutive log prices.
Our implementation of the optimization models relies on a rolling-window approach. Specifically, at the beginning of each month, we use the return data of the previous three months to calculate the input parameters needed to determine the portfolio weights. Using these weights, we calculate portfolio returns over the next month. The following month, new portfolio weights are determined using updates of the parameter estimates.
The model parameters, namely the optimal bandwidth for the kernel function and the correlation matrix required in the "Literature review" section, are treated as time-invariant within each estimation window (three months). The relevant parameters and estimation results are available upon request.
Optimal weights for risk measure: stressed portfolio optimization
Following the semiparametric model, applications for portfolio optimization under tail risk are presented. Tables 2 and 3 record the empirical results of in-sample fit for a quarterly time span (three months), which is useful for training models. Tables 4 and 5 record the empirical results of out-of-sample fit for a monthly time span, which is useful for testing models.
Table 2 The in-sample results of Markowitz model and semiparametric method with VaR
Table 3 The in-sample results of Markowitz model and semiparametric method with CVaR
Table 4 The out-of-sample results of Markowitz model and semiparametric method with VaR
Table 5 The summary of out-of-sample results for Markowitz model and semiparametric method with VaR (\(\alpha =0.05\))
According to Eq. (4.1), portfolio VaR is a function of the weight vector \(w\) defined by
$${\mathrm{VaR}}_{\alpha }(P\left({\varvec{w}}\right))= g\left({\varvec{w}}\right),$$
where \(g\) denotes the function of the weight vector \({\varvec{w}}\), and the optimal weight \(\widehat{{\varvec{w}}}\) attains the minimum value of \(g\left({\varvec{w}}\right).\) Table 2 records the in-sample fit for the optimal weight vector \(\widehat{{\varvec{w}}}\), the performance of each stressed portfolio, and its VaR value for five consecutive years from 2016 to 2020. These performance results, including volatility, return, Sharpe ratio, and VaR, are calculated quarterly.
According to Table 2, although Markowitz's model and semiparametric method have different objective functions for weight estimation, the two methods have comparable results for the Sharpe ratio. The in-sample results show that the semiparametric method always has a lower VaR than Markowitz's model.
Similarly, the portfolio CVaR is a function of weight vector \(w\) defined by the following equation:
$${\mathrm{CVaR}}_{\alpha }(P\left({\varvec{w}}\right))= k\left({\varvec{w}}\right),$$
where \(k\) is a function of the weight vector \({\varvec{w}}\), and the optimal weight \(\widehat{{\varvec{w}}}\) attains the minimum value of \(k\left({\varvec{w}}\right)\). The optimal weights, the performance of each stressed portfolio, and its CVaR value are listed in Table 3.
Tables 2 and 3 demonstrate the in-sample tests on the dataset and the performance measures of the optimal stressed portfolios on a long-term quarterly basis. According to Tables 2 and 3, although Markowitz's model and the semiparametric method have different objective functions for weight estimation, the two methods have comparable results in terms of the Sharpe ratio. The in-sample empirical results show that the semiparametric method always has lower VaR and CVaR than Markowitz's model.
We conduct out-of-sample tests on a short-term monthly basis by using the same set of five ETFs (VOO-equity, TLT-government bond, LQD-corporate bond, IAU-gold, and VNQ-real estate) and compare the performance of portfolios generated from the semiparametric method and Markowitz method from 2016 to 2020, as demonstrated in Table 4.
The results for return, volatility, Sharpe ratio, and risk measures were calculated monthly. As can be seen from Fig. 1, compared to the S&P 500, our semiparametric method provides better results in terms of portfolio returns during those five years.
The portfolio value of the semiparametric model with VaR and the S&P 500 from 2016 to 2020. This figure shows the portfolio value of the S&P 500 (blue line) and the semiparametric method with VaR (\(\alpha =0.05\), yellow line) over five years (2016 to 2020). The initial value of the portfolio is 100. The semiparametric method clearly has the better performance in terms of return
Note that Markowitz's mean–variance model is profit-oriented: it selects the portfolio with the highest Sharpe ratio from the efficient frontier of the five ETF assets. The semiparametric method, by contrast, is risk-oriented: its objective function minimizes the VaR/CVaR function. Table 5 summarizes Table 4 for the comparison with Markowitz's mean–variance method. Our semiparametric method reduces the average volatility of the portfolio over those five years and slightly decreases the average return in the same period, but it increases the average Sharpe ratio of the portfolio. The proposed method mitigates not only the overall risk but also the tail risk, because it yields a lower portfolio VaR in those five years.
Similarly, the coherent risk measure CVaR is used to compare the results of the semiparametric method and Markowitz's method within the same test period from 2016 to 2020. Figure 2 depicts the portfolio value of the semiparametric model with CVaR and S&P 500.
The portfolio value of the semiparametric model with CVaR and the S&P 500 from 2016 to 2020. This figure shows the portfolio value of the S&P 500 (blue line) and the semiparametric method with CVaR (\(\alpha =0.05\), yellow line) over five years (2016 to 2020). The initial value of the portfolio is 100. The semiparametric method clearly has the better performance in terms of return
As shown in Table 7, which summarizes Table 6, our semiparametric method reduces the average volatility of the portfolio over the five years while decreasing the average return in the same period. However, the semiparametric method increases the average Sharpe ratio of the portfolio. It also consistently offers better risk management than the Markowitz model for both comprehensive risk and tail risk, because it yields a lower portfolio CVaR.
Table 6 The out-of-sample results of Markowitz model and semiparametric method with CVaR
Table 7 The summary of out-of-sample results for Markowitz model and semiparametric method with CVaR (\(\alpha =0.05\))
In addition, we verify the robustness of the semiparametric method with several sensitivity checks. First, we extensively vary the dataset to examine whether our findings are robust with respect to the indices used to represent the asset classes. For example, we add other ETFs or use alternative indices in our portfolio. This procedure often changes the sample size; however, we find that varying the dataset does not alter any of our conclusions. Second, we examine whether the performance of our method improves when shorter or longer time series of historical returns are used for parametrization, basing the estimation on rolling windows of 2 months and 4 months of historical data. We do not observe a consistent improvement in these additional tests. Third, we repeat our analysis using other performance measures. Specifically, we employ the Sortino ratio, which does not change the qualitative nature of our results.
Optimal scale for value measure: scaling effect
As mentioned above, we can obtain a stressed portfolio using the semiparametric method with the optimal weights by minimizing the VaR of the portfolio. To further understand the scaling effect of the portfolio, we compare the mean–variance model and the risk-sensitive value measure with different risk aversions, denoted by \(\alpha\), from zero to one. We assume that there are three types of investors: risk-averter (\(0.5<\alpha <1\)), risk-seeker (\(0<\alpha <0.5\)), and risk-neutral (\(\alpha =0.5\)). We discuss the optimal scale of the portfolio during the five years for the three types of investors, and the results are shown in Figs. 3, 4, 5, 6 and 7.
Value measure of RSVM and MV with various risk aversion \(\alpha\) and scale \(\lambda\) in 2016. This figure shows the relation between the value measure (mean–variance model, dashed line; risk-sensitive value measure, solid line) and the scale for different investors (risk-averter \(\alpha =0.8\), risk-neutral \(\alpha =0.5\), and risk-seeker \(\alpha =0.2\)) in 2016. The red star and blue circle are the optimal scales of the mean–variance model and the risk-sensitive value measure, respectively
Although the curves of the mean–variance (MV) model and the risk-sensitive value measure (RSVM) are both similar in shape to a downward parabola, the MV curve has a particularly strong concavity. In theory, MV is a special case of an RSVM. MV has a closed-form optimal portfolio scale, shown in Eq. (4.3), while the optimal scale of the risk-sensitive value measure must be calculated with the Monte Carlo estimator. The numerical comparisons are listed in Tables 8 and 9.
Table 8 The optimal scale of portfolio with mean–variance model
Table 9 The optimal scale of portfolio with risk-sensitive value measure
The empirical results show a negative correlation between the degree of risk aversion and the optimal scale in the value measure. Risk-seeking investors correspond to larger scales, while risk-averters correspond to smaller scales. In addition, the mean–variance model and the risk-sensitive value measure coincide only for portfolios with a Gaussian distribution, but most portfolios are non-Gaussian in practice. If investors use the mean–variance model to determine the optimal scale, the result may not be the true optimal scale, because the mean–variance model does not fit non-Gaussian distributions. Thus, the risk-sensitive value measure is pivotal in stressed portfolio optimization.
We propose an innovative semiparametric method for financial modeling and discuss its applications to portfolio optimization under tail risk with the scaling effect. This semiparametric method combines a nonparametric kernel method and a copula method, estimating the marginal distributions and the dependence of assets in a portfolio, respectively. Stressed portfolios and their optimal scales are obtained by minimizing risk measures and maximizing risk-sensitive value measures, respectively. Through intensive empirical data analysis, we observe that the mean–variance type Markowitz method may cause biased selection compared to the semiparametric method, which improves the efficiency of risk management with less risk exposure.
The data used in this study are available from the Bloomberg database.
MV: Mean–variance
VaR: Value-at-risk
CVaR: Conditional value-at-risk
RSVM: Risk-sensitive value measure
Artzner P, Delbaen F, Eber J-M, Heath D (1999) Coherent measures of risk. Math Finance 9(3):203–228
Babazadeh H, Esfahanipour A (2019) A novel multi period mean-VaR portfolio optimization model considering practical constraints and transaction cost. J Comput Appl Math 2019(361):313–342
Bouyé E, Durrleman V, Nikeghbali A, Riboulet G, Roncalli T (2000) Copulas for finance—a reading guide and some applications. Groupe de Recherche Opérationelle, Crédit Lyonnais, working paper
Cambanis S, Huang S, Simons G (1981) On the theory of elliptically contoured distributions. J Multivar Anal 11:368–385
Cherubini U, Luciano E (2001) Value at risk trade-off and capital allocation with copulas. Econ Notes 30(2):235–256
Cherubini U, Luciano E, Vecchiato W (2004) Copula methods in finance. Wiley
Cherubini U, Mulinacci S, Gobbi F, Romagnoli S (2011) Dynamic copula methods in finance. Wiley
Engle R (2002) Dynamic conditional correlation: a simple class of multivariate generalized autoregressive conditional heteroskedasticity models. J Bus Econ Stat 20(3):339–350
Esfahanipour A, Khodaee P (2021) A constrained portfolio selection model solved by particle swarm optimization under different risk measures. In: Mercangöz BA (ed) Applying particle swarm optimization: new solutions and cases for optimized portfolios. Springer, Cham, pp 133–153
Fleming WH, Soner HM (2006) Controlled Markov Processes and Viscosity Solutions. 2nd edn. Springer Verlag, New York
Fu C-C, Han C-H, Wang K (2021) A novel semi-static method for the index tracking problem. In: Lee CF (ed) Handbook of investment analysis, portfolio management and financial derivatives (accepted)
Han C-H (2018) Systemic risk estimation under dynamic volatility matrix models. Adv Financ Plan Forecast 9:79–107
Horová I, Koláček J, Zelinka J (2012) Kernel smoothing in MATLAB. World Scientific Publishing
Jondeau E, Poon S-H, Rockinger M (2007) Financial Modeling under Non-Gaussian Distributions. Springer
Jorion P (2007) Value-at-risk: the new benchmark for managing risk, 3rd edn. McGraw-Hill
Mancino ME, Recchioni MC, Sanfelici S (2017) Fourier-Malliavin volatility estimation: theory and practice, 1st edn. Springer
Markowitz H (1952) Portfolio selection. J Finance 7(1):77–91
Mills EFEA, Yu B, Yu J (2017) Scaled and stable mean-variance-EVaR portfolio selection strategy with proportional transaction costs. J Bus Econ Manag 18(4):561–584
Miyahara Y (2010) Risk-sensitive value measure method for projects evaluation. J Option Strategy 2:185–204
Nelsen RB (1999) An introduction to copulas. Lecture notes in statistics. Springer, New York
Robinson P (1983) Nonparametric estimators for time series. J Time Series Anal 4:185–207
Tsay RS (2010) Analysis of Financial Time Series, 3rd edn. John Wiley & Sons
Work supported by MOST [107-2115-M-007-015-MY2].
Department of Quantitative Finance, National Tsing Hua University, Hsinchu, 30013, Taiwan
Chuan-Hsiang Han
International Intercollegiate Ph.D. Program, National Tsing Hua University, Hsinchu, 30013, Taiwan
Kun Wang
All authors wrote, corrected and agreed to the published version of the manuscript. All authors read and approved the final manuscript.
Chuan-Hsiang Han is the Professor of Quantitative Finance and Mathematics at National Tsing-Hua University. His fields of research are Applied Probability, Financial Mathematics, Monte Carlo methods, and Fintech. Han received a PhD in Mathematics from North Carolina State University. Before joining Tsing-Hua University he worked at University of Minnesota and Ford Motor Company. He is an editorial Member of Advances in Financial Planning and Forecasting and an associate editor of Journal of the Chinese Statistical Association.
Kun Wang is a PhD candidate of Quantitative Finance at National Tsing-Hua University. His fields of research are time series and volatility analysis. His recent work has focused on portfolio optimization.
Correspondence to Kun Wang.
Han, CH., Wang, K. Stressed portfolio optimization with semiparametric method. Financ Innov 8, 27 (2022). https://doi.org/10.1186/s40854-022-00333-w
Portfolio optimization
Tail risk
Semiparametric method
Copula method
Risk measure
Scaling effect | CommonCrawl |
Q-nexus: a comprehensive and efficient analysis pipeline designed for ChIP-nexus
Peter Hansen, Jochen Hecht, Jonas Ibn-Salem, Benjamin S. Menkuec, Sebastian Roskosch, Matthias Truss & Peter N. Robinson
ChIP-nexus, an extension of the ChIP-exo protocol, can be used to map the borders of protein-bound DNA sequences at nucleotide resolution, requires less input DNA and enables selective PCR duplicate removal using random barcodes. However, the use of random barcodes requires additional preprocessing of the mapping data, which complicates the computational analysis. To date, only a very limited number of software packages are available for the analysis of ChIP-exo data, which have not yet been systematically tested and compared on ChIP-nexus data.
Here, we present a comprehensive software package for ChIP-nexus data that exploits the random barcodes for selective removal of PCR duplicates and for quality control. Furthermore, we developed bespoke methods to estimate the width of the protected region resulting from protein-DNA binding and to infer binding positions from ChIP-nexus data. Finally, we applied our peak calling method as well as the two other methods MACE and MACS2 to the available ChIP-nexus data.
The Q-nexus software is efficient and easy to use. Novel statistics about duplication rates in consideration of random barcodes are calculated. Our method for the estimation of the width of the protected region yields unbiased signatures that are highly reproducible for biological replicates and at the same time very specific for the respective factors analyzed. As judged by the irreproducible discovery rate (IDR), our peak calling algorithm shows a substantially better reproducibility. An implementation of Q-nexus is available at http://charite.github.io/Q/.
ChIP-seq, which couples chromatin immunoprecipitation with high-throughput sequencing, has enabled researchers to investigate protein-DNA binding on a genome-wide scale [1–3]. ChIP-seq works by cross-linking DNA-protein complexes with formaldehyde followed by fragmentation of the complexes into short stretches of 300–500 base pairs (bp). The fragments are then immunoprecipitated with an antibody specific for the protein of interest, such as a transcription factor (TF) or a modified histone, in order to enrich for DNA fragments bound by the protein prior to next-generation sequencing (NGS). Since the fragment is much longer than the specific protein-DNA binding site (which tend to be on the order of 6–20 bp in length), ChIP-seq "peaks", representing areas of enrichment for the bound protein, do not directly allow the exact position of protein-DNA binding to be identified.
For this reason, ChIP-exo, an extension of the basic ChIP-seq method, aims to remove DNA segments that surround the binding site of the protein of interest before NGS adapters are attached in order to characterize the exact binding site of proteins more exactly [4]. The protocol for ChIP-exo is similar to the ChIP-seq protocol with the key difference that a 5'-3' (λ) exonuclease is employed to trim the DNA sequences on each strand to within a few bp of the location at which the protein of interest has been cross-linked to the DNA. DNA sequences located 3' to the cross-linking point remain intact and thus can be used to identify the genomic location of the binding event if they are sufficiently long and located in non-repetitive areas of the genome; on the other hand, non-cross-linked nonspecific DNA is largely eliminated by the exonuclease treatment, which may contribute towards reducing background noise [5, 6].
The ChIP-exo methodology allows protein-DNA binding interactions to be characterized to a level of detail that was not possible with ChIP-seq. The cross-linked DNA-protein complex protects the 5' ends of bound DNA fragments from exonuclease digestion, with the cleavage occurring about 5–6 bp upstream of the cross-linking point [5], allowing TF binding sites be mapped at high resolution [7–9]. Additionally, the morphology of the ChIP-exo mapped read profiles allows one to discriminate between direct and indirect protein-DNA binding interactions. Profile-based analysis of ChIP-exo signals can uncover structural and functional clues about the interaction and cooperative nature of genomic TF binding [10].
Despite these advantages, the ChIP-exo methodology has a number of shortcomings, including the need for high amounts of input DNA in order to avoid overamplification artifacts resulting from low amounts of starting DNA [11]. Recently, an adaptation of the ChIP-exo procedure was introduced with the goal of addressing these short-comings. ChIP-nexus (chromatin immunoprecipitation experiments with Nucleotide resolution through EXonuclease, Unique barcode and Single ligation) involves chromatin immunoprecipitation as with standard ChIP-seq, but then proceeds to ligate adapters that contain Illumina-specific sequences, a BamHI site, and a random barcode arranged in such a way that self-circularization can occur following λ-exonuclease digestion, which places the random barcode directly adjacent to the "stop" nucleotide resulting from the cross-linked protein-DNA complex. In comparison to the ChIP-exo protocol, ChIP-nexus is more efficient, because for a given fragment only one ligation (instead of two) is needed. Following ligase-mediated circularization, an oligonucleotide with sequence complementary to segment with the BamHI site is added, which enables relinearization of the circles by means of BamHI digestion. Finally, PCR amplification is performed with primers that match the Illumina sequences of the adapter, followed by single-end Illumina sequencing. The random barcode allows multiple reads that correspond to independent molecules but map to the same position to be distinguished from PCR duplicates.
There is currently no software designed specifically for ChIP-nexus, and the computational analysis in the original publication [11] was performed using a number of scripts for preprocessing and MACS2 [12] for peak calling, which was designed for ChIP-seq data and does not take into account the specifics of ChIP-exo and ChIP-nexus data. Although there are software packages specifically developed for ChIP-exo data [13, 14], they do not provide solutions for the extensive preprocessing of the data before peak calling, which comprises quality trimming, adapter clipping and mapping. For ChIP-nexus, additional processing of the mapped reads has to be performed in order to benefit from the random barcodes. For ChIP-seq, the average fragment length is an important parameter for peak calling and downstream analysis, and a number of estimation algorithms have been developed, e.g. the well-known cross-correlation method [2, 12]. For ChIP-exo and ChIP-nexus, the relevant quantity is the average width of the regions that are occupied by the protein of interest, which is different from the average fragment length. We will refer to such regions as protected regions, because they are protected from 5'-3' (λ) exonuclease digestion. A number of methods have been developed to estimate the size of the protected region and to call peaks in ChIP-exo data [10, 11, 13, 14].
In this work, we present a software package, Q-nexus, for the analysis of ChIP-nexus and ChIP-exo data. Our software implements an all-in-one approach for the preprocessing of ChIP-nexus reads, a novel method for the estimation of the protected-region width and peak calling that can be applied to ChIP-nexus as well as ChIP-exo data. We tested our software on the available ChIP-nexus data and show that our method for the estimation of protected region width provides unbiased signatures that are homogeneous for biological replicates and specific for individual transcription factors. Using the IDR framework [15], we show that our method for ChIP-nexus peak calling outperforms competing methods with respect to the reproducibility of the results. The Q-nexus software as well as an associated tutorial is freely available at https://github.com/charite/Q.
Preprocessing and mapping of ChIP-nexus reads
In standard ChIP-seq, multiple reads that map to the same genomic position are usually considered to be duplicates resulting from PCR overamplification during library preparation, and are therefore removed before further analysis. In contrast, in the ChIP-exo and ChIP-nexus protocols, exonuclease digests multiple distinct DNA fragments up to the identical protein-DNA binding site, and therefore reads mapping to the same position are not necessarily PCR duplicated. While ChIP-exo analysis protocols simply do not remove any reads mapping to the same position, ChIP-nexus employs a randomized barcode in the adapter in order to allow PCR duplicated reads (with the same random barcode) to be distinguished from reads originating from distinct molecules (with different random barcodes). We will refer to reads mapping to the same position as identically mapped (IM) reads. Furthermore, we will refer to IM reads with identical barcodes as IMIB reads and to those with unique barcode as IMUB reads. ChIP-nexus assumes that identically mapped reads with identical barcode (IMIB) represent PCR duplicates and only IMUB are utilized for analysis.
Existing tools are not able to process these barcodes in the way that is required by the ChIP-nexus protocol. In the original publication, a set of scripts was used to process barcodes and prepare the data for peak calling [11]. We have therefore implemented a preprocessing routine (Fig. 1) that processes raw FASTQ files by extracting the random barcodes from ChIP-nexus reads and writes them into the sequence ID line for downstream analysis. Due to exonuclease digestion a certain proportion of the inserts tend to be very short in size, i.e. shorter than the read length. Therefore, adapter clipping is performed. Following this, alignment of the preprocessed ChIP-nexus reads is carried out with an aligner such as bowtie [16]. The ChIP-nexus protocol assumes that reads that are mapped to the same genomic position and have an identical barcode result from PCR duplication artifacts. For such reads only one read is retained. The Q-nexus software preprocesses a typical ChIP-nexus dataset in less than 17 minutes, where the runtime primarily depends on the number of raw reads to be processed. We compared the results of our preprocessing to those obtained in the original publication [11] and found comparable numbers of IMUB reads (Table 1).
Overview of the Q-nexus preprocessing workflow. During barcode preprocessing, barcode tags are removed. Subsequently, adapter sequences are clipped and reads that consist completely of adapter (orange-tagged) are removed. The clipped reads are mapped to the reference genome. The random barcode tags allow PCR duplicated reads and IMUB reads to be distinguished from one another. Only one of the two PCR duplicated reads mapping to the same genomic position (blue-tagged) is kept, while reads with different random barcodes are allowed to map to the same position
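To illustrate the barcode handling step, the Python sketch below moves a leading random barcode of each FASTQ read into its sequence ID line so that it survives mapping. The barcode length and its position at the start of the read are protocol-dependent assumptions made for this sketch; it is not the actual Q-nexus implementation.

```python
import gzip

def move_barcode_to_id(fastq_in, fastq_out, barcode_len=5):
    """Strip the leading random barcode of each read and append it to the
    sequence ID line so it survives mapping (simplified sketch)."""
    with gzip.open(fastq_in, 'rt') as fin, gzip.open(fastq_out, 'wt') as fout:
        while True:
            header = fin.readline().rstrip()
            if not header:
                break
            seq = fin.readline().rstrip()
            plus = fin.readline().rstrip()
            qual = fin.readline().rstrip()
            barcode = seq[:barcode_len]
            fout.write(f"{header}:RB:{barcode}\n{seq[barcode_len:]}\n"
                       f"{plus}\n{qual[barcode_len:]}\n")
```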
Table 1 Preprocessing: IMUB reads and runtime
Sequence duplication levels and random barcodes
The complexity of sequencing libraries and PCR overamplification are critical points in virtually all NGS applications [17]. In many NGS applications, a diverse library in which any given sequence appears only once in the final data set is considered ideal, and high levels of duplication generally indicate PCR overamplification or other forms of bias. For this reason, software tools such as FastQC [18] have been developed that generate plots showing the proportion of the overall library with a given degree of duplication, where the sequence duplication level refers to the proportion of reads in the given duplication level bin. ChIP-exo presents a unique challenge to this kind of analysis, because the exonuclease digestion step tends to reduce the overall diversity of starting positions in a library, since distinct starting molecules may be digested down to the same stop position. For this reason, the pile-ups of reads at stop nucleotides cannot be distinguished from PCR duplicated reads. ChIP-nexus was designed to allow PCR duplicates to be identified based on the random barcode [11].
We developed a bespoke plot that determines the levels of duplication with respect to random barcodes (Fig. 2 a). Instead of considering only the levels of identical sequences or identically mapped reads, we also determine the distribution of the levels for IMIB and IMUB reads. We applied this procedure to all analyzed datasets (Fig. 2 b, c, Additional file 1: Figure S2). Furthermore, we use the various level counts to calculate overall duplication levels for a given sample, defined as the ratio of mapped reads with 5'-end depth > 1 to all mapped reads. For ChIP-nexus, we expect pile-ups of IMUB reads at stop nucleotides, whereas at background positions we expect IMUB reads to occur as singletons. Therefore, the mean per-position depth of IMUB reads can be used to assess the quality of enrichment. We calculated these values for all analyzed datasets (Table 2).
Duplication levels using random barcodes. a Toy example of mapped ChIP-nexus reads and the corresponding counts for identically mapped reads (IM, red), identically mapped reads with identical random barcodes (IMIB, blue), and identically mapped reads with unique random barcodes (IMUB, black). The number of horizontal bars for a given level corresponds to the number of reads that have the same level of duplication. Additional file 1: Figure S1 provides a detailed example of how IM, IMIB, and IMUB reads are defined and calculated. b, c Duplication level plots for a Dorsal dataset with an overall duplication level of 54 % and for a Max dataset with an overall duplication level of 95 %
Table 2 Duplication levels
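To make the distinction between IM, IMIB and IMUB reads concrete, the following sketch tallies duplication levels from mapped reads represented as (chromosome, strand, 5'-position, barcode) tuples; this input format and the histogram layout are illustrative assumptions, not the Q-nexus data structures.

```python
from collections import Counter, defaultdict

def duplication_levels(reads):
    """reads: iterable of (chrom, strand, five_prime_pos, barcode) tuples.
    Returns histograms mapping a duplication level to the number of
    positions (IM, IMUB) or barcode groups (IMIB) at that level."""
    by_pos = defaultdict(list)
    for chrom, strand, pos, barcode in reads:
        by_pos[(chrom, strand, pos)].append(barcode)

    im, imib, imub = Counter(), Counter(), Counter()
    for barcodes in by_pos.values():
        im[len(barcodes)] += 1                 # identically mapped reads
        imub[len(set(barcodes))] += 1          # distinct barcodes at the position
        for count in Counter(barcodes).values():
            imib[count] += 1                   # reads sharing the same barcode
    return im, imib, imub
```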
Binding characteristics: qfrag length distribution with pseudo-control
For ChIP-nexus and ChIP-exo the equivalent of the average fragment length is the width of the regions that are occupied by the protein of interest, which we call "protected-region width". Such regions are about 6–20 bp, which is much shorter than typically observed average fragment lengths (Fig. 3 a). Here, we present a new method for the estimation of the protected-region width in ChIP-nexus data that uses the distribution of qfrag [19] lengths. A qfrag is defined to be the genomic interval between any pair of 5' read mapping positions on the forward and reverse strand. We derive the empirical distribution of qfrag-lengths from data by counting the number of qfrags for given lengths. The qfrag-length with the highest number of observed qfrags can be interpreted as the protected-region width (see "Methods").
Binding characteristics: qfrag length distribution with pseudo-control. a The 5'-3' (λ) exonuclease is employed to trim the DNA sequences of a fragment of length ℓ to within a few bp of the position at which the protein of interest has been bound to the DNA. This yields shorter fragments of length ℓ ′ and ℓ ″. The width of the protected region (ℓ ‴) is given by the distance between two 5' ends on the forward and reverse strand. b Schematic representation of the pseudo-control and the corresponding transformation, for mapping artifacts predominantly found on chrU and chrUextra (left-hand) and for genuine ChIP-nexus peaks (right-hand). The pseudo-control is derived from the original mapping data by swapping the strand of each read and subsequently shifting the 5' end by one read length towards the 5' direction. For artifacts, this has no effect on the qfrag-length distribution in the pseudo-control. c qfrag-length distribution for original data (black) and pseudo-control (gray). Both distributions are dominated by the phantom peak at one read length. d We use the difference between the qfrag-length distributions as signature and the maximum at a length of 19 as estimate for ℓ ′′′
It is a well known problem that fragment length estimation by the cross-correlation method [2] can exhibit an artefactual "phantom" peak showing the maximum correlation at a length of one read length [20, 21], which is thought to arise mainly from pile-ups of mapped reads arranged such that 5' ends on the forward and reverse strand have a distance of one read length (Additional file 1: Figure S3). Such mapping artifacts are most likely caused by repetitive sequences, and for the Drosophila experiments analyzed here (dm3) they occur predominantly on the chromosomes U and Uextra. Similar to the cross-correlation plot, we found that the qfrag length distribution can also be affected by phantom peaks; we therefore developed a method that attempts to remove the phantom peak from the qfrag length distribution in order to enable more accurate estimation of the protected-region width. Our method generates a "pseudo-control" for each ChIP-nexus dataset in which the strands of each mapped read are swapped and subsequently the 5' end of each read is shifted by one read length towards the 5' end (see Methods). This transformation has no effect on artifacts responsible for the 'phantom peak', but it abolishes signals from clusters of qfrags smaller than one read length (Fig. 3 b). Therefore, our procedure subtracts the qfrag counts of the pseudo-control from the counts of the original data and uses the resulting difference as an unbiased signature to estimate the mean protected-region width (Fig. 3 c and d).
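The following Python sketch implements a simplified version of the qfrag-length histogram and of the pseudo-control transformation described above (strand swap plus a shift of one read length); the exact coordinate and off-by-one conventions for 5' ends are simplified assumptions.

```python
from bisect import bisect_left, bisect_right
from collections import Counter

def qfrag_length_counts(fwd_5p, rev_5p, max_len=200):
    """Count qfrags (pairs of forward/reverse 5' ends) per length up to
    max_len; fwd_5p and rev_5p are sorted 5'-end positions on one chromosome."""
    counts = Counter()
    for f in fwd_5p:
        lo = bisect_left(rev_5p, f)
        hi = bisect_right(rev_5p, f + max_len)
        for r in rev_5p[lo:hi]:
            counts[r - f + 1] += 1             # qfrag length (inclusive)
    return counts

def pseudo_control(fwd_5p, rev_5p, read_len):
    """Swap strands and shift each 5' end by one read length towards the 5'
    direction: phantom-peak artifacts are preserved, genuine short qfrags
    are destroyed."""
    new_fwd = sorted(p - read_len for p in rev_5p)   # reverse reads -> forward
    new_rev = sorted(p + read_len for p in fwd_5p)   # forward reads -> reverse
    return new_fwd, new_rev

def signature(fwd_5p, rev_5p, read_len, max_len=200):
    """Signature = original qfrag-length counts minus pseudo-control counts."""
    orig = qfrag_length_counts(fwd_5p, rev_5p, max_len)
    new_fwd, new_rev = pseudo_control(fwd_5p, rev_5p, read_len)
    ctrl = qfrag_length_counts(new_fwd, new_rev, max_len)
    return {l: orig.get(l, 0) - ctrl.get(l, 0) for l in range(1, max_len + 1)}
```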
Evaluation: binding characteristics
We applied the method of plotting the 5' coverage around motif-centered binding sites [10, 11], the cross-correlation method [2], the internal routines of MACS2 [12] and MACE [13], and our Q-nexus method to the ten available ChIP-nexus datasets and compared the estimated distances (Methods and Table 3). The 5' coverage around binding sites shows maxima on the forward and reverse strand at distances between 10 and 18 bp (Fig. 4 a, b and Additional file 1: Figure S4) that appear to be reasonable from a biological point of view and are in line with a former analysis of the same data [11]. However, we found that the results were unstable and depend heavily on the motif used for selection and centering, as well as on the allowed distance between motif and predicted binding site. With standardized parameter settings the method fails to derive a distinctive distribution in four out of ten cases (Fig. 4 c, d and Additional file 1: Figure S4). The cross-correlation method is strongly biased by the phantom peak and (falsely) estimates the read length of 42 bp in all ten cases (Additional file 1: Figure S5). Also, the optimal border pair size estimates of MACE highly correlate with the read length, indicating a biased estimation. The predicted fragment lengths of MACS2 are indeed smaller than the read length but disagree with the distances that result from the 5' coverage around binding sites. Our Q-nexus method derives estimates for the protected-region width that are largely consistent with distances derived from the 5' coverage plots (Fig. 4 a, b and Additional file 1: Figure S4). Finally, the derived signatures (differences between original and pseudo-control) are very similar for biological replicates, but specific for individual factors (Fig. 4 e–h and Additional file 1: Figure S6).
Evaluation: Binding characteristics. a, b 5' end coverage around motif centered predicted binding sites for two biological replicates of Dorsal. The two peaks on the forward and reverse strand have a distance of 18 bp, which is in line with a previous analysis [11] of the same data. c, d Using standardized parameter settings for all samples, no characteristic distribution is derived for TBP. See Additional file 1: Figure S4 for further positive and negative examples. e, f Difference of qfrag-length distribution between original datasets and pseudo-controls for Dorsal replicates (for all mapped reads (blue) and filtered for reads that map to standard chromosomes (green)). The estimated protected-region width of ℓ ′′′=18 is consistent with the distance derived by the 5' end coverage around motif centered binding sites. The signatures are independent of whether mapped reads were filtered or not. Furthermore, they are reproducible for biological replicates. g, h Additional examples for biological replicates of TBP. The signatures are reproducible for biological replicates, but different from the signatures derived for Dorsal. See Additional file 1: Figure S6 for further examples
Table 3 Evaluation of binding characteristics
ChIP-nexus peak calling
We implemented an algorithm for ChIP-nexus peak calling which builds on the previous preprocessing steps and accepts mapped reads in BAM format. Since PCR artifacts (IMIB) are already removed, IMUB reads are kept, assuming that such reads stem from different molecules because they have different random barcodes. Our algorithm (see Methods) implements the method of qfrag-length distribution with pseudo-control in order to estimate the protected-region width ℓ ′′′, which is then used to combine pairs of 5' ends on the forward and reverse strand by forming qfrags [19] with a minimal allowed distance q min =ℓ ′′′−5 and a maximal allowed distance of q max =ℓ ′′′+5. The qfrag-depth at any one position is the total number of qfrags that cover the position. The qfrag coverage has a different depth distribution than the original coverage of reads or 5' ends. Regions with neighboring clusters of 5' ends on the forward and reverse strand are selectively emphasized by the qfrag method (Fig. 5). The qfrag coverage profile along the genome is searched for local maxima that we refer to as summits that are then tested for significance. For each summit position the number of 5' ends that map to within a radius of q max is determined. P-values are calculated using the Poisson distribution and corrected for multiple testing using the Benjamini-Hochberg procedure [22]. The final candidate list is sorted by P-value and a cutoff can be specified by the user. Our algorithm does not require fine-tuning of parameters for typical runs.
ChIP-nexus peak calling (a) Idealized example of a ChIP-nexus peak. The protein of interest (green) is bound via one cross-link to the DNA. The 5' ends are trimmed by exonuclease ('Pac-Man' symbols) up to the cross-link position. 5' end positions of mapped reads, depicted by red and blue arrows, are transformed to a qfrag coverage profile (purple) along the genome. Local maxima within the qfrag coverage are taken as summits. For each summit position s_i the number of 5' end positions within a range of q_max is determined and tested for statistical significance. b Comparison of 5' end and qfrag coverage profiles for Dorsal and Twist. 5' end (red and blue) and qfrag (purple) coverage profiles at the rho NEE enhancer for Dorsal and Twist (taken from IGV [29]). This region is also shown in the original ChIP-nexus publication [11]. Regions surrounded by clusters of 5' ends on the forward and reverse strand are selectively emphasized by the qfrag method. The qfrag coverage profiles demonstrate two clearly separated peaks for Dorsal and Twist
Evaluation: reproducibility of peak calling
To evaluate the reproducibility of our peak calling algorithm compared with that of MACS2 and MACE, we used a test framework based on the IDR procedure [15, 23], which has been heavily used to measure the reproducibility of ChIP-seq experiments [20] and should also be applicable to ChIP-nexus data. We performed the comparisons on pairs of biological replicates for five transcription factors (Table 1). We derived peak sets for each dataset using the three peak calling algorithms (see Methods). The peak sets were sorted by significance and the top 100,000 peaks were used for further analysis. The IDR procedure is essentially based on peak overlaps. Two predicted binding positions from two biological replicates were classified as overlapping if they had a distance of at most 3 bp, which is reasonable given the high resolution provided by the ChIP-nexus protocol.
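As an illustration of this overlap criterion, the following minimal Python sketch classifies summit positions from two replicates as overlapping when they are at most 3 bp apart; the function name and input format are ours and not part of the published pipeline.

import bisect

# Sketch only: summits_rep1 and summits_rep2 are lists of summit positions on the same
# chromosome; two positions are considered overlapping if they differ by at most max_dist bp.
def overlapping_pairs(summits_rep1, summits_rep2, max_dist=3):
    rep2_sorted = sorted(summits_rep2)
    pairs = []
    for i, pos in enumerate(summits_rep1):
        lo = bisect.bisect_left(rep2_sorted, pos - max_dist)
        hi = bisect.bisect_right(rep2_sorted, pos + max_dist)
        pairs.extend((i, j) for j in range(lo, hi))
    return pairs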
Figure 6 a–e show the results of the IDR procedure that were obtained for Twist. The top 100,000 peaks derived by Q-nexus display a substantially larger overlap compared to MACS2 and MACE (Fig. 6 a–c). It should also be noted that the Pearson correlation coefficients for signal scores of overlapping peaks of MACE are very low in comparison to those of Q-nexus and MACS2. The change of correspondence curve [15], which is used to visualize the transition from reproducible to irreproducible signals, shows that Q-nexus identifies the largest number of reproducible peaks before the transition occurs (Fig. 6 d). Furthermore, according to the IDR, Q-nexus identifies the largest number of reproducible peaks (Fig. 6 e). For all pairs of biological replicates tested, results similar to those for Twist were obtained (Fig. 6 f–g and Additional file 1: Figures S7–S16 and Tables S1 and S2).
Evaluation: Reproducibility of peak calling. a-c We applied the peak calling methods Q-nexus, MACS2, and MACE to a pair of biological replicates of Twist. The scatterplots show the scores of overlapping peaks of the top 100,000 peaks for different methods. The number of overlapping peaks and the Pearson correlation coefficient are given in the upper-left corner of each plot. Q-nexus yields the largest number of overlapping peaks and correlation coefficients of almost 1. d The change of correspondence curve shows that peaks derived by Q-nexus remain consistent for 10,000 more peaks than those of MACS2. e Q-nexus displays a considerably smaller proportion of irreproducible signals (0.01 < IDR) than MACS2. f, g We obtained similar results for the other ChIP-nexus datasets of the transcription factors Dorsal, Max, Myc and TBP. For more detailed results see Additional file 1: Figures S7 to S16
ChIP-nexus is an extension of the ChIP-exo protocol that was shown to outperform ChIP-exo with respect to resolution and specificity, and additionally requires less input material [11]. However, to date no bespoke software for ChIP-nexus analysis has been published, and the original analysis of the ChIP-nexus data was performed using scripts and software such as MACS2 that was originally designed for ChIP-seq [11]. In this work, we present an efficient and easy-to-use software pipeline for ChIP-nexus data that includes methods for preprocessing and mapping of ChIP-nexus reads, estimation of the protected-region width, as well as peak calling. We evaluated our methods on ten publicly available datasets.
One of the major advantages of the ChIP-nexus protocol is the use of random barcodes that allow monitoring of PCR overamplification. Our software recognizes random barcodes and selectively removes PCR-duplicated reads while retaining independent reads whose 5' ends map to the same position. Additionally, the random barcode information is used to generate a plot of duplication levels and to calculate various statistics that can be used for troubleshooting and optimization.
For ChIP-seq, the size distribution of the fragments needs to be estimated for most peak calling algorithms [4, 24, 25]. For ChIP-nexus, it is not the size of the original fragments that is important, but rather the segment of DNA that cannot be digested because of steric interference by the formaldehyde cross-linked protein (the "protected region"). We present a method for estimating the average width of the protected region that is based on the notion of qfrags [19] and show that, on the ChIP-nexus data, it yields unbiased signatures that are not affected by the so-called phantom peak [20], which is not the case for the cross-correlation method developed for ChIP-seq [2]. The estimates of the protected-region width are in line with distances that were derived in a previous study of the same data [11] using integrated 5' end coverage plots around predicted and motif-centered sites. Notably, our method derives signatures that are highly reproducible for biological replicates and specific for different factors.
We have previously developed a method using "qfrag-analysis" to identify candidate peaks in ChIP-seq analysis [19]. Here, we adapted that algorithm for ChIP-nexus analysis. We adopted the peak detection step using the qfrag coverage depth profile along the genome, but for ChIP-nexus data we keep duplicated reads, assuming that they originate from different molecules, and form qfrags using the average protected-region width instead of the fragment length. Regions bound by the protein of interest are surrounded by pile-ups of 5' ends of reads mapped to the forward and reverse strand and are therefore emphasized in the qfrag depth profile. This approach differs from previously published peak pairing methods for ChIP-exo [13, 14], in which peaks are detected separately for the forward and reverse strand and subsequently combined into pairs. The saturation-based method we presented for the evaluation of ChIP-seq analysis involved a statistical analysis of the number of positions within candidate ChIP-seq peaks to which one or more 5' read ends mapped. This approach is less suitable for ChIP-exo and ChIP-nexus analysis, in which multiple, independent reads are expected to map to the same position because of the exonuclease digestion. We therefore applied a statistical test based on a standard Poisson model of the count distribution. With respect to the IDR analysis framework applied to biological replicates, our results showed substantially better reproducibility than the other two methods we tested.
In this study, we present an integrated analysis pipeline implemented in C++ for the analysis of ChIP-nexus and ChIP-exo data. The pipeline begins with efficient methods for preprocessing ChIP-nexus reads that remove PCR duplicates by exploiting the information in the random barcodes included in ChIP-nexus adapters. This step is skipped for ChIP-exo analysis. We introduce an algorithm that creates pseudo-controls from the data, with which true signal can be differentiated from pseudo-peaks and which allows us to accurately estimate the width of the protected region. Our method then performs an analysis of the qfrag distribution to center candidate peaks, followed by a statistical analysis of the read depth distribution to identify peaks. We demonstrate that our method displays a higher reproducibility than other approaches to ChIP-nexus analysis. An efficient and easy-to-use implementation of our method is freely available at https://github.com/charite/Q.
Preprocessing of raw FASTQ reads
We implemented an efficient Q-nexus preprocessing application, flexcat (based on flexbar [26]), using the SeqAn C++ programming library [27]. flexcat removes the random and the fixed barcodes, inserts the random barcode into the ID field of the sequence (for instance, TL:ATGCC would be added to the description line of a sequence with the random barcode ATGCC), and clips adapter sequences. In ChIP-nexus reads, the random barcode is followed directly by a fixed four-nucleotide barcode. Reads that lack the fixed barcode or contain more than one mismatching nucleotide within it are discarded.
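The following Python sketch illustrates the barcode handling described above. It is not the flexcat implementation (which is a SeqAn C++ application); the 5 nt random-barcode length, the fixed-barcode sequence, and the function name are assumptions made for illustration only, and adapter clipping is omitted.

# Sketch only. We assume a 5 nt random barcode directly followed by a 4 nt fixed barcode;
# FIXED_BC is a placeholder sequence, not necessarily the protocol's fixed barcode.
RANDOM_BC_LEN = 5
FIXED_BC = "CTGA"
MAX_FIXED_MISMATCHES = 1     # reads with more than one mismatching nucleotide are discarded

def preprocess_read(read_id, seq, qual):
    random_bc = seq[:RANDOM_BC_LEN]
    fixed_bc = seq[RANDOM_BC_LEN:RANDOM_BC_LEN + len(FIXED_BC)]
    if len(fixed_bc) < len(FIXED_BC):
        return None                                   # no fixed barcode present: discard
    mismatches = sum(a != b for a, b in zip(fixed_bc, FIXED_BC))
    if mismatches > MAX_FIXED_MISMATCHES:
        return None                                   # too many mismatches: discard
    new_id = f"{read_id} TL:{random_bc}"              # carry the random barcode in the read ID
    start = RANDOM_BC_LEN + len(FIXED_BC)
    return new_id, seq[start:], qual[start:]          # adapter clipping omitted in this sketch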
Read mapping
In principle, Q-nexus can be used with any read mapper, whereby only uniquely mappable reads should be used for downstream analysis. In the experiments described in this work, we used bowtie [16] version 1.1.2 with the settings --chunkmbs 512, -k 1, -m 1, --best, --strata.
Processing of aligned reads
We implemented an efficient tool called nexcat that scans BAM files in order to identify sets of PCR duplicates. The random barcode from the FASTQ description line is carried over into the BAM file, and can thus be used to distinguish PCR duplicates from IMUB reads. PCR duplicates are identified as sets of reads with an identical random barcode whose 5' terminus is located at the same genomic position; all but one of the reads in each such set are discarded. Since Q-nexus peak calling utilizes only the 5' end of each read, it does not matter which read is retained.
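A minimal Python sketch of this duplicate-removal logic follows (illustration only; nexcat itself is a C++ tool operating on BAM files). Reads are assumed to be given as tuples carrying the mapped 5' position, strand, and random barcode; the function name is ours.

# Sketch only: reads sharing chromosome, 5' position, strand and random barcode are PCR duplicates.
def remove_pcr_duplicates(reads):
    seen = set()
    kept = []
    for chrom, pos5, strand, barcode, record in reads:
        key = (chrom, pos5, strand, barcode)
        if key in seen:
            continue              # identical 5' position and identical barcode: PCR duplicate
        seen.add(key)
        kept.append(record)       # which member of the set is kept does not matter
    return kept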
Assessment of Q-nexus duplication levels
Our method has three different ways of defining "duplication". Identically mapped (IM) reads are simply reads whose 5' end maps to the same chromosomal location. The information in the random barcodes is not relevant for the determination of IM reads. Identically mapped with identical barcode (IMIB) reads are defined as IM reads that additionally have an identical random barcode. The ChIP-nexus analysis pipeline [11] removes all IMIB reads except one at each position. We name the remaining reads identically mapped with unique barcode (IMUB) reads (in the original publication, these reads were named "usable reads"). The duplication plots shown in Fig. 2 b–c provide an overview of IM, IMIB, and IMUB reads according to the proportion of reads that have a given duplication level (Fig. 2 a and Additional file 1: Figure S1). We calculate an overall duplication level as the proportion of reads with a duplication level of two or more among all reads. Table 2 shows the overall duplication levels according to each of the three definitions of "duplication".
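Assuming the same tuple representation as in the sketch above, the following Python sketch shows how the overall duplication level could be computed under the three definitions; names and data layout are ours, not the tool's.

from collections import Counter

# Sketch only: reads is a list of (chrom, five_prime_pos, strand, barcode) tuples.
def duplication_summary(reads):
    pos_counts = Counter((c, p, s) for c, p, s, b in reads)          # IM
    pos_bc_counts = Counter((c, p, s, b) for c, p, s, b in reads)    # IMIB
    imub_reads = list(pos_bc_counts.keys())                          # one read per (position, barcode)
    imub_counts = Counter((c, p, s) for c, p, s, b in imub_reads)    # IMUB

    def frac_duplicated(counts, total):
        # Overall duplication level: fraction of reads with a duplication level of two or more.
        return sum(n for n in counts.values() if n >= 2) / total

    return {
        "IM": frac_duplicated(pos_counts, len(reads)),
        "IMIB": frac_duplicated(pos_bc_counts, len(reads)),
        "IMUB": frac_duplicated(imub_counts, len(imub_reads)),
    }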
Binding characteristics: qfrag length distribution and pseudo control
We refer to 5' end positions of reads that map either to the forward or the reverse strand as hits. The outcome of a ChIP-nexus experiment is modeled as a set of hits:
$${} T=\{\ h=(\text{pos},\text{strand})\ |\ \text{pos} \in \{1,\ldots,l\} \wedge \text{strand} \in \{f,r\}\}, $$
where l is the length of the chromosome. A qfrag is defined to be the genomic interval between an ordered pair of hits (h_i, h_j), such that h_i is on the forward strand and h_j is on the reverse strand. For the distribution of qfrag lengths, qfrags of fixed length are considered, and for each length δ=2,…,Δ the number of qfrags is determined:
$$Q_{t}(\delta)=|\{(h_{i},h_{j})\ |\ h_{i} \in T_{f}\ \land\ h_{j} \in T_{r}\ \land h_{j}-h_{i}=\delta\}|. $$
The pseudo-control is derived from the original data by inverting the strand information for each given hit, i.e.,
$$\begin{array}{@{}rcl@{}} h^{\prime}.strand:=\left\{ \begin{array}{ll} f,& \text{if}\ h.strand=r\\ r, & \text{otherwise} \end{array}\right. \end{array} $$
and subsequently shifting the (strand-inverted) hit by one read length rl towards the 5' end, i.e.
$$\begin{array}{@{}rcl@{}} h^{\prime}.pos:=\left\{ \begin{array}{ll} h.pos+rl-1,& \text{if}\ h^{\prime}.strand=r\\ h.pos-rl+1, & \text{otherwise} \end{array}\right. \end{array} $$
The distribution of qfrag-length in the pseudo-control is defined as before:
$$Q_{p}(\delta)=|\{(h^{\prime}_{i},h^{\prime}_{j})\ |\ h^{\prime}_{i} \in T^{\prime}_{f}\ \land\ h^{\prime}_{j} \in T^{\prime}_{r}\ \land h^{\prime}_{j}-h^{\prime}_{i}=\delta\}|. $$
We use the difference between Q_t(δ) and Q_p(δ) as the signature, and the length δ at which this difference is maximal as the estimate of the protected-region width ℓ''', i.e.
$$\ell^{\prime\prime\prime}=\underset{\delta}{\text{arg max}}\,\left[Q_{t}(\delta)-Q_{p}(\delta)\right]. $$
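A minimal Python sketch of this signature computation follows (illustration only; the published implementation is in C++). Hits are (pos, strand) tuples with strand in {"f", "r"}, rl denotes the read length, and the function names are ours.

import bisect
from collections import Counter

def qfrag_length_distribution(hits, max_delta):
    # Count qfrags (ordered forward/reverse 5' end pairs) of length delta = 2..max_delta.
    fwd = sorted(p for p, s in hits if s == "f")
    rev = sorted(p for p, s in hits if s == "r")
    q = Counter()
    for p in fwd:
        lo = bisect.bisect_left(rev, p + 2)
        hi = bisect.bisect_right(rev, p + max_delta)
        for r in rev[lo:hi]:
            q[r - p] += 1
    return q

def pseudo_control(hits, rl):
    # Invert the strand of each hit and shift it by one read length towards the 5' end.
    control = []
    for p, s in hits:
        if s == "f":
            control.append((p + rl - 1, "r"))
        else:
            control.append((p - rl + 1, "f"))
    return control

def estimate_protected_region_width(hits, rl, max_delta=110):
    # l''' = arg max over delta of Q_t(delta) - Q_p(delta).
    q_t = qfrag_length_distribution(hits, max_delta)
    q_p = qfrag_length_distribution(pseudo_control(hits, rl), max_delta)
    return max(range(2, max_delta + 1), key=lambda d: q_t[d] - q_p[d])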
5' end coverage around motif centered binding sites
For each dataset, summits were derived using Q [19] with the following parameter settings: --fragment-length-average 15, --fragment-length-deviation 10, --keep-dup, and sorted by significance. The top 2,000 peaks (summit position ± 40) were used for a de novo motif analysis with DREME [28] using default settings. The top 30,000 peaks were filtered for those with an occurrence of the top motif within a distance of at most the length of the motif. Finally, the selected summits were centered on the center of the motif occurrence. Around the motif-filtered and centered sites, the integrated distribution of 5' ends was determined using Q with the following parameter settings: --bed-hit-dist <CENTERED_SITES_BED>, --keep-dup, --pseudo-control.
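For illustration, the following minimal Python sketch aggregates strand-specific 5' end coverage around centered sites; the input format and function name are assumptions, and the actual computation is performed by Q as described above.

from collections import Counter

# Sketch only: sites is a list of (chrom, center) motif-centered summit positions and
# hits_by_chrom maps each chromosome to a list of (pos, strand) 5' end positions.
def coverage_around_sites(sites, hits_by_chrom, half_window=40):
    fwd, rev = Counter(), Counter()
    for chrom, center in sites:
        for pos, strand in hits_by_chrom.get(chrom, []):
            offset = pos - center
            if -half_window <= offset <= half_window:
                (fwd if strand == "f" else rev)[offset] += 1
    # Plotting fwd and rev against the offset gives the two maxima whose distance is read off.
    return fwd, rev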
Cross-correlation analysis
We performed cross-correlation analysis using the function get.binding.characteristics of the SPP package (version 1.11) with the following parameter settings: srange=c(2,110),bin=1.
The predicted fragment lengths of MACS2 were derived in the course of peak calling with parameter settings as stated below.
The optimal border pair sizes of MACE were derived in the course of peak calling with parameter settings as stated below.
qfrag-length distribution with pseudo-control
We derived qfrag-length distributions using Q with the following parameter settings: --qfrag-length-distribution, --step-num 110, --keep-dup.
We use q_min = ℓ'''−x and q_max = ℓ'''+x to form qfrags from all hits on the forward and reverse strand that satisfy q_min ≤ h_j.pos − h_i.pos ≤ q_max. We used a value of x=5 by default; x is a parameter that controls the allowed deviation from the estimated protected-region width.
We calculate the depth of qfrags at any given position and search the qfrag coverage profile along the genome for free-standing local maxima, which we refer to as summits and which correspond to predicted binding positions; free-standing means that there is no position with a higher qfrag depth within a radius of q_min. For each summit s_i the number of 5' ends within the range s_i − q_max, …, s_i + q_max, denoted as k, is determined. Assuming a null model in which reads are evenly distributed across the genome, P-values are calculated using the Poisson distribution.
$$P(x \geq k)=1-\sum_{i=0}^{k-1}\text{Pois}(i,\lambda) $$
$$\lambda=2 \cdot q_{max} \cdot \frac{|T_{f}|+|T_{r}|}{l} $$
All regions covered by at least one qfrag are tested. P-values are corrected for multiple testing using the Benjamini-Hochberg procedure [22].
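The following Python sketch puts the peak-calling steps above together (qfrag coverage, free-standing summit detection, 5' end counting, Poisson P-values, and Benjamini-Hochberg correction). It is an illustration under assumed data structures, not the C++ implementation in Q; SciPy and NumPy are used for convenience, and all function names are ours.

import bisect
import numpy as np
from scipy.stats import poisson

# Sketch only. fwd/rev are sorted lists of 5' end positions (1-based) on the forward and
# reverse strand of one chromosome; chrom_len is the chromosome length l.
def qfrag_coverage(fwd, rev, q_min, q_max, chrom_len):
    diff = np.zeros(chrom_len + 2, dtype=np.int64)
    for p in fwd:
        lo = bisect.bisect_left(rev, p + q_min)
        hi = bisect.bisect_right(rev, p + q_max)
        for r in rev[lo:hi]:
            diff[p] += 1          # each qfrag covers the interval [p, r]
            diff[r + 1] -= 1
    return np.cumsum(diff)[: chrom_len + 1]

def summits(coverage, q_min):
    # Free-standing local maxima: no position with a higher qfrag depth within radius q_min.
    # (Plateaus yield several candidate summits in this simplified version.)
    result = []
    for pos in range(1, len(coverage)):
        depth = coverage[pos]
        if depth == 0:
            continue
        lo, hi = max(0, pos - q_min), min(len(coverage), pos + q_min + 1)
        if depth == coverage[lo:hi].max():
            result.append(pos)
    return result

def summit_5prime_count(summit, fwd, rev, q_max):
    # Number k of 5' ends (both strands) within s_i - q_max .. s_i + q_max.
    count = 0
    for ends in (fwd, rev):
        lo = bisect.bisect_left(ends, summit - q_max)
        hi = bisect.bisect_right(ends, summit + q_max)
        count += hi - lo
    return count

def summit_pvalues(k_values, n_fwd, n_rev, q_max, chrom_len):
    # Expected count under a uniform read distribution: lambda = 2 * q_max * (|T_f|+|T_r|) / l.
    lam = 2.0 * q_max * (n_fwd + n_rev) / chrom_len
    return [poisson.sf(k - 1, lam) for k in k_values]   # P(X >= k)

def benjamini_hochberg(pvalues):
    # Benjamini-Hochberg adjusted P-values.
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    prev = 1.0
    for rank_from_end, idx in enumerate(reversed(order)):
        rank = m - rank_from_end
        prev = min(prev, pvalues[idx] * m / rank)
        adjusted[idx] = prev
    return adjusted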
IDR analyses were performed with parameter settings recommended for pairs of biological replicates. Peak lists were derived for Q-nexus, MACS2 and MACE as stated below.
Peak calling parameters for Q-nexus
We used version 1.3.0 of Q with the following parameter settings: --nexus-mode, --top-n 200,000. Q-nexus predicts single binding positions or summits. The summits were extended by 2 bp in upstream and downstream direction, sorted by signal score, i.e. the number of 5' ends that map for a given summit s_i into the range s_i − q_max, …, s_i + q_max, and only the top 100,000 were kept.
Peak calling parameters for MACS2
We used version 2.1.0.20150731 of MACS2 in the callpeak mode with the following parameter settings: --keep-dup all, --pvalue 5e-01, --call-summits. Furthermore, we used --gsize dm for Drosophila and --gsize hs for human to specify the size of the genome. MACS2 tends to combine multiple adjacent summits into broader peaks and only reports the highest summit position, but the option --call-summits causes MACS2 to report all summits. The summits were extended by 2 bp in upstream and downstream direction, sorted by P-value, and only the top 100,000 were kept.
Peak calling parameters for MACE
We used version 1.2 of MACE. The python script for preprocessing was used with the following parameter settings: --kmerSize 0, which turns off the nucleotide bias correction. We did this because, according to the implementation, the length of each read has to be greater than three times the k-mer size, and discarding all (clipped) reads shorter than 19 would lead to a significant loss of information. The python script for peak calling was used with the following parameter settings: --pvalue 0.99. The MACE algorithm does not report peaks, but border pairs, i.e. pairs of peaks on the forward and reverse strand with a distance that approximates an optimal border pair size estimated from the data. We defined the centers between the border pairs as summits. The summits were extended by 2 bp in upstream and downstream direction, sorted by P-value, and only the top 100,000 were kept.
Bp:
Base pairs
IM:
Identically mapped
IMIB:
Identically mapped with identical barcode
IMUB:
Identically mapped with unique barcode
IDR:
Irreproducible discovery rate
Robertson G, Hirst M, Bainbridge M, Bilenky M, Zhao Y, Zeng T, Euskirchen G, Bernier B, Varhol R, Delaney A, Thiessen N, Griffith OL, He A, Marra M, Snyder M, Jones S. Genome-wide profiles of STAT1 DNA association using chromatin immunoprecipitation and massively parallel sequencing. Nat Methods. 2007; 4(8):651–7. doi:10.1038/nmeth1068.
Kharchenko PV, Tolstorukov MY, Park PJ. Design and analysis of ChIP-seq experiments for DNA-binding proteins. Nat Biotechnol. 2008; 26(12):1351–9. doi:10.1038/nbt.1508.
Valouev A, Johnson DS, Sundquist A, Medina C, Anton E, Batzoglou S, Myers RM, Sidow A. Genome-wide analysis of transcription factor binding sites based on ChIP-Seq data. Nat Methods. 2008; 5(9):829–34. doi:10.1038/nmeth.1246.
Furey TS. ChIP-seq and beyond: new and improved methodologies to detect and characterize protein-DNA interactions. Nat Rev Genet. 2012; 13(12):840–52. doi:10.1038/nrg3306.
Rhee HS, Pugh BF. Comprehensive genome-wide protein-DNA interactions detected at single-nucleotide resolution. Cell. 2011; 147(6):1408–19. doi:10.1016/j.cell.2011.11.013.
Rhee HS, Pugh BF. ChIP-exo method for identifying genomic location of DNA-binding proteins with near-single-nucleotide accuracy. Curr Protoc Mol Biol. 2012; Chapter 21:21–4. doi:10.1002/0471142727.mb2124s100.
Serandour AA, Brown GD, Cohen JD, Carroll JS. Development of an Illumina-based ChIP-exonuclease method provides insight into FoxA1-DNA binding properties. Genome Biol. 2013; 14(12):147. doi:10.1186/gb-2013-14-12-r147.
Wales S, Hashemi S, Blais A, McDermott JC. Global MEF2 target gene analysis in cardiac and skeletal muscle reveals novel regulation of DUSP6 by p38MAPK-MEF2 signaling. Nucleic Acids Res. 2014; 42(18):11349–62. doi:10.1093/nar/gku813.
Svensson JP, Shukla M, Menendez-Benito V, Norman-Axelsson U, Audergon P, Sinha I, Tanny JC, Allshire RC, Ekwall K. A nucleosome turnover map reveals that the stability of histone H4 Lys20 methylation depends on histone recycling in transcribed chromatin. Genome Res. 2015; 25(6):872–83. doi:10.1101/gr.188870.114.
Starick SR, Ibn-Salem J, Jurk M, Hernandez C, Love MI, Chung HR, Vingron M, Thomas-Chollier M, Meijsing SH. ChIP-exo signal associated with DNA-binding motifs provides insight into the genomic binding of the glucocorticoid receptor and cooperating transcription factors. Genome Res. 2015; 25(6):825–35. doi:10.1101/gr.185157.114.
He Q, Johnston J, Zeitlinger J. ChIP-nexus enables improved detection of in vivo transcription factor binding footprints. Nat Biotechnol. 2015; 33(4):395–401. doi:10.1038/nbt.3121.
Feng J, Liu T, Qin B, Zhang Y, Liu XS. Identifying ChIP-seq enrichment using MACS. Nat Protoc. 2012; 7(9):1728–40. doi:10.1038/nprot.2012.101.
Wang L, Chen J, Wang C, Uusküla-Reimand L, Chen K, Medina-Rivera A, Young EJ, Zimmermann MT, Yan H, Sun Z, Zhang Y, Wu ST, Huang H, Wilson MD, Kocher J-PA, Li W. MACE: model based analysis of ChIP-exo. Nucleic Acids Res. 2014; 42(20):156. doi:10.1093/nar/gku846.
Madrigal P. CexoR: an R/bioconductor package to uncover high-resolution protein-DNA interactions in ChIP-exo replicates. EMBnet.journal. 2015; 21:e837. Available at: http://journal.embnet.org/index.php/embnetjournal/article/view/837/1225. Accessed 24 Oct 2016.
Li Q, Brown J, Huang H, Bickel P. Measuring reproducibility of high-throughput experiments. Ann Appl Stat. 2011; 5(3):1752–79.
Langmead B, Trapnell C, Pop M, Salzberg SL. Ultrafast and memory-efficient alignment of short DNA sequences to the human genome. Genome Biol. 2009; 10(3):25. doi:10.1186/gb-2009-10-3-r25.
Daley T, Smith AD. Predicting the molecular complexity of sequencing libraries. Nat Methods. 2013; 10(4):325–7. doi:10.1038/nmeth.2375.
Andrews SR. FastQC: a quality control tool for high throughput sequence data. 2010. http://www.bioinformatics.babraham.ac.uk/projects/fastqc. Accessed 1 Aug 2016.
Hansen P, Hecht J, Ibrahim DM, Krannich A, Truss M, Robinson PN. Saturation analysis of ChIP-seq data for reproducible identification of binding peaks. Genome Res. 2015; 25(9):1391–400. doi:10.1101/gr.189894.115.
Landt S, Marinov G, Kundaje A, Kheradpour P, Pauli F, Batzoglou S, Bernstein B, Bickel P, Brown J, Cayting P, Chen Y, Desalvo G, Epstein C, Fisher-Aylor K, Euskirchen G, Gerstein M, Gertz J, Hartemink A, Hoffman M, Iyer V, Jung Y, Karmakar S, Kellis M, Kharchenko P, Li Q, Liu T, Liu X, Ma L, Milosavljevic A, Myers R, Park P, Pazin M, Perry M, Raha D, Reddy T, Rozowsky J, Shoresh N, Sidow A, Slattery M, Stamatoyannopoulos J, Tolstorukov M, White K, Xi S, Farnham P, Lieb J, Wold B, Snyder M. ChIP-seq guidelines and practices of the ENCODE and modENCODE consortia. Genome Res. 2012; 22(9):1813–31. doi:10.1101/gr.136184.111.
Carroll TS, Liang Z, Salama R, Stark R, de Santiago I. Impact of artifact removal on ChIP quality metrics in ChIP-seq and ChIP-exo data. Front Genet. 2014; 5(April):75. doi:10.3389/fgene.2014.00075.
Benjamini Y, Hochberg Y. Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. J R Stat Soc Series B (Methodological). 1995; 57(1):289–300. doi:10.2307/2346101.
Li Q, Brown B, Huang H, Bickel P. IDR analysis 101 Measuring consistency between replicates in high-throughput experiments. 2010.
Park PJ. ChIP-seq: advantages and challenges of a maturing technology. Nat Rev Genet. 2009; 10(10):669–80. doi:10.1038/nrg2641.
Ma W, Wong WH. The analysis of ChIP-Seq data. Methods Enzymol. 2011; 497:51–73. doi:10.1016/B978-0-12-385075-1.00003-2.
Dodt M, Roehr JT, Ahmed R, Dieterich C. Flexbar-flexible barcode and adapter processing for next-generation sequencing platforms. Biology (Basel). 2012; 1(3):895–905. doi:10.3390/biology1030895.
Döring A, Weese D, Rausch T, Reinert K. SeqAn an efficient, generic C++ library for sequence analysis. BMC Bioinformatics. 2008; 9:11. doi:10.1186/1471-2105-9-11.
Bailey TL. DREME: motif discovery in transcription factor ChIP-seq data. Bioinformatics (Oxford). 2011; 27(12):1653–9. doi:10.1093/bioinformatics/btr261.
Robinson JT, Thorvaldsdóttir H, Winckler W, Guttman M, Lander ES, Getz G, Mesirov JP. Integrative genomics viewer. Nat Biotechnol. 2011; 29(1):24–6. doi:10.1038/nbt.1754.
We would like to thank Jeffrey Johnston of the Zeitlinger lab who tested the Q-nexus software.
This project was supported by the Bundesministerium für Bildung und Forschung (BMBF; project no. 0313911 and 13GW0099) and the European Community's Seventh Framework Programme (grant agreement no. 602300; SYBIL). Furthermore, we acknowledge support of the Spanish Ministry of Economy and Competitiveness, 'Centro de Excelencia Severo Ochoa 2013-2017'.
All the genomic data used for analyses are freely available to be downloaded from the GEO repository (GSE55306). Additional tools used for analyses are freely available and details are given in the methods section.
PH conceived and developed the methods for the estimation of the protected-region width and ChIP-nexus peak calling, conducted the reproducibility analysis, and wrote the manuscript. BSM and SR conceived and developed the algorithms for the preprocessing of ChIP-nexus data. MT, JH and JIS assisted with the analysis in the paper and provided biological insights into the ChIP-nexus procedure. PNR helped develop the algorithm, supervised the analysis, and wrote the manuscript. All authors read and approved the final manuscript.
Institute for Medical and Human Genetics, Charité-Universitätsmedizin Berlin, Augustenburger Platz 1, Berlin, 13353, Germany
Peter Hansen, Benjamin S. Menkuec & Peter N. Robinson
Berlin Brandenburg Center for Regenerative Therapies (BCRT), Charité-Universitätsmedizin Berlin, Augustenburger Platz 1, Berlin, 13353, Germany
Peter Hansen & Peter N. Robinson
Centre for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Dr. Aiguader 88, Barcelona, 08003, Spain
Jochen Hecht
Universitat Pompeu Fabra (UPF), Barcelona, Spain
Faculty of Biology, Johannes Gutenberg University Mainz, Ackermannweg 4, Mainz, 55128, Germany
Jonas Ibn-Salem
Institute of Molecular Biology, Ackermannweg 4, Mainz, 55128, Germany
Institute for Bioinformatics, Department of Mathematics and Computer Science, Freie Universität Berlin, Arnimallee 14, Berlin, 14195, Germany
Sebastian Roskosch & Peter N. Robinson
Labor für Pädiatrische Molekularbiologie, Charité-Universitätsmedizin Berlin, Augustenburger Platz 1, Berlin, 13353, Germany
Matthias Truss
Max Planck Institute for Molecular Genetics, Inhestr. 63-73, Berlin, 14195, Germany
Peter N. Robinson
Current address: The Jackson Laboratory for Genomic Medicine, 10 Discovery Drive, Farmington, 06032, CT, USA
Peter Hansen
Benjamin S. Menkuec
Sebastian Roskosch
Correspondence to Peter N. Robinson.
Supplementary figures and tables. The following additional data are available with the online version of this paper. Additional data file 1 contains an explanatory figure for duplication levels as well as figures and tables for additional analyses including duplication rate plots, examples for mapping artifacts, 5' end coverage around motif centered binding sites, cross-correlation plots, qfrag-length distributions, scatterplots of signal scores of overlapping peaks and corresponding IDR plots, as well as two tables containing the total numbers of overlapping peaks and overlapping peaks with IDR ≤ 0.01 for all pairs of biological replicates. (PDF 3840 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Hansen, P., Hecht, J., Ibn-Salem, J. et al. Q-nexus: a comprehensive and efficient analysis pipeline designed for ChIP-nexus. BMC Genomics 17, 873 (2016). https://doi.org/10.1186/s12864-016-3164-6
ChIP-nexus
ChIP-exo
Duplication rates
Library complexity
The Transactions of The Korean Institute of Electrical Engineers (전기학회논문지)
The Korean Institute of Electrical Engineers (대한전기학회)
Power Systems; Power T and D Equipment; Power Economics; Electrical Machinery; Electrical Power Electronics; Electrical New Traffic Systems; Electrical Alternative Energy Systems; Electrical Machines and Semiconductors; High Voltage and Discharge Applications; Photoelectrons and Electromagnetic waves; MEMS; Control and Instrumentation; Robotics and Automation; Computers and Artificial Intelligence; Signal Processing and Communication Systems; Biomedical Engineering.
http://www.trans.kiee.or.kr/
A Study on the Transformer Spare Capacity in the Existing Apartments for the Future Growth of Electric Vehicles
Choi, Jihun;Kim, Sung-Yul;Lee, Ju 1949
https://doi.org/10.5370/KIEE.2016.65.12.1949
Rapid expansion of EVs (Electric Vehicles) is an inevitable trend, both to comply with the eco-friendly energy paradigm of the Paris Agreement and to address environmental problems such as global warming. In this paper, we analyze the limit of acceptable transformer capacity as power demand increases with the spread of EVs in the near future. Through an analysis of transformer utilization, we suggest methods to evaluate the spare capacity of transformers for optimal-efficiency operation and for emergency operation. Analyzing existing sample apartments, we find a spare capacity of 18.4~29% of the rated transformer capacity available for the charging infrastructure. The acceptable number of EVs is estimated at 0.09~0.14 for optimal-efficiency operation and 0.06~0.13 for emergency operation. Given a projected annual EV growth rate of 112.5%, based on the current growth rate and the government's EV supply policy, the power demand of EVs is expected to exceed the existing transformer spare capacity within 7~8 years.
The Analysis of 4-Conductors Catenary System of AC Railway Feeding System and Calculation of Induced Voltage near Rail Track using the FDTD Method
Ryu, Kyu-Sang;Yeom, Hyoung-Sun;Cho, Gyu-Jung;Lee, Hun-Do;Kim, Cheol-Hwan 1958
AC railway feeding systems use a single phase to supply power to railway vehicles and use the rail track as the return current path, so that current flows in the rails. This can cause inductive interference with communication systems and create unsafe conditions in the railway system. Therefore, knowing the current distribution of the catenary system and analysing the return current is required. In this study, the detailed return current distribution was analyzed by modeling the catenary system as a group of four conductors. The distribution characteristics and trends of the return current were studied using PSCAD/EMTDC, and the FDTD method, which is based on Maxwell's equations, was used to calculate the induced voltage. The simulation code was written in MATLAB. Using these results, proper installation locations for digital devices can be suggested, and the data can serve as a basis for additional studies related to return current or induced voltage.
Optimal Congestion Management Based on Sensitivity in Power System with Wind Farms
Choi, Soo-Hyun;Kim, Kyu-Ho 1965
This paper studies generator rescheduling technique for congestion management in power system with wind farms. The proposed technique is formulated to minimize the rescheduling cost of conventional and wind generators to alleviate congestion subject to operational line overloading. The generator rescheduling method has been used with incorporation of wind farms in the power system. The locations of wind farms are selected based upon power transfer distribution factor (PTDF). Because all generators in the system do not need to participate in congestion management, the rescheduling has been done by generator selection based on the proposed generator sensitivity factor (GSF). The selected generators have been rescheduled using linear programming(LP) optimization techniques to alleviate transmission congestion. The effectiveness of the proposed methodology has been analyzed on IEEE 14-bus systems.
Improvements of Grounding Performances Associated with Soil Ionization under Impulse Voltages
Kim, Hoe-Gu;Lee, Bok-Hee 1971
In this paper, electrical and physical characteristics associated with the ionization growth of soil under impulse voltages in a coaxial cylindrical electrode system to simulate a horizontally-buried ground electrode were experimentally investigated. The results were summarized as follows: Transient ground resistances decreased significantly by soil ionization. The voltage-current (V-I) curves for non-ionization in soil lined up in a straight line with the nearly same slope that is the ground resistance, but they showed a 'cross-closed loop' of ${\infty}$-shape under ionization. The conventional ground resistance and equivalent soil resistivity were inversely proportional to the peak value of injected impulse currents. On the other hand, the equivalent ionization radius and time-lag to the maximum value of ionization radius were increased with increasing the incident impulse voltages. An analysis method for the transient ground resistances of the ground electrode based on the ionization phenomena was proposed. The proposed method can be applied to analyze the transient performances of grounding systems for lightning protection in power system installations.
Voltage-controlled Over-current Relay for Loop-connected Distributed Generators
Kim, Tae-Hee;Kang, Sang-Hee 1979
A protection algorithm using a voltage-controlled overcurrent element for a looped collection circuit in a wind farm is suggested in this paper. Because the proposed algorithm uses voltage relaying signals as well as current relaying signals, any fault in the looped collection circuit can be cleared by voltage-controlled overcurrent relays located at the two adjacent relaying points, the nearest place in each direction from the fault point. The algorithm can also distinguish the external faults which occur at the outside of a wind farm from the internal faults. It means that the proposed algorithm can provide the proper ability of protection coordination to the relays in the looped collection circuits of a large wind farm. The performance of the proposed algorithm is verified under various fault conditions using PSCAD/EMTDC simulations.
Correlation Analysis for Electormagnetic Vibration Source and RMF of Small IPMSM
Lee, Won-Sik;Cho, Gyu-Won;Jun, Byung-Kil;Kim, Gyu-Tak 1986
The vibration sources of a motor have electromagnetic and mechanical causes. The most widely known electromagnetic causes are cogging torque and RMF (radial magnetic force). Recently, the cogging torque has been analyzed actively, but the RMF has received less attention, so this paper analyzes the RMF. Vibration tests were performed for the basic model and for models with reduced cogging torque and RMF, and the effect of each factor on the vibration was analyzed. Finally, the vibration was formulated in terms of the stator's weight and the RMF. To this end, the natural, cogging torque and RMF frequencies were analyzed and their relationships were considered.
Analysis of the Gain Characteristic in LLCC Resonant Converter for Plasma Power Supply
Kwon, Min-Jun;Kim, Tae-Hun;Lee, Woo-Cheol 1992
The plasma process is applied in various industrial fields such as the high-tech IT industry, textiles, and medicine. Therefore, interest in plasma power supplies is increasing, and the demand for power devices with high efficiency and high power density is growing. A plasma power supply for processing must cope with the arc problem that occurs when the plasma is unstable. The output capacitor is closely related to the arc problem: the smaller the output capacitor, the smaller the damage caused by an arc. However, a small output capacitor affects the operating characteristics of the power supply. In this paper, an LLC resonant converter is adopted because it can achieve high efficiency and power density in a plasma DC power supply. However, due to the small value of the output capacitor, the converter operates as an LLCC resonant converter. Therefore, the gain characteristic of the LLCC resonant converter in a plasma power supply is analyzed using the FHA (First Harmonic Approximation). Simulation and experimental results are presented to verify the characteristic analysis of the LLCC resonant converter.
Critical Conduction Mode Bridgeless PFC Converter Based on a Digital Control
Kim, Tae-Hun;Lee, Woo-Cheol 2000
Generally, in order to implement CRM (Critical Conduction Mode), an analog controller is used rather than a digital controller because the control is simple and consumes less power. However, with the development of semiconductor technology and various user needs, digital control systems based on a DSP are on the rise. Therefore, in this paper, a CRM bridgeless PFC converter based on digital control is proposed. Normally, it is necessary to detect when the inductor current reaches zero and its peak value, and to calculate the on-time and off-time from this current information. In this paper, however, the on-time and off-time are calculated by the proposed algorithm without any current information. If the switching times are calculated from a steady-state analysis of the converter, they do not reflect transient conditions such as start-up; the calculated frequency then goes out of range and a transient current is generated. To solve this problem, the on-time and off-time are limited, and the limits are varied according to the voltage reference. In addition, in steady state the effective inductance varies with the switching frequency because of the resonance between the inductor and the parasitic capacitance of the switching elements. To address this, the inductance was measured as a function of the switching frequency, and the measured values are used to calculate the switching times and prevent transient currents. Simulation and experimental results are presented to verify the proposed method.
The Vibration Suppression using Reactive Power Compensator for Speed Control of Parallel Connected Dual Fan Motors fed by a Single Inverter
Yun, Chul;Kwon, Woo-Hyen;Cho, Nae-Soo 2008
This paper proposes an analysis of, and a suppression method for, the reactive power vibration of the slave motor caused by the back-EMF mismatch between the master and slave motors and the stator resistance during middle-low speed operation. The master and slave motors are parallel-connected dual SPMSMs (Surface mounted Permanent Magnet Synchronous Motors) fed by a single inverter. To suppress the vibration of reactive power, the RPC (Reactive Power Compensator) proposed in this paper analyzes the flux-axis current vibration of the slave motor that occurs in middle-low speed operation using a mathematical model of the fan motor, and adds the vibration components detected in the flux-axis current of the slave motor to the flux-axis current of the master motor. The experimental results verify the efficacy of the proposed method.
A Study on the Characteristics of NiInZnO/Ag/NiInZnO Multilayer Thin Films Deposited by RF/DC Magnetron Sputter According to the Thickness of Ag Insertion Layer
Kim, Nam-Ho;Kim, Eun-Mi;Heo, Gi-Seok;Yeo, In-Seon 2014
Transparent, conductive electrode films, showing the particular characteristics of good conductivity and high transparency, are of considerable research interest because of their potential for use in opto-electronic applications, such as smart window, photovoltaic cells and flat panel displays. Multilayer transparent electrodes, having a much lower electrical resistance than widely-used transparent conducting oxide electrodes, were prepared by using RF/DC magnetron sputtering system. The multilayer structure consisted of three layers, [NiInZnO(NIZO)/Ag/NIZO]. The optical and electrical properties of the multilayered NIZO/Ag/NIZO structure were investigated in relation to the thickness of each layer. The optical and electrical characteristics of multilayer structures have been investigated as a function of the Ag and NIZO film thickness. High-quality transparent conductive films have been obtained, with sheet resistance of $9.8{\Omega}/sq$ for Ag film thickness of 8 nm. Also the multilayer films of inserted Ag 8 nm thickness showed a high optical transmittance above 93% in the visible range. The electrical and optical properties of the new multilayer films were mainly dependent on the thickness of Ag insertion layer.
Electromagnetic Compatibility Study of a Medical Lead for MRI Systems
Yoo, Hyoungsuk 2019
In the presence of an electrically conducting medical lead, radio frequency (RF) coils in magnetic resonance imaging (MRI) systems may concentrate the RF energy and cause tissue heating near the lead. A novel design for a medical lead to reduce this heating by introducing pins in the lead is presented. Peak 10 g specific absorption rate (SAR) in heart tissue, an indicator of heating, was calculated and compared for both conventional (Medtronic) lead design and our proposed design. Remcom XFdtd software was used to calculate the peak SAR distribution in a realistic model of the human body. The model contained a medical lead that was exposed to RF magnetic fields at 64 MHz (1.5 T), 128 MHz (3 T) and 300 MHz (7 T) using a model of an MR birdcage body coil. The proposed design of adding pins to the medical lead can significantly reduce the heating from different MRI systems.
Optimal Linearization-Based Robust Controller Design for Underwater Glider
Moon, Ji Hyun;Lee, Ho Jae 2023
This paper addresses a robust controller design technique for a nonlinear underwater glider with disturbances. We consider the buoyancy and pitching moment as control inputs, which generate additional nonlinearity on the plant dynamics. To deal with the nonlinearity, we utilize the optimal linearization technique. The conditions for the optimal linearization and the controller design are formulated in terms of matrix inequalities. The effectiveness of the proposed method is demonstrated through a simulation.
Design and Verification of the Hardware Architecture for the Active Seat Belt Control System Compliant to ISO 26262
Lee, Jun Hyok;Koag, Hyun Chul;Lee, Kyung-Jung;Ahn, Hyun-Sik 2030
This paper presents a hardware development procedure of the ASB(Active Seat Belt) control system to comply with ISO 26262. The ASIL(Automotive Safety Integrity Level) of an ASB system is determined through the HARA(Hazard Analysis and Risk Assessment) and the safety mechanism is applied to meet the reqired ASIL. The hardware architecture of the controller consists of a microcontroller, H-bridge circuits, passive components, and current sensors which are used for the input comparison. The required ASIL for the control systems is shown to be satisfied with the safety mechanism by calculation of the SPFM(Single Point Fault Metric) and the LFM(Latent Fault Metric) for the design circuits.
Design of Buoyancy and Moment Controllers of a Underwater Glider Based on a T-S Fuzzy Model
Lee, Gyeoung Hak;Kim, Do Wan 2037
This paper presents a fuzzy-model-based design approach to the buoyancy and moment controls of a class of nonlinear underwater glider. Through the linearization and the sector nonlinearity methodologies, the underwater glider dynamics is represented by a Takagi-Sugeno (T-S) fuzzy model. Sufficient conditions are derived to guarantee the asymptotic stability of the closed-loop system in the format of linear matrix inequality (LMI). Simulation results demonstrate the effectiveness of the proposed buoyancy and moment controllers for the underwater glider.
Ramp Metering under Exogenous Disturbance using Discrete-Time Sliding Mode Control
Jin, Xin;Chwa, Dongkyoung;Hong, Young-Dae 2046
Ramp metering is one of the most efficient and widely used control methods for an intelligent transportation management system on a freeway. Its objective is to control and upgrade freeway traffic by regulating the number of vehicles entering the freeway entrance ramp, in such a way that not only the alleviation of the congestion but also the smoothing of the traffic flow around the desired density level can be achieved for the maintenance of the maximum mainline throughput. When the cycle of the signal detection is larger than that of the system process, the density tracking problem needs to be considered in the form of the discrete-time system. Therefore, a discrete-time sliding mode control method is proposed for the ramp metering problem in the presence of both input constraint in the on-ramp and exogenous disturbance in the off-ramp considering the random behavior of the driver. Simulations were performed using a validated second-order macroscopic traffic flow model in Matlab environment and the simulation results indicate that proposed control method can achieve better performance than previously well-known ALINEA strategy in the sense that mainstream flow throughput is maximized and congestion is alleviated even in the presence of input constraint and exogenous disturbance.
Output Feedback Tracking Control of Wheeled Mobile Robots with Kinematic Disturbances
Chwa, Dongkyoung 2053
In this paper, we propose an output feedback tracking control method for the wheeled mobile robots with kinematic disturbances. The kinematic disturbances should be compensated to avoid the performance degradation. Also, the unavailable velocity of the mobile robot should be estimated. These should be estimated together by designing the nonlinear observer. Based on these estimates, the output feedback controller can be designed. The stability of the mobile robot control systems using the proposed method is rigorously analyzed and the simulation results are also provided to validate the proposed method.
Motion Estimation of a Moving Object in Three-Dimensional Space using a Camera
Range-based motion estimation of a moving object using a camera is proposed. Whereas existing results constrain the motion of the object in order to estimate it, the proposed method relaxes these constraints so that a more generally moving object can be handled. To this end, a nonlinear observer is designed based on the relative dynamics between the object and the camera, so that the object velocity and the unknown camera velocity can be estimated. Stability analysis and simulation results for the moving object are provided to show the effectiveness of the proposed method.
IP Address Lookup Algorithm Using a Vectored Bloom Filter
Byun, Hayoung;Lim, Hyesook 2061
A Bloom filter is a space-efficient data structure popularly applied in many network algorithms. This paper proposes a vectored Bloom filter to provide a high-speed Internet protocol (IP) address lookup. While each hash index for a Bloom filter indicates one bit, which is used to identify the membership of the input, each index of the proposed vectored Bloom filter indicates a vector which is used to represent the membership and the output port for the input. Hence the proposed Bloom filter can complete the IP address lookup without accessing an off-chip hash table for most cases. Simulation results show that with a reasonable sized Bloom filter that can be stored using an on-chip memory, an IP address lookup can be performed with less than 0.0003 off-chip accesses on average in our proposed architecture.
Genetic Programming Based Plant/Controller Simultaneous Optimization Methodology
Seo, Kisung 2069
This paper presents a methodology based on evolutionary optimization for simultaneously optimizing the design parameters of the controller and the components of the plant. Genetic programming (GP)-based bond graph model generation is adopted for an open-ended search of the plant, and GP is also applied to represent the controller in a unified way. The formulation of the simultaneous plant-controller design optimization problem and the description of solution techniques based on bond graphs are derived. A feasible solution for a plant/controller design using the simultaneous optimization methodology is illustrated.
A Practical Approach to the Real Time Prediction of PM10 for the Management of Indoor Air Quality in Subway Stations
Jeong, Karpjoo;Lee, Keun-Young 2075
The real time IAQ (Indoor Air Quality) management is very important for large buildings and underground facilities such as subways because poor IAQ is immediately harmful to human health. Such IAQ management requires monitoring, prediction and control in an integrated and real time manner. In this paper, we present three PM10 hourly prediction models for such realtime IAQ management as both Multiple Linear Regression (MLR) and Artificial Neural Network (ANN) models. Both MLR and ANN models show good performances between 0.76 and 0.88 with respect to R (correlation coefficient) between the measured and predicted values, but the MLR models outperform the corresponding ANN models with respect to RMSE (root mean square error).
Artificial Neural Network-based Real Time Water Temperature Prediction in the Soyang River
Jeong, Karpjoo;Lee, Jonghyun;Lee, Keun Young;Kim, Bomchul 2084
It is crucial to predict water temperature for aquatic ecosystem studies and management. In this paper, we first address challenging issues in predicting water temperature in a real time manner and propose a distributed computing model to address such issues. Then, we present an Artificial Neural Network (ANN)-based water temperature prediction model developed for the Soyang River and a cyberinfrastructure system called WT-Agabus to run such prediction models in an automated and real time manner. The ANN model is designed to use only weather forecast data (air temperature and rainfall) that can be obtained by invoking the weather forecasting system at Korea Meteorological Administration (KMA) and therefore can facilitate the automated and real time water temperature prediction. This paper also demonstrates how easily and efficiently the real time prediction can be implemented with the WT-Agabus prototype system.
EKF-based Simultaneous Localization and Mapping of Mobile Robot using Laser Corner Pattern Matching
Kim, Tae-Hyeong;Park, Tae-Hyoung 2094
In this paper, we propose an extended Kalman filter (EKF)-based simultaneous localization and mapping (SLAM) method for mobile robots using laser corner pattern matching. SLAM is one of the most important problems in mobile robotics. However, the existing method has the disadvantage that the computation time increases with the number of landmarks. To improve the computation time, we build corner patterns from classified and detected corner points. After producing the corner patterns, the global position of the mobile robot is estimated by matching them. The estimated position is used as the measurement model in the EKF. To evaluate the proposed method, we performed experiments in indoor environments. The experimental results show that the proposed method maintains accuracy while decreasing the computation time.
Design of Hybrid Type Streetlight for Railway Station with Renewable Energy
Yoon, Yong-Ho;Kim, Jae-Moon 2103
Energy saving is as important as the development of green and alternative energy. This paper describes the design of a hybrid-type streetlight for railway stations using renewable energy sources, namely photovoltaics, wind, and a secondary battery. In designing a hybrid-type streetlight for a railway station, sufficient generated energy and reliability are strongly needed to meet the station's demand. In order to achieve high performance of the streetlight, the photovoltaic, wind, and secondary battery system, a PV tracker, and a monitoring and GUI system with a logging function are designed. To verify the performance of the hybrid-type streetlight for railway stations, we conducted demonstration tests to measure the generated energy and the energy flow, and the results are presented in this paper.
IGBT DC Circuit Breaker with Paralleled MOV for 1,800V DC Railway Applications
Han, Moonseob;Lee, Chang-Mu;Kim, Ju-Rak;Chang, Sang-Hoon;Kim, In-Dong 2109
The rate of rise of the fault current in DC grids is very high compared to AC grids because of the low line impedance of DC lines. In AC grids, the arc of the circuit breaker under current interruption is extinguished by the zero current crossing which is provided naturally by the system. In DC grids, the zero current crossing must be provided by the circuit breaker itself. Unlike AC grids, the magnetic energy of DC grids is stored in the system inductance, and the DC circuit breaker must dissipate this stored energy. In addition, the DC breaker must withstand the residual overvoltage after the current interruption. The main contents of this paper are to: explain the theoretical background for the design of DC circuit breakers; develop a PSIM simulation model of a real-scale DC circuit breaker for the 1,800 V DC railway; and suggest design guidelines for the DC circuit breaker based on experimental work, simulations, and the design process.
Electric Leakage Point Detection System of Underground Power Cable Using Half-period Modulated Transmission Waveform and Earth Electric Potential Measurement
Jeon, Jeong Chay;Yoo, Jae-Geun 2113
The precise detection of the electric leakage point of an underground power cable is very important to reduce the cost and time of maintenance and to prevent electric shock accidents through expedited repair of the leakage point. This paper proposes an electric leakage point detection system for underground power cables using a half-period modulated transmission waveform and earth electric potential measurement. The developed system is composed of a transmitter that generates the desired pulse waveform, a receiver that measures and displays the earth electric potential induced by the transmitted pulse at the leakage point, and PC software that displays the GPS coordinates along the detected cable line. The performance of the system was tested in a constructed underground cable leakage detection test bed. The test results on the signal generation voltage precision of the transmitter, the mean detected earth voltage, the mean detected leakage current, and the leakage point detection error show that the developed system can be used for electric leakage point detection in underground power cables.
A Development of Smart Black Box for Grid-connected Solar Power System
Park, Sung-Won;Kim, Dong-Wan;Lee, Jin-Woo 2119
In this paper, we developed a smart black box that can monitor and record the information of the sensor from subsystem in the smart grid system. The plant is the complex power system which is integrated by solar power system, grid-connected power systems, and BESS(battery energy storage system). The black box with the web-server application can connect and synchronize to an external monitoring system and a smart phone. We hope that this system is to contribute to improve operational efficiency, reliability, and stability for the smart grid power system.
A Research on Characteristics Tests for Current Transformers with Maximum mA Secondary Current of 250 mA
Song, Kwang-Jae;Lee, Il-Ho;Song, Sang-Hoon 2127
In this paper, characteristic tests for current transformers with maximum mA secondary current of 250 mA is performed. The purpose of this paper is not only to test the mA current transformers by following the IEEE Draft Standard for Current Transformers with Maximum mA Secondary Current of 250mA, but also to take into consideration certain applications in the use of the mA CTs for billing purposes.
A Computer Vision-based Assistive Mobile Application for the Visually Impaired
Secondes, Arnel A.;Otero, Nikki Anne Dominique D.;Elijorde, Frank I.;Byun, Yung-Cheol 2138
People with visual disabilities suffer environmentally, socially, and technologically. Navigating through places and recognizing objects are already a big challenge for them who require assistance. This study aimed to develop an android-based assistive application for the visually impaired. Specifically, the study aimed to create a system that could aid visually impaired individuals performs significant tasks through object recognition and identifying locations through GPS and Google Maps. In this study, the researchers used an android phone allowing a visually impaired individual to go from one place to another with the aid of the application. Google Maps is integrated to utilize GPS in identifying locations and giving distance directions and the system has a cloud server used for storing pinned locations. Furthermore, Haar-like features were used in object recognition.
Vulnerability Verification of 27 MHz Wireless Keyboards
Kim, Ho-Yeon;Sim, Bo-Yeon;Park, Ae-Sun;Han, Dong-Guk 2145
The generalization of the Internet has led to increased demand for Internet banking. Various security programs to protect authentication information are being developed; however, these programs cannot protect the wireless communication sections of wireless keyboards. In particular, vulnerabilities have been reported in the radio communication sections of 27 MHz wireless keyboards. In this paper, we explain how to analyze M's 27 MHz wireless keyboard. We also show experimentally that an attacker can acquire authentication information during domestic Internet banking performed with a 27 MHz wireless keyboard. To do this, we set up an experimental environment to analyze the electromagnetic signal of a 27 MHz wireless keyboard.
Imaging Device Identification using Sensor Pattern Noise Based on Wiener Filtering
Lee, Hae-Yeoun 2153
Multimedia such as images, audio, and video is easy to create and distribute with the advance of IT. Since such content can also be used for illegal purposes, multimedia forensics is required to protect contents and block illegal usage. This paper presents a multimedia forensic algorithm for video that identifies the device used to acquire unknown video files. First, a way to calculate a sensor pattern noise using a Wiener filter (W-SPN) is presented; this noise comes from the imperfection of photon detectors against light. Then, the way to identify the device is explained after estimating W-SPNs from the reference device and the unknown video. For the experiment, 30 devices including DSLRs, compact cameras, smartphones, and camcorders are tested and analyzed quantitatively. Based on the results, the presented algorithm achieves 96.0% identification accuracy.
Channel Reservation based DCF MAC Protocol for Improving Performance in IEEE 802.11 WLANs
Hyun, Jong-Uk;Kim, Sunmyeng 2159
In the IEEE 802.11 DCF (Distributed Coordination Function) protocol, the binary exponential backoff algorithm is used to avoid data collisions. However, as the number of stations increases, the collision probability tends to grow and the overall network performance is reduced. To solve this problem, this paper proposes a data transmission scheme based on a channel reservation method. In the proposed scheme, channel time is divided into a reservation period and a contention period. During the reservation period, stations that succeeded in channel reservation transmit their data packets in sequence without contention. During the contention period, each station sends its data packets through contention as in DCF. During both the reservation period and the contention period, each station sends a request for channel reservation for the next reservation period to an AP (Access Point). After receiving such a channel reservation request from each station, the AP decides whether the reservation succeeded and sends the result via a beacon frame to each station. Performance of the proposed scheme is analyzed through simulations. The simulation results show that the proposed scheme reduces the collision probability of DCF and improves the overall network performance.
Security Improvement on Biometric-based Three Factors User Authentication Scheme for Multi-Server Environments
Moon, Jongho;Won, Dongho 2167
In the multi-server environment, remote user authentication is a critical issue because it provides the authorization that enables users to access their resources or services. For this reason, numerous remote user authentication schemes have been proposed over recent years. Recently, Lin et al. showed weaknesses in Baruah et al.'s three-factor user authentication scheme for multi-server environments and proposed an enhanced biometric-based remote user authentication scheme. They claimed that their scheme has many security features and can resist various well-known attacks; however, we found that Lin et al.'s scheme is still insecure. In this paper, we demonstrate that Lin et al.'s scheme is vulnerable to the outsider attack and the user impersonation attack, and we propose a new biometric-based scheme for authentication and key agreement that can be used in the multi-server environment. Lastly, we show that the proposed scheme is more secure and supports the required security properties.
Effective Asymptotic SER Performance Analysis for M-PSK and M-DPSK over Rician-Nakagami Fading Channels
Lee, Hoojin 2177
Using the existing exact but quite complicated symbol error rate (SER) expressions for M-ary phase shift keying (M-PSK) and M-ary differential phase shift keying (M-DPSK), we derive effective and concise closed-form asymptotic SER formulas especially in Rician-Nakagami fading channels. The derived formulas can be utilized to efficiently verify the achievable error rate performances of M-PSK and M-DPSK systems for the Rician-Nakagami fading environments. In addition, by exploiting the modulation gains directly obtained from the asymptotic SER formulas, we also theoretically demonstrate that M-DPSK suffers an asymptotic SER performance loss of 3.01dB with respect to M-PSK for a given M in Rician-Nakagami fading channels at high signal-to-noise ratio (SNR).
Cogging Torque Reduction Design for CVVT Using Response Surface Methodology
Kim, Jae-Yui;Kim, Dong-min;Park, Soo-Hwan;Hon, Jung-Pyo 2183
This paper deals with the design process for an outer-rotor-type surface-mounted permanent magnet synchronous motor (SPMSM) used in continuous variable valve timing (CVVT) systems in automobiles with internal combustion engines. For the same size, outer-rotor-type SPMSMs generate larger torque and are more stable than inner-rotor-type SPMSMs. For the initial design, space harmonic analysis (SHA) is used. In order to minimize the cogging torque, an optimization was conducted using Response Surface Methodology (RSM). At the end of the paper, Finite Element Analysis (FEA) is performed to verify the performance of the optimum model.
Finite Control Set Model Predictive Control with Pulse Width Modulation for Torque Control of EV Induction Motors
Park, Hyo-Sung;Koh, Byung-Kwon;Lee, Young-il 2189
This paper proposes a new finite control set-model predictive control (FCS-MPC) method for induction motors. In the method, the reference state that satisfies the given torque and rotor flux requirements is derived. Cost indices for the FCS-MPC are defined using the state tracking error, and a linear matrix inequality is formulated to obtain a proper weighting matrix for the state tracking error. The on-line procedure of the proposed FCS-MPC comprises two steps: selecting the output voltage vector of the two-level inverter that minimizes the cost index, and computing the optimal modulation factor of that voltage vector in order to reduce the state tracking error and torque ripple. The steady-state tracking error is removed by using an integrator to adjust the reference state. The simulation and experimental results demonstrate that the proposed FCS-MPC shows good torque and rotor flux control performance at different rotating speeds.
Range Extension of Light-Duty Electric Vehicle Improving Efficiency and Power Density of IPMSM Considering Driving Cycle
Kim, Dong-Min;Jung, Young-Hoon;Lim, Myung-Seop;Sim, Jae-Han;Hon, Jung-Pyo 2197
Recently, the trend of zero emissions has increased in automotive engineering because of environmental problems and regulations. Therefore, the development of battery electric vehicles (EVs), hybrid/plug-in hybrid electric vehicles (HEVs/PHEVs), and fuel cell electric vehicles (FCEVs) has been mainstreamed. In particular, for light-duty electric vehicles, improvement in electric motor performance is directly linked to driving range and driving performance. In this paper, using an improved design for the interior permanent magnet synchronous motor (IPMSM), the EV driving range for the light-duty EV was extended. In the electromagnetic design process, a 2D finite element method (FEM) was used. Furthermore, to consider mechanical stress, ANSYS Workbench was adopted. To conduct a vehicle simulation, the vehicle was modeled to include an electric motor model, energy storage model, and regenerative braking. From these results, using the advanced vehicle simulator (ADVISOR) based on MATLAB Simulink, a vehicle simulation was performed, and the effects of the improved design were described.
A Propagated-Mode LISP-DDT Mapping System
Ro, Soonghwan 2211
The Locator/Identifier Separation Protocol (LISP) is a new routing architecture that implements a new semantic for IP addressing. It enables the separation of IP addresses into two new numbering spaces: Endpoint Identifiers (EIDs) and Routing Locators (RLOCs). This approach will solve the issue of rapid growth of the Internet's DFZ (default-free zone). In this paper, we propose an algorithm called the Propagated-Mode Mapping System to improve the map request process of LISP-DDT.
An Efficient Context-aware Opportunistic Routing Protocol
Seo, Dong Yeong;Chung, Yun Won 2218
Opportunistic routing is designed for an environment where there is no stable end-to-end routing path between source node and destination node, and messages are forwarded via intermittent contacts between nodes and routed using a store-carry-forward mechanism. In this paper, we consider PRoPHET(Probabilistic Routing Protocol using History of Encounters and Transitivity) protocol as a base opportunistic routing protocol and propose an efficient context-aware opportunistic routing protocol by using the context information of delivery predictability and node type, e.g., pedestrian, car, and tram. In the proposed protocol, the node types of sending node and receiving node are checked. Then, if either sending node or receiving node is tram, messages are forwarded by comparing the delivery predictability of receiving node with predefined delivery predictability thresholds depending on the combination of sending node and receiving node types. Otherwise, messages are forwarded if the delivery predictability of receiving node is higher than that of sending node, as defined in PRoPHET protocol. Finally, we analyze the performance of the proposed protocol from the aspect of delivery ratio, overhead ratio, and delivery latency. Simulation results show that the proposed protocol has better delivery ratio, overhead ratio, and delivery latency than PRoPHET protocol in most of the considered simulation environments.
Analysis of Pulse Width Modulation Schemes for Electric Vehicle Power Converters
Quach, Ngoc-Thinh;Chae, Sang Heon;Kim, Eel-Hwan;Yang, Seung-Yong;Boo, Chang-Jin;Kim, Ho-Chan 2225
In order to reduce dependence on fossil fuel energy, electric vehicles (EVs) have come into wider use in recent years. The important issues for EVs are driving distance and lifetime, which are related to EV efficiency. A voltage source converter is one of the main components of an EV and can be operated with various pulse width modulation (PWM) schemes, such as continuous PWM schemes and discontinuous PWM schemes. These PWM schemes affect the efficiency of the converter system and the lifetime of the EV. Therefore, this paper presents an analysis of the PWM schemes for the power converter of an EV. The objective is to find the best solution for the EV by comparing the total harmonic distortion (THD) and transient response of the various PWM schemes. The operation of the traction motor of the EV with the PWM schemes is verified using the PSIM simulation program.
Smart IoT Hardware Control System using Secure Mobile Messenger
Lee, Sang-Hyeong;Kim, Dong-Hyun;Lee, Hae-Yeoun 2232
The IoT industry has been in the spotlight at home and abroad. Since most IoT systems operate separate servers on the Internet to control IoT hardware, there is the possibility of security problems. Also, IoT systems on the market use their own hardware controllers and devices. As a result, there are many limitations in adding new sensors or devices and in using applications to access hardware controllers. To solve these problems, we have developed a novel IoT hardware control system based on a mobile messenger. For security, we have adopted a secure mobile messenger, Telegram, which has its own security protection. This also improves ease of use, since no specific application needs to be installed. To enhance system accessibility, the proposed IoT system supports various network protocols, which makes it possible to include various functions in the system. Finally, our IoT system can analyze the information collected from sensors to provide useful information to the users. Through experiments, we show that the proposed IoT system performs well.
Optimal MIFARE Classic Attack Flow on Actual Environment
Ahn, Hyunjin;Lee, Yerim;Lee, Su-Jin;Han, Dong-Guk 2240
MIFARE Classic is the most popular contactless smart card, which is primarily used in the management of access control and public transport payment systems. It has several security features such as the proprietary stream cipher Crypto 1, a challenge-response mutual authentication protocol, and a random number generator. Unfortunately, multiple studies have reported structural flaws in its security features. Furthermore, various attack methods that target genuine MIFARE Classic cards or readers have been proposed to crack the card. From a practical perspective, these attacks can be partitioned according to the attacker's ability. However, this measure is insufficient to determine the optimal attack flow due to the refined random number generator. Most card-only attack methods assume a predicted or fixed random number, whereas several commercial cards use unpredictable and unfixable random numbers. In this paper, we propose optimal MIFARE Classic attack procedures with regards to the type of random number generator, as well as an adversary's ability. In addition, we show actual attack results from our portable experimental setup, which is comprised of a commercially developed attack device, a smartphone, and our own application retrieving secret data and sector key.
Review of Virtual Power Plant Applications for Power System Management and Vehicle-to-Grid Market Development
Jin, Tae-Hwan;Park, Herie;Chung, Mo;Shin, Ki-Yeol;Foley, Aoife;Cipcigan, Liana 2251
The use of renewable energy sources and energy storage systems is increasing due to new policies in the energy industries. However, the increase in distributed generation hinders the reliability of power systems. In order to stabilize power systems, a virtual power plant has been proposed as a novel power grid management system. The virtual power plant aggregates different distributed energy resources and energy storage systems. We define core virtual power plant technology related to demand response and ancillary services for the cases of Korea, America, and Europe. We also suggest applications of the proposed virtual power plant to the vehicle-to-grid market for restructuring national power industries in Korea.
Additional Transmission Protocol for Fairness Enhancement in IEEE 802.11 Wireless LANs
Kang, Tae-Uk;Kim, Sunmyeng 2262
In IEEE 802.11 wireless LANs, when a source node with low data rate occupies the channel resource for a long time, network performance degrades. In order to improve performance, the cooperative communication has been proposed. In the previous cooperative communication protocols, relay nodes deliver data packets only for a source node. In this paper, we propose an additional transmission scheme in which relay nodes select an additional source node based on several information and deliver data packets for the original source node and the selected additional source node. The proposed scheme improves performance and provides fairness among source nodes. Performance of the proposed scheme is investigated by simulation. Our results show that the proposed scheme outperforms the previous protocol in terms of fairness index and throughput.
Rearranged DCT Feature Analysis Based on Corner Patches for CBIR (contents based image retrieval)
Lee, Jimin;Park, Jongan;An, Youngeun;Oh, Sangeon 2270
In modern society, the creation and distribution of multimedia contents is being actively conducted. An enormous amount of multimedia information is produced daily, far exceeding the volume of the text information of the past. Since the need for methods to efficiently store multimedia information and to easily search it has increased, various related methods have been actively studied. In particular, image search methods for finding what you want in a video database or in multiple sequential images have attracted attention as a new field of image processing. The image retrieval method implemented in this paper utilizes the attributes of corner patches, based on the corner points of the object, to provide a new method of efficient and robust image search. After detecting the edges of the object within the image, straight lines are extracted using a Hough transformation. Corner patches are formed by defining the extracted intersections of the straight lines as corner points. After configuring the feature vectors with the rearranged patches, the similarity between images in the database is measured. Finally, for an accurate comparison between the proposed algorithm and existing algorithms, the recall-precision rate, which has been widely used in content-based image retrieval, was used for the performance evaluation. For the images used in the experiment, it was confirmed that images are detected more accurately with the proposed method than with conventional image retrieval methods.
2013, 10(5&6): 1651-1668. doi: 10.3934/mbe.2013.10.1651
Different types of backward bifurcations due to density-dependent treatments
Baojun Song 1, Wen Du 2, and Jie Lou 2
Department of Mathematical Sciences, Montclair State University, Upper Montclair, NJ 07043
Department of Mathematics, Shanghai University, 99 Shangda Road, Shanghai 200444, China
Received October 2012 Revised May 2013 Published August 2013
A set of deterministic SIS models with density-dependent treatments are studied to understand the disease dynamics when different treatment strategies are applied. Qualitative analyses are carried out in terms of general treatment functions. It has become customary to assume that a backward bifurcation leads to bistable dynamics. However, this study finds that bistability may not be an option at all; the disease-free equilibrium could be globally stable when there is a backward bifurcation. Furthermore, when a backward bifurcation occurs, the fashion of bistability could be the coexistence of either dual stable equilibria or the disease-free equilibrium and a stable limit cycle. We also extend the formula for the mean infection period from density-independent treatments to density-dependent ones. Finally, the modeling results are applied to the transmission of gonorrhea in China, suggesting that gonorrhea patients may not seek medical treatment in a timely manner.
Keywords: density-dependent, gonorrhea, treatment, bifurcation.
Mathematics Subject Classification: Primary: 92D30, 37G10, 34D20; Secondary: 34C23, 92B0.
Citation: Baojun Song, Wen Du, Jie Lou. Different types of backward bifurcations due to density-dependent treatments. Mathematical Biosciences & Engineering, 2013, 10 (5&6) : 1651-1668. doi: 10.3934/mbe.2013.10.1651
Cost of Goods Sold – COGS
By Adam Hayes
What Is Cost of Goods Sold?
Formula and Calculation for COGS
What Does the COGS Tell You?
Accounting Methods and COGS
Exclusions From COGS Deduction
Cost of Revenue vs. COGS
Operating Expenses vs. COGS
Limitations of COGS
Example of How to Use COGS
What Is Cost of Goods Sold – COGS?
Cost of goods sold (COGS) refers to the direct costs of producing the goods sold by a company. This amount includes the cost of the materials and labor directly used to create the good. It excludes indirect expenses, such as distribution costs and sales force costs.
Cost of goods sold is also referred to as "cost of sales."
COGS = Beginning Inventory + P - Ending Inventory
where P = Purchases during the period
Inventory that is sold appears in the income statement under the COGS account. The beginning inventory for the year is the inventory left over from the previous year—that is, the merchandise that was not sold in the previous year. Any additional productions or purchases made by a manufacturing or retail company are added to the beginning inventory. At the end of the year, the products that were not sold are subtracted from the sum of beginning inventory and additional purchases. The final number derived from the calculation is the cost of goods sold for the year.
COGS only applies to those costs directly related to producing goods intended for sale.
The balance sheet has an account called the current assets account. Under this account is an item called inventory. The balance sheet only captures a company's financial health at the end of an accounting period. This means that the inventory value recorded under current assets is the ending inventory. Since the beginning inventory is the inventory that a company has in stock at the beginning of its accounting period, it means that the beginning inventory is also the company's ending inventory at the end of the previous accounting period.
Cost of goods sold (COGS) is the direct cost attributable to the production of the goods sold in a company.
COGS is deducted from revenues (sales) in order to calculate gross profit and gross margin.
The value of COGS will change depending on the accounting standards used in the calculation.
The COGS is an important metric on the financial statements as it is subtracted from a company's revenues to determine its gross profit. The gross profit is a profitability measure that evaluates how efficient a company is in managing its labor and supplies in the production process.
Because COGS is a cost of doing business, it is recorded as a business expense on the income statements. Knowing the cost of goods sold helps analysts, investors, and managers estimate the company's bottom line. If COGS increases, net income will decrease. While this movement is beneficial for income tax purposes, the business will have less profit for its shareholders. Businesses thus try to keep their COGS low so that net profits will be higher.
Cost of goods sold (COGS) is the cost of acquiring or manufacturing the products that a company sells during a period, so the only costs included in the measure are those that are directly tied to the production of the products, including the cost of labor, materials, and manufacturing overhead. For example, the COGS for an automaker would include the material costs for the parts that go into making the car plus the labor costs used to put the car together. The cost of sending the cars to dealerships and the cost of the labor used to sell the car would be excluded.
Furthermore, costs incurred on the cars that were not sold during the year will not be included when calculating COGS, whether the costs are direct or indirect. In other words, COGS includes the direct cost of producing goods or services that were purchased by customers during the year.
As a rule of thumb, if you want to know if an expense falls under COGS, ask: "Would this expense have been an expense even if no sales were generated?"
The value of the cost of goods sold depends on the inventory costing method adopted by a company. There are three methods that a company can use when recording the level of inventory sold during a period: First In, First Out (FIFO), Last In, First Out (LIFO), and the Average Cost Method.
FIFO
The earliest goods to be purchased or manufactured are sold first. Since prices tend to go up over time, a company that uses the FIFO method will sell its least expensive products first, which translates to a lower COGS than the COGS recorded under LIFO. Hence, the net income using the FIFO method increases over time.
LIFO
The latest goods added to the inventory are sold first. During periods of rising prices, goods with higher costs are sold first, leading to a higher COGS amount. Over time, the net income tends to decrease.
Average Cost Method
The average price of all the goods in stock, regardless of purchase date, is used to value the goods sold. Taking the average product cost over a time period has a smoothing effect that prevents COGS from being highly impacted by extreme costs of one or more acquisitions or purchases.
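To make the difference between the three methods concrete, here is a small hypothetical example worked out in Python. The purchase lots, unit costs, and sale quantity are invented for illustration only and are not drawn from any company's figures.

# Hypothetical purchase lots: (units, cost per unit), listed oldest first.
purchases = [(10, 5.00), (10, 6.00), (10, 7.00)]
units_sold = 15

def cogs_fifo(lots, qty):
    # Cost the sold units against the oldest lots first.
    total, remaining = 0.0, qty
    for units, cost in lots:
        used = min(units, remaining)
        total += used * cost
        remaining -= used
        if remaining == 0:
            break
    return total

def cogs_lifo(lots, qty):
    # Cost the sold units against the newest lots first.
    return cogs_fifo(list(reversed(lots)), qty)

def cogs_average(lots, qty):
    # Cost the sold units at the average cost of all units available.
    total_units = sum(u for u, _ in lots)
    total_cost = sum(u * c for u, c in lots)
    return qty * total_cost / total_units

print(cogs_fifo(purchases, units_sold))     # 80.0
print(cogs_lifo(purchases, units_sold))     # 100.0
print(cogs_average(purchases, units_sold))  # 90.0

With rising purchase costs, FIFO gives the lowest COGS (80), LIFO the highest (100), and the average cost method a value in between (90), which matches the pattern described above.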
Many service companies do not have any cost of goods sold at all. COGS is not addressed in any detail in generally accepted accounting principles (GAAP), but COGS is defined as only the cost of inventory items sold during a given period. Not only do service companies have no goods to sell, but purely service companies also do not have inventories. If COGS is not listed on the income statement, no deduction can be applied for those costs.
Examples of pure service companies include accounting firms, law offices, real estate appraisers, business consultants, professional dancers, etc. Even though all of these industries have business expenses and normally spend money to provide their services, they do not list COGS. Instead, they have what is called "cost of services," which does not count towards a COGS deduction.
Costs of revenue exist for ongoing contract services that can include raw materials, direct labor, shipping costs, and commissions paid to sales employees. These items cannot be claimed as COGS without a physically produced product to sell, however. The IRS website even lists some examples of "personal service businesses" that do not calculate COGS on their income statements. These include doctors, lawyers, carpenters, and painters.
Many service-based companies have some products to sell. For example, airlines and hotels are primarily providers of services such as transport and lodging, respectively, yet they also sell gifts, food, beverages, and other items. These items are definitely considered goods, and these companies certainly have inventories of such goods. Both of these industries can list COGS on their income statements and claim them for tax purposes.
Both operating expenses and cost of goods sold (COGS) are expenditures that companies incur with running their business. However, the expenses are segregated on the income statement. Unlike COGS, operating expenses (OPEX) are expenditures that are not directly tied to the production of goods or services. Typically, SG&A (selling, general, and administrative expenses) are included under operating expenses as a separate line item. SG&A expenses are expenditures that are not directly tied to a product such as overhead costs. Examples of operating expenses include the following:
Insurance costs
COGS can easily be manipulated by accountants or managers looking to cook the books. It can be altered by:
Allocating to inventory higher manufacturing overhead costs than those incurred
Overstating discounts
Overstating returns to suppliers
Altering the amount of inventory in stock at the end of an accounting period
Overvaluing inventory on hand
Failing to write-off obsolete inventory
When inventory is artificially inflated, COGS will be under-reported which, in turn, will lead to higher than the actual gross profit margin, and hence, an inflated net income.
Investors looking through a company's financial statements can spot unscrupulous inventory accounting by checking for inventory buildup, such as inventory rising faster than revenue or total assets reported.
As a historical example, let's calculate the cost of goods sold for J.C. Penney (NYSE: JCP) for fiscal year (FY) ended 2016. The first step is to find the beginning and ending inventory on the company's balance sheet:
Beginning inventory: Inventory recorded on the fiscal year ended 2015 = $2.72 billion
Ending inventory: Inventory recorded on the fiscal year ended 2016 = $2.85 billion
Purchases during 2016: Using the information above = $8.2 billion
Using the formula for COGS, we can compute the following:
$2.72 billion + $8.2 billion - $2.85 billion = $8.07 billion
If we look at the company's 2016 income statement, we see that the reported COGS is $8.07 billion, the exact figure that we calculated here.
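The same arithmetic can be checked in a few lines of Python, using the figures quoted above (all values in billions of dollars):

beginning_inventory = 2.72   # FY2015 ending inventory
purchases = 8.2              # purchases during 2016
ending_inventory = 2.85      # FY2016 ending inventory

cogs = beginning_inventory + purchases - ending_inventory
print(round(cogs, 2))        # 8.07, matching the reported COGS of $8.07 billion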
Ending Inventory
Ending inventory is a common financial metric measuring the final value of goods still available for sale at the end of an accounting period.
Absorption Costing Definition
Absorption costing is a managerial accounting cost method of capturing all costs associated with manufacturing a particular product to include in its cost base.
Beginning Inventory: The Start of the Accounting Period
Beginning inventory is the book value of a company's inventory at the start of an accounting period. It is also the value of inventory carried over from the end of the preceding accounting period.
Gross Margin Defined
The gross margin represents the amount of total sales revenue that the company retains after incurring the direct costs associated with producing the goods and services sold by the company.
Selling, General & Administrative Expense (SG&A)
Selling, General & Administrative Expense (SG&A) is an income statement item that includes all selling-related costs and expenses of managing a company.
Flow Of Costs
Flow of costs refers to the manner or path in which costs move through a firm.
Basic concepts and principles
One of the most important topics in complex variable analysis is complex integration. When we talk about complex integration we refer to the line integral.
The definition of the line integral begins with a differentiable curve γ such that
$$ \begin{matrix}\gamma : [a,b] \mapsto \mathbb{C}\\ \;\;\;\;\; \;\;\;\;\;\;\; x \mapsto \gamma(x) \end{matrix}$$
Now we divide the interval [a, b] into n parts by points \( z_{i} \) such that \( z_{0}=a \) and \( z_{n}=b \), and we choose a point \( \zeta_{i} \) in each subinterval \( [z_{i-1}, z_{i}] \).
For each subinterval we take \( E_{i}=f(\zeta_{i})(z_{i}-z_{i-1}), \; i=1,..,n\).
Then we take the partial sums \( \sum_{i=1}^{n}E_{i} = \sum_{i=1}^{n}f(\zeta_{i})(z_{i}-z_{i-1}) \). Taking the limit as n tends to infinity, we obtain the line integral, written as $$\int_{a}^{b}f(z)dz \;\;, \;\; \int_{C}f(z)dz $$
Both notations denote the same integral.
The complex integral over a curve C is defined as
\( \int_{C}f(z)dz = \int_{C}(u+iv)(dx+idy) \) \(= \int_{C}u\,dx - v\,dy + i\int_{C}v\,dx + u\,dy\)
A very useful property of the integral, which is used in many proofs and arguments, is the following
$$\left | \int_{a}^{b }f(z)dz \right | \le \int_{a}^{b }\left |f(z) \right |dz$$ Click here to see a proof of this fact.
Line integral definition
Given f, a complex variable function and γ a piecewise differentiable curve. We define the line integral of f over γ as: $$\int_{\gamma}f(z)dz = \int_{a}^{b}f(\gamma(t))\gamma'(t)dt $$
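For example, take the unit circle traversed once counterclockwise, \( \gamma(t) = e^{it} \), \( t \in [0, 2\pi] \), and \( f(z) = 1/z \). Then \( \gamma'(t) = ie^{it} \) and $$\int_{\gamma}\frac{dz}{z} = \int_{0}^{2\pi}\frac{1}{e^{it}}\, ie^{it}\, dt = \int_{0}^{2\pi} i \, dt = 2\pi i $$ The result is not zero because \( 1/z \) fails to be holomorphic at \( z = 0 \), which lies inside the curve; this is why the hypotheses of Cauchy's theorem below matter.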
Extended theory
The most important theorem is Cauchy's Theorem, which states that the integral over a closed and simple curve is zero on simply connected domains. Cauchy gave a first proof assuming that the function f has a continuous first derivative; later Édouard Goursat discovered that this hypothesis was in fact redundant, and for this reason Cauchy's theorem is sometimes called the Cauchy-Goursat Theorem. This is the version that we will see here.
In the following theorems, C is a closed and simple curve contained in a simply connected open region R (this is a domain).
Theorem of Cauchy-Goursat
Given f, a holomorphic function over R, then $$\int_{C}f(z)dz = 0 $$
Click here to see a proof of Cauchy's theorem.
Green's Theorem in the plane
Let P and Q be continuous functions with continuous partial derivatives in R and on its boundary C. Then
\( \int_{C} P dx+ Q dy \) \(= \int\int_{R}[\frac{\partial Q}{\partial x}- \frac{\partial P}{\partial y}]dx dy \)
It is relatively simple to put Green's theorem in complex form:
Green's theorem in complex form
Given F, with continuous partial derivatives in R and on their boundary C. Then
\( \int_{C}F(z, \bar{z})dz \) \(= 2i\int\int_{R}\frac{\partial F}{\partial \bar{z}}dA \)
Click here to see a proof
The following theorem is sometimes called the converse of Cauchy's theorem
Morera's Theorem
Given f, a continuous complex variable function, let us suppose that it verifies $$\int_{C}f(z)dz = 0 $$ for every closed and simple curve C in R. Then f is holomorphic over R.
Consequences of the Cauchy's theorem
The following theorems are consequences of Cauchy's theorem
Theorem 1
If a and b are two points of R, then the integral $$ \int_{a}^{b} f(z) dz $$ is independent of the path followed between a and b.
The proof of this theorem is simple: if C is any path between a and b and C' is another, different path, then by Cauchy's theorem the integral around the closed curve formed by C followed by the reversal of C' is zero; hence both line integrals are equal.
Theorem 2
Let a and b be two points of R and let F be a function with F'(z) = f(z). Then $$ \int_{a}^{b} f(z) dz = F(b) - F(a) $$
Conversely, if a and z are points of R and
$$ F(z) = \int_{a}^{z} f(z) dz $$
then, F is Holomorphic in R and F'(z) = f(z)
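For example, taking \( f(z) = z \) and \( F(z) = z^{2}/2 \), the first formula above gives $$ \int_{0}^{1+i} z \, dz = \frac{(1+i)^{2}}{2} - 0 = i $$ independently of the path joining 0 and \( 1+i \).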
The following theorem is very important: it says that the value of an integral over a closed and simple curve that surrounds a singularity does not depend on the particular curve chosen.
Theorem 3
Given f, a function holomorphic in the region bounded by two closed and simple curves C and C', $$ \int_{C} f(z) dz = \int_{C'} f(z) dz$$ where C and C' are traversed positively oriented, that is, counterclockwise.
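For example, since \( f(z) = 1/z \) is holomorphic everywhere except at \( z = 0 \), the theorem tells us that for any closed and simple curve C surrounding the origin $$\int_{C}\frac{dz}{z} = \int_{|z|=1}\frac{dz}{z} = 2\pi i $$ because \( 1/z \) is holomorphic in the region between C and the unit circle, and the integral over the unit circle was computed above.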
The following theorem is a generalization of the previous one to a region with n singularities instead of two.
Figure 1: Region enclosed between curves C and C'
Theorem 4
Given f, a function holomorphic in a region bounded by n closed and simple curves \( C_{1}, C_{2}, ..., C_{n}\), which are enclosed by another larger curve C. Then:
\( \int_{C} f(z) dz = \int_{C_{1}} f(z) dz + \int_{C_{2}} f(z) dz + \) \( ... + \int_{C_{n}} f(z) dz \)
Where the curves \( C_{1}, C_{2}, ..., C_{n}\) are traversed positively oriented, that is, counterclockwise.
Figure 2: Region enclosed between the curves C and \( C_{1}, C_{2}, ..., C_{n}\)
Euler totient function sum of divisors. Theorem 2.2 Apostol
Prove that : If $ n\ge 1 $, then $ \sum_{d|n}\phi(d)=n $.
Let $S$ denote the set $\{1,2,...,n\}$. We distribute the integers of $S$ into disjoint sets as follows. For each divisor $d$ of $n$, let
$A(d) = \{k \in S :(k,n) = d\}$
That is, $A(d)$ contains the elements of $S$ which have gcd $d$ with $n$. The sets $A(d)$ form a disjoint collection whose union is $S$. Therefore if $f(d)$ denotes the number of integers in $A(d)$ we have $\sum_{d|n}f(d)=n$
I don't understand why the sum of $f(d)$ equals $n$. Can someone explain this?
elementary-number-theory analytic-number-theory totient-function
darij grinberg
artifex_somnia
$\begingroup$ Related : math.stackexchange.com/questions/194705/… $\endgroup$ – Arnaud D. Nov 27 '18 at 15:14
The elements of $A(d)$ are the numbers $k$ in the interval $[1,n]$ (that is, the set $S$) such that $\gcd(k,n)=d$. If $k$ is such a number, then $k=d\ell$ for some $\ell \in [1,n/d]$ relatively prime to $n/d$. There are $\varphi(n/d)$ such $\ell$ in the interval $[1,n/d]$. Thus the number of elements in $A(d)$ is $\varphi(n/d)$.
The $A(d)$ are pairwise disjoint, and their union is the set $S=\{1,2,3,\dots,n\}$. It follows that $$\sum_{d|n} \varphi(n/d)=n.\tag{1}$$ But as $d$ ranges over the divisors of $n$, so does $n/d$. It follows that $$\sum_{d|n}\varphi(n/d)=\sum_{d|n}\varphi(d).\tag{2}$$ By (1), the sum on the left-hand side of (2) is equal to $n$. It follows that the sum on the right-hand side is also $n$.
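(For readers who want to see the bookkeeping concretely, here is a quick numerical check in Python; it tabulates the sets $A(d)$ for $n=12$ and is not part of the proof.)

from math import gcd

n = 12
A = {}
for k in range(1, n + 1):
    # place k in the set A(d) where d = gcd(k, n)
    A.setdefault(gcd(k, n), []).append(k)

for d in sorted(A):
    print(d, A[d])                      # each d is a divisor of n; |A(d)| = phi(n/d)

print(sum(len(v) for v in A.values()))  # 12: the sets A(d) partition {1, ..., n}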
André Nicolas
$\begingroup$ Why is $\mathscr{l}$ relatively prime to $n/d$? $\endgroup$ – user522521 Mar 27 '18 at 16:00
$\begingroup$ This is literally the best explanation of this proof i have seen . $\endgroup$ – joker007 Jul 23 '20 at 13:47
We consider rational numbers
1/n,2/n,…,n/n
Clearly there are n numbers in the list. We obtain a new list by reducing each number in the above list to lowest terms; that is, we express each number as a quotient of relatively prime integers. The denominator of each number in the new list will be a divisor of n. If d divides n, exactly phi(d) of the numbers will have d as their denominator (this is the meaning of lowest terms). Hence the new list contains (the sum of phi(d) over the divisors d of n) numbers. Because the two lists have the same number of terms, we obtain the desired result.
user449276
I like Gauß's proof: for each $d\mid n$, we have $\phi (d)$ generators for $C_d$, where $C_d$ is the cyclic group of order $d$. This is because, if $\langle g\rangle =C_d$, then $\langle g^k\rangle=C_d$ iff $(k,d)=1$.
Since every element of $C_n$ generates a cyclic subgroup, and every $C_d\le C_n$ is generated by some element of $C_n$, the claim follows.
Chris Custer
Let's construct the same set $S_d=\{x: 1\leq x\leq n$ and $gcd(x,n)=d\}$ in a different way and find out. This way, I believe, one can see all the details. One might argue that some of these facts needn't be proved, as they are already clear from the definitions, but in my experience most people don't at first agree on why they are so obvious.
Take $A_d=\{x: 1\leq x\leq \frac{n}{d}$ and $ gcd(x,\frac{n}{d})=1\}$. Then of course, $\mid A_d\mid =\varphi(\frac{n}{d})$ as this is indeed the definition of $\varphi$. Now consider the set $B_d=\{x: x=d.y$ where $ y\in A_d\}$. Then again, of course, $\mid B_d\mid =\varphi(\frac{n}{d})$. For any $x \in B_d$, both $gcd(x,n)=d$ and $1\leq x\leq n$ are true. So, $B_d \subseteq S_d$. If there was an $m \in S_d$ but $m \notin B_d$, then that would mean, $\frac{m}{d}∉A_d$. But that can't be possible as $\frac {m}{d}(=x)$ satisfies $1\leq x\leq \frac {n}{d}$ and $gcd(x,\frac {n}{d})=1$, both the conditions to be in $A_d$. Hence, $B_d=S_d \Longrightarrow\mid S_d\mid =\mid B_d\mid =\varphi(\frac {n}{d})$. Now consider the set $S= \bigcup{S_d}$ . This set must include all integers from $1$ to $n$. For if it didn't, then there would exist an $x$ such that $1\leq x\leq n$ but $gcd(x,n)=k$ which is not one of the $d$s we considered. But that is not possible. So it follows that, $\mid S\mid =\sum{\mid S_d \mid } =\sum{\varphi(\frac{n}{d})}= \sum{\varphi(d)}=n$.
Shajid
Here is another approach to solve this problem if you are familiar with cyclic groups although it is equivalent to approaches given in other answers. But knowing the interdependency of mathematical disciplines is always beneficial.
Let $G(a)$ be the cyclic group generated by the element $a$ of order $n$. By the fundamental theorem of cyclic groups we know that for each divisor $k$ of $n$, $G(a)$ has exactly one subgroup of order $k$ - namely $G(a^{\frac{n}{k}})$. Also we know that $G(a^k) = G(a)$ if and only if $gcd(n,k) = 1$.
Now if $d$ divides $n$ then there is exactly one subgroup of order $d$; let it be $G(b)$. Then every element of order $d$ generates $G(b)$. But $G(b^k) = G(b)$ only if $gcd(k,d)=1$. So the number of elements having order $d$ is $\phi(d)$.
Now if the total number of elements in given cyclic group is $n$, then by fundamental theorem of cyclic groups, $$ \sum_{d|n} \phi(d) = n $$
Infinity_hunter
A comparison of generic drug prices in seven European countries: a methodological analysis
Olivier J. Wouters and Panos G. Kanavos
BMC Health Services Research 2017; 17:242
Received: 19 October 2016
Policymakers and researchers frequently compare the prices of medicines between countries. Such comparisons often serve as barometers of how pricing and reimbursement policies are performing. The aim of this study was to examine methodological challenges to comparing generic drug prices.
We calculated all commonly used price indices based on 2013 IMS Health data on sales of 3156 generic drugs in seven European countries.
There were large differences in generic drug prices between countries. However, the results varied depending on the choice of index, base country, unit of volume, method of currency conversion, and therapeutic category. The results also differed depending on whether one looked at the prices charged by manufacturers or those charged by pharmacists.
Price indices are a useful statistical approach for comparing drug prices across countries, but researchers and policymakers should interpret price indices with caution given their limitations. Price-index results are highly sensitive to the choice of method and sample. More research is needed to determine the drivers of price differences between countries. The data suggest that some governments should aim to reduce distribution costs for generic drugs.
Pharmaceutical policy
Many European countries are facing severe cost pressures on health-care budgets, in part due to rising drug spending. In this context, the savings from greater use of generic drugs can help pay for other health-care services. Yet recent European Commission reports point to market failures for generic drugs [1, 2]. It is therefore important to regularly compare generic drug prices in countries with similar income levels in order to give public and private insurers a sense of whether they are over-paying for generic drugs or not. Such comparisons can serve as barometers of how pricing and reimbursement policies are performing [3–15].
Previous comparisons of generic drug prices have found that prices varied markedly across European and North American countries [16–24]. However, the studies often relied on different methods and samples, making it difficult to compare findings. In addition, most of the analyses had small sample sizes, which may have biased the results. Some earlier findings are also likely out of date given how often pricing and reimbursement regulations are changed.
As important, the impact of distribution margins and taxes on generic drug prices has been underexplored, even though studies indicate that those costs can account for more than 90% of the retail price of a generic drug, i.e., the price charged by pharmacists to patients or third-party payers [1]. Nearly all studies have looked at ex-manufacturer prices, i.e., those charged by manufacturers to wholesalers, which do not account for distribution costs.
In this study, we compared the ex-manufacturer and retail prices of a large sample of generic drugs in seven European countries. We calculated all commonly used price indices to outline the methodological challenges to comparing generic drug prices. It is critical that policymakers are aware of the advantages and limitations of these types of analyses, given that the results of price comparisons might be used to justify changes to pharmaceutical policies.
We acquired 2013 data from IMS Health on volumes and sales of 200 off-patent active ingredients in seven countries with similar income levels: Belgium, Denmark, France, Germany, Italy, Spain, and Sweden. These ingredients were available in 3156 strength-form combinations.1 Volumes were recorded in doses and grams of active ingredient.2 Sales were recorded in euros based on average exchange rates for the year.3 We excluded 213 products (6.7%, 213/3156) with missing volume data.
We restricted our analysis to the 110 active ingredients sold in all seven countries, which accounted for 54 (Italy) to 87% (Sweden) of total generic spend in each country. For each ingredient, we calculated the average price per dose and the average price per gram, both at the ex-manufacturer and retail levels. To do this, we divided total sales in euros across strength-form combinations by number of doses or grams sold.4
We then calculated four indices — unweighted, Paasche, Laspeyres, and Fisher — using prices per gram and prices per dose [25]. Unweighted indices (IU) were calculated as
$$ {I}_U=\frac{{\displaystyle {\sum}_i}{p}_i^c}{{\displaystyle {\sum}_i}{p}_i^b} \cdot 100 $$
where p was the price of active ingredient i in the comparator country or the base country. We selected Germany as the base country, which takes a value of 100 in all indices.
The other indices were weighted to account for consumption patterns. Paasche (IP) and Laspeyres indices (IL) were computed as
$$ {I}_P=\frac{{\displaystyle {\sum}_i}{p}_i^c{q}_i^c}{{\displaystyle {\sum}_i}{p}_i^b{q}_i^c} \cdot 100 $$
$$ {I}_L=\frac{{\displaystyle {\sum}_i}{p}_i^c{q}_i^b}{{\displaystyle {\sum}_i}{p}_i^b{q}_i^b} \cdot 100 $$
where q was the quantity in the comparator or base country (i.e., doses or grams). Finally, Fisher indices (IF) were calculated as
$$ {I}_F=\sqrt{I_P\cdot {I}_L} $$
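For illustration, the four indices can be computed directly from vectors of prices and quantities. The following Python sketch uses invented numbers for three active ingredients rather than the study data:

# Invented example data for three active ingredients (not the IMS Health data).
p_b = [0.10, 0.25, 0.05]   # base-country price per dose
q_b = [1000, 400, 2500]    # base-country doses sold
p_c = [0.12, 0.20, 0.07]   # comparator-country price per dose
q_c = [800, 600, 2000]     # comparator-country doses sold

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

unweighted = 100 * sum(p_c) / sum(p_b)
laspeyres = 100 * dot(p_c, q_b) / dot(p_b, q_b)   # weighted by base-country quantities
paasche = 100 * dot(p_c, q_c) / dot(p_b, q_c)     # weighted by comparator-country quantities
fisher = (laspeyres * paasche) ** 0.5             # geometric mean of Laspeyres and Paasche

print(unweighted, laspeyres, paasche, fisher)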
Sensitivity and subgroup analyses
The results of Laspeyres indices can vary depending on which country is selected as the base, since this determines which quantity weights are used. For instance, atorvastatin, a cholesterol-reducing drug, was only the 40th most prescribed generic drug in Germany, in terms of number of doses sold, whereas it was one of the ten most prescribed generic drugs in three of the other countries. As a sensitivity analysis, we re-calculated all the price indices with France as the base country.
The results of price indices can also differ depending on whether exchange rates or purchasing power parities (PPPs) are used to convert monetary values to a common currency. Since exchange rates are sensitive to currency fluctuations, we re-calculated all of the indices based on PPP conversion factors. PPPs, which are measured in national currency units per US dollar, account for cross-country differences in the prices of goods and services. In this way, they equalize the purchasing power of different currencies.
Finally, we compared the prices of generic drugs in different therapeutic subgroups. To do this, we categorized the 110 active ingredients by anatomical main groups using the ATC/DDD system developed by the World Health Organization Collaborating Centre for Drug Statistics Methodology. Additional file 1: Appendix 1 shows the breakdown of active ingredients by group. We excluded ingredients that belonged to more than one group. For example, timolol is a beta blocker used to treat both high blood pressure (ATC group C) and glaucoma (ATC group S). We then compared the prices of the active ingredients belonging to the two largest groups in our sample: Cardiovascular system drugs (n = 25) and nervous system drugs (n = 29). The subgroup analysis used exchange-rate conversions and Germany as the base country.
The full results of the sensitivity and subgroup analyses can be found in Additional file 1: Appendices 2–4.
Ex-manufacturer vs. retail prices
Table 1 summarizes the main results with Germany as the base country. Prices varied markedly across countries. Denmark and Sweden consistently had the lowest ex-manufacturer and retail prices among the seven countries, while France and Italy had the highest in most of the weighted indices. In the Laspeyres (dose) index, for example, the Italian ex-manufacturer prices were, on average, 1.6 times the German ones and 2.6 times the Danish ones. Figure 1a shows that while Belgium, France, and Spain all had higher ex-manufacturer prices than Germany, the opposite was true about their retail prices, based on a Laspeyres dose index.
Table 1. Ex-manufacturer and retail prices with Germany as the base country (2013)
[Table 1 reports unweighted, Laspeyres, Paasche, and Fisher indices, each calculated per dose (D) and per gram (G) of active ingredient, at both the ex-manufacturer and retail price levels.]
D doses, G grams of active ingredient
Source: IMS Health 2013 (Pricing Insights database)
Results for different price indices in 2013 with Germany as the base country. For ease of interpretation, the unit of volume is doses in all the price indices. a Comparison of retail and ex-manufacturer prices (n = 110) in a Laspeyres index. b Contrast of ex-manufacturer prices (n = 110) in a Laspeyres index with German versus French weights. c Ex-manufacturer prices (n = 110) in weighted and unweighted indices. d Comparison of ex-manufacturer prices of cardiovascular system drugs (n = 25), nervous system drugs (n = 29), and all drugs (n = 110) in a Laspeyres index. (Source: IMS Health 2013, Pricing Insights database)
Unit of volume (doses vs. grams of active ingredient)
The results of the unweighted indices fluctuated widely depending on which unit of volume was used (Table 1). By contrast, most of the weighted results remained similar across the two units of volume.5 There were some exceptions: In the Laspeyres indices, for example, the French ex-manufacturer prices were lower than those in Italy when doses were used, whereas they were higher when grams of active ingredients were used (Table 1).
Weighting (Laspeyres vs. Paasche vs. Fisher)
The Paasche indices were always lower than the Laspeyres indices at both the ex-manufacturer and retail levels (Table 1 and Fig. 1b). The Fisher results — which are the geometric means of the Laspeyres and Paasche indices — fell between the latter two.
Figure 1c shows that the Laspeyres values dropped in all countries, except Denmark, when the French weights were used.6 This indicates that those drugs which were more highly consumed in France than in Germany were also cheaper in most of the other countries.
Currency conversion (exchange rates vs. purchasing power parities)
The results were largely unchanged when PPPs — rather than exchange rates — were used to convert sales in local currencies to a common unit. This suggests that variation in drug prices between these seven countries was, for the most part, not due to differences in the costs of goods and services.
Subgroup analyses
Figure 1d shows the ex-manufacturer prices of cardiovascular system drugs and nervous system drugs. The amount of price variation differed across therapeutic groups. In the full sample, there was a 2.5-fold difference in prices between the countries with the highest and lowest prices. By comparison, there were 3.1 and 3.5-fold differences in the prices of nervous system and cardiovascular drugs, respectively. Germany had the second highest prices for nervous system drugs, whereas it had among the lowest prices for cardiovascular system drugs.
In this analysis, we explored differences in the ex-manufacturer and retail prices of generic drugs across seven countries in 2013 using various price indices.
The ex-manufacturer and retail prices varied widely across countries. This is consistent with earlier studies comparing the prices of patented drugs at both levels [1, 13, 14]. More research is needed to disentangle the impact of supply- and demand-side policies, such as pricing, reimbursement, prescribing, and substitution rules, on the ex-manufacturer and retail prices of generics [26]. Price variation is also likely due, in part, to differences in the regulation of wholesaler and pharmacy margins [1].
There are various methods for comparing drug prices across settings [25, 27], and they often produce remarkably different results. For example, the ex-manufacturer Laspeyres index (dose) in Table 1 suggests that the sample of generic drugs was about 60% more expensive in Italy than in Germany. On the other hand, the ex-manufacturer Paasche index (grams of active ingredient) indicates that the sample was about 35% cheaper in Italy than in Germany.
There were even larger differences between some of the weighted and unweighted indices. It might be especially important to use weighted indices when comparing generic drug prices, since studies suggest that these prices are closely linked to volume [28, 29]. Earlier studies have shown that the results of unweighted and weighted indices can differ sharply [4, 25], which is consistent with our findings. Extreme prices can skew the results of unweighted indices, so these indices are generally considered less reliable than weighted ones for comparing drug prices [25].
There is no consensus on which weighting method is most appropriate for comparing drug prices, as each has advantages and disadvantages [12, 25]. Academic and government studies have variously calculated unweighted [9, 10], Fisher [11], Paasche [4, 25], and Laspeyres indices [4, 17, 25, 30], often using different units of volume and/or base countries. The likely reason why Paasche results are usually lower than Laspeyres results, a finding which has been reported in previous drug price indices [4, 25], is that patients tend to consume more of the drugs that are cheaper in their countries. Therefore, when prices are weighted by local consumption, the indices show lower average prices — relative to the base country — than when prices are weighted by consumption in the base country.
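A toy numerical example (with invented prices and quantities) illustrates this mechanism: if each of two countries consumes more of whichever drug is locally cheaper, the same set of prices yields a Laspeyres index well above 100 and a Paasche index well below it.

# Toy example (invented numbers): why Paasche tends to sit below Laspeyres.
p_base = {"A": 1.0, "B": 2.0}   # prices in the base country
p_comp = {"A": 2.0, "B": 1.0}   # prices in the comparator country
q_base = {"A": 8, "B": 2}       # base country consumes more of its cheap drug A
q_comp = {"A": 2, "B": 8}       # comparator consumes more of its cheap drug B

lasp_num = sum(p_comp[d] * q_base[d] for d in p_base)
lasp_den = sum(p_base[d] * q_base[d] for d in p_base)
paas_num = sum(p_comp[d] * q_comp[d] for d in p_base)
paas_den = sum(p_base[d] * q_comp[d] for d in p_base)
print(100 * lasp_num / lasp_den, 100 * paas_num / paas_den)   # 150.0 and about 66.7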
The choice of unit of volume can influence the results if there are large, systematic differences between countries in the average strength per dose [25]. For example, previous studies have found that price-index results for Japan vary significantly depending on whether number of doses or grams of active ingredient serve as the unit of volume [3, 4, 17, 18, 25]. The authors of those studies attributed this finding to the tendency of Japanese clinicians to prescribe higher quantities of lower-strength products.
Despite such methodological challenges, it is still possible to glean useful information from price indices. In particular, it is important to look for consistency across indices. As an example, our results indicate that Denmark and Sweden had the lowest ex-manufacturer prices in nearly all weighted indices, regardless of whether Germany or France served as the base country. This strongly suggests that generic drugs were cheaper in Denmark and Sweden in 2013 than in the other five countries. By contrast, the French and Italian ex-manufacturer prices were among the highest in all weighted indices. Ideally, the results of price indices should be interpreted alongside other quantitative and qualitative data about the impact of individual policies on drug prices. On their own, price indices do not provide causal evidence on the effects of pricing and reimbursement rules, generic substitution laws, and other factors on the prices of generic drugs.
The findings in this study raise questions which merit further research. Both Sweden and Denmark operate tender-like systems for generic drugs,7 which may account for the low prices observed in each country [31, 32]. Tendering refers to the bulk purchase of generic drugs from the manufacturers that offer the lowest prices [33]. More work is needed to understand the impact of tendering on drug prices, and whether any observed price reductions can be sustained over time. There is concern that relying exclusively on tendering to procure generic drugs could create product shortages, drive generic drug firms out of business, and lead to higher generic drug prices over time [33]. There is little evidence, however, on the long-term effects of tendering.
It is also important to examine why there are large differences in the prices of drugs in various therapeutic areas, both within and between countries. Such variation may, in part, reflect market factors. For example, the marketing exclusivity for a drug can expire at different times across high-income countries depending on when the drug was approved in each jurisdiction. Also, some studies have observed an inverse relationship between the number of competitors in the market and generic drug prices [34, 35]. The speed of generic entry, in turn, has been found to be correlated with how much brand-name firms record in revenue in the years leading up to patent expiry [36, 37]. In other words, generic firms tend to prioritize more lucrative drug markets.
This study has limitations, most of which are inherent to drug price indices.
First, the data did not account for confidential discounts, which can be as high as 50% for some generic drugs in certain countries [38]. List prices may, therefore, not have corresponded to the actual prices paid. However, if the profits from discounts accrue to wholesalers or pharmacists rather than to payers, then list prices remain the more relevant measure for payers.
Second, Paasche and Laspeyres indices are underpinned by assumptions about the relationship between generic drug prices and usage which may not always hold. Specifically, the results of Laspeyres indices are valid if demand for prescription medicines is price inelastic, an assumption that empirical findings contradict [39, 40]. The Paasche index instead assumes that the consumption pattern in the base country would look exactly like that of the comparator country if both had the same prices. The latter assumption might be less likely to hold true, since there are differences between countries in standards of care, disease prevalence rates, prescription drug coverage, and patient preferences — all of which can affect demand [25].
Third, by restricting the analysis to a common sample of drugs, we reduced the sample size. In some previous price indices for patented drugs, researchers instead conducted a series of comparisons between the base country and one other country at a time, looking at the drugs available in both countries. Such comparisons, which are called bilateral analyses, maximize the sample size for each country pair. We chose to instead calculate what are known as multilateral indices, which compare the prices of a sample of drugs available in all study countries. Multilateral indices provide information on how prices compare across all the countries rather than just between each pair. While a common sample might over-represent older, internationally available products [25], this is less of a concern when comparing generic drug prices. However, it is important to note that two countries with identical prices could show up as having differing price levels in a Paasche index if consumption patterns differ. Thus, multilateral price comparisons using Paasche indices should be interpreted with caution.
Fourth, we used common units of volume to aggregate data across formulations of active ingredients [25]. In using prices per dose, however, we assumed that a dose of a drug provides the same therapeutic benefit to any patient, regardless of strength-form combination. By contrast, prices per gram of active ingredient are sensitive to the selection of drugs, given that drug strengths often vary considerably between drugs [11]. The price per defined daily dose is an alternative metric. A defined daily dose is the "assumed average maintenance dose per day for a drug used for its main indication in adults." [41] We could not identify this dose for each drug in our dataset, as we did not have information about drug indications. However, defined daily doses are not always of equal therapeutic value to all patients, and they may not accurately reflect consumption patterns [25]. For example, a defined daily dose is not adjusted for differences in the duration of treatment. They are, therefore, not necessarily a better unit of comparison than doses or grams of active ingredient [25, 41]. Also, because defined daily doses are specified in terms of grams of active ingredient per day, indices based on defined daily doses and indices based on grams should generate similar findings if the average number of treatment days are fairly consistent across countries for most drugs [4].
Fifth, the drugs were listed by active ingredient, and no information was available on the indications for which the drugs were prescribed. However, a prior study found that the results of price indices were "virtually unchanged" when products were defined by active ingredient instead of active ingredient plus indication [25].
Lastly, we had to exclude 6.7% of drugs (213/3156) due to missing volume data.
Generic drug policy is an important topic given rising drug expenditures and concerns about the financial sustainability of many health-care systems. More research is needed to better understand the causes of variation in the prices of generic drugs across countries. This will help to identify which measures are most effective at reducing prices. Our findings suggest that some countries should focus on containing the distribution costs for generic drugs.
There are a number of methodological issues that can arise when trying to compare drug prices internationally. Drugs often differ across countries in terms of names, pack sizes, formulations, strengths, and manufacturers. They can also vary in terms of whether they are sold over-the-counter or through prescriptions, and whether they are sold in hospital or retail pharmacies. There is a trade-off between matching all of these factors — which produces more accurate price comparisons of individual products — and the sample size.
Once a sample of drugs has been chosen, there are various ways of calculating price indices to aggregate the data, each with its own advantages and disadvantages, as discussed in this paper. There is no gold standard for comparing drug prices. Our results showed that such comparisons are highly sensitive to the choice of method — for example, Laspeyres versus Paasche indices — which is consistent with the findings of earlier studies of patented drugs.
Overall, price indices are a useful statistical approach for comparing drug prices across countries, but policymakers and researchers should interpret price indices with caution given their limitations.
The dataset excluded generic drugs sold in hospital pharmacies, off-patent originator drugs, parallel-traded products, and off-patent biological drugs.
IMS Health refers to doses as "standard units."
These values were calculated by multiplying the number of packs sold of each product by the corresponding prices on a quarterly basis. For these calculations, IMS Health relied on the latest prices in each quarter from validated sources, such as government price lists and wholesaler invoices, excluding value-added taxes.
If sales of either <1,000 doses or < €1,000 were recorded in a country for a drug, we decided a priori to exclude the sales figures for that country, as was done in previous studies. Those values may reflect data-entry errors or inconsistencies in reporting across countries.
For the common sample of 110 active ingredients, the average number of grams of active ingredient per dose ranged from 0.09 grams in Sweden to 0.19 grams in Spain.
For ease of comparison to the other results, all prices are expressed in relation to those in Germany (index value = 100).
National government authorities in Denmark and Sweden operate tender-like systems: the relevant authority in each country asks drug makers to offer their best prices, and, in most cases, the cheapest products are the only ones which public payers will reimburse. This bidding process is repeated every two and four weeks in Denmark and Sweden, respectively. Payers in Germany and Spain also tender for generic drugs, but the tender results were kept confidential in 2013. The data, therefore, did not reflect tendering outcomes in either country.
ATC/DDD:
Anatomical therapeutic chemical classification system with defined daily doses
OECD:
Organization for Economic Co-operation and Development
PPP:
Purchasing power parity
We thank Joshua Chauvin, Olina Efthymiadou, Jeroen Luyten, Alessandra Ferrario, Nicola Foster, Erica Visintin, and Martin Wenzl for feedback on early versions of the manuscript.
We are grateful to Claire Machin and Per Troein (IMS Health) for supplying data. The statements, findings, conclusions, views, and opinions contained and expressed in this article are based in part on data obtained under license from the following IMS Health information service: Pricing Insights database, January - December 2013, IMS Health Incorporated. All Rights Reserved. The statements, findings, conclusions, views, and opinions contained and expressed herein are not necessarily those of IMS Health Incorporated or any of its affiliated or subsidiary entities.
The data that support the findings of this study are available from IMS Health but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of IMS Health.
OJW devised the study. OJW analyzed the data and interpreted the findings. PGK supervised the study. OJW drafted the manuscript. Both authors revised the manuscript and agreed on the final version of the paper before submission.
Ethics approval was not required for this manuscript since it does not contain any data collected from human subjects.
Additional file 1: Appendix 1. List of the 200 most-prescribed off-patent active ingredients in Europe in 2013 (anatomical main group in parentheses). Appendix 2. Ex-manufacturer and retail prices with France as the base country (2013). Appendix 3. Ex-manufacturer and retail prices based on PPP adjustments with Germany as the base country (2013). Appendix 4. Ex-manufacturer and retail prices of cardiovascular and nervous system drugs with Germany as the base country (2013). (DOCX 58 kb)
LSE Health, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, UK
1. Kanavos P, Schurer W, Vogler S. The pharmaceutical distribution chain in the European Union: structure and impact on pharmaceutical prices. Brussels: European Commission; 2010.
2. European Commission. Pharmaceutical sector inquiry - final report. Brussels: European Commission; 2009.
3. Danzon PM, Furukawa MF. Prices and availability of biopharmaceuticals: an international comparison. Health Aff. 2006;25(5):1353–62.
4. Danzon PM, Chao LW. Cross-national price differences for pharmaceuticals: how large, and why? J Health Econ. 2000;19(2):159–95.
5. Martikainen J, Kivi I, Linnosmaa I. European prices of newly launched reimbursable pharmaceuticals - a pilot study. Health Policy. 2005;74(3):235–46.
6. Kanavos P, Ferrario A, Vandoros S, Anderson GF. Higher US branded drug prices and spending compared to other countries may stem partly from quick uptake of new drugs. Health Aff. 2013;32(4):753–61.
7. Cameron A, Ewen M, Ross-Degnan D, Ball D, Laing R. Medicine prices, availability, and affordability in 36 developing and middle-income countries: a secondary analysis. Lancet. 2009;373(9659):240–9.
8. Vogler S, Kilpatrick K, Babar ZUD. Analysis of medicine prices in New Zealand and 16 European countries. Value Health. 2015;18(4):484–92.
9. U.S. Government Accountability Office (GAO). Prescription drugs: companies typically charge more in the United States than in the United Kingdom. Washington: U.S. Government Accountability Office (GAO); 1994. p. 56.
10. U.S. Government Accountability Office (GAO). Prescription drugs: companies typically charge more in the United States than in Canada. Washington: U.S. Government Accountability Office (GAO); 1992. p. 42.
11. U.S. Department of Commerce, International Trade Administration. Pharmaceutical price controls in OECD countries: implications for U.S. consumers, pricing, research and development, and innovation. Washington: U.S. Department of Commerce; 2004. p. 125.
12. U.K. Office of Fair Trading (OFT). The pharmaceutical price regulation scheme: an OFT market study. London: U.K. Office of Fair Trading (OFT); 2007.
13. Garattini L, Motterlini N, Cornago D. Prices and distribution margins of in-patent drugs in pharmacy: a comparison in seven European countries. Health Policy. 2008;85(3):305–13.
14. Kanavos PG, Vandoros S. Determinants of branded prescription medicine prices in OECD countries. Health Econ Policy Law. 2011;6(3):337–67.
15. Lee D. The pharmaceutical price regulation scheme - 11th report to parliament. London: Department of Health; 2012.
16. Kanavos P. Measuring performance in off-patent drug markets: a methodological framework and empirical evidence from twelve EU member states. Health Policy. 2014;118(2):229–41.
17. Danzon PM, Furukawa MF. International prices and availability of pharmaceuticals in 2005. Health Aff. 2008;27(1):221–33.
18. Danzon PM, Furukawa MF. Prices and availability of pharmaceuticals: evidence from nine countries. Health Aff. 2003;22(6):W521–36.
19. Simoens S. International comparison of generic medicine prices. Curr Med Res Opin. 2007;23(11):2647–54.
20. Mansfield SJ. Generic drug prices and policy in Australia: room for improvement? a comparative analysis with England. Aust Health Rev. 2014;38(1):6–15.
21. Gooi M, Bell CM. Differences in generic drug prices between the US and Canada. Appl Health Econ Health Policy. 2008;6(1):19–26.
22. Law MR. Money left on the table: generic drug prices in Canada. Healthcare Policy. 2013;8(3):17–25.
23. Aho E, Johansson P, Rönnholm G. International comparison of medicine prices 2015 [in Swedish]. Stockholm: Tandvårds- och läkemedelsförmånsverket; 2015. p. 88.
24. Brekke KB, Holmås TH, Straume OR. Comparing pharmaceutical prices in Europe: a comparison of prescription drug prices in Norway with nine western European countries. Bergen: The Institute for Research in Economics and Business Administration; 2011.
25. Danzon PM, Kim JD. International price comparisons for pharmaceuticals. Measurement and policy issues. PharmacoEconomics. 1998;14 Suppl 1:115–28.
26. Panteli D, Arickx F, Cleemput I, et al. Pharmaceutical regulation in 15 European countries: review. Health Syst Transit. 2016;18(5):1–122.
27. Machado M, O'Brodovich R, Krahn M, Einarson TR. International drug price comparisons: quality assessment. Rev Panam Salud Publica. 2011;29(1):46–51.
28. Dylst P, Simoens S. Does the market share of generic medicines influence the price level? a European analysis. PharmacoEconomics. 2011;29(10):875–82.
29. Danzon PM, Furukawa MF. Cross-national evidence on generic pharmaceuticals: pharmacy vs. physician-driven markets. NBER Working Paper 17226. Cambridge: National Bureau for Economic Research; 2011.
30. Aho E, Johansson P, Rönnholm G. International comparison of medicines prices - an analysis of Swedish medicine prices in relation to 15 European countries [in Swedish]. Stockholm: Tandvårds- och läkemedelsförmånsverket; 2014. p. 81.
31. Tandvårds- och läkemedelsförmånsverket. Prices in our database. 2017. http://www.tlv.se/In-English/price-database/.
32. Danish Medicines Agency. Medicine Prices. 2017.
33. Dylst P, Vulto A, Simoens S. Tendering for outpatient prescription pharmaceuticals: what can be learned from current practices in Europe? Health Policy. 2011;101(2):146–52.
34. Reiffen D, Ward MR. Generic drug industry dynamics. Rev Econ Stat. 2005;87(1):37–49.
35. Wiggins SN, Maness R. Price competition in pharmaceuticals: the case of anti-infectives. Econ Inq. 2004;42(2):247–63.
36. Bae JP. Drug patent expirations and the speed of generic entry. Health Serv Res. 1997;32(1):87–101.
37. Costa-Font J, McGuire A, Varol N. Price regulation and relative delays in generic drug adoption. J Health Econ. 2014;38:1–9.
38. Vogler S, Zimmermann N, Habl C, Piessnegger J, Bucsics A. Discounts and rebates granted to public payers for medicines in European countries. Southern Med Rev. 2012;5(1):38–46.
39. Ellison SF, Cockburn I, Griliches Z, Hausman J. Characteristics of demand for pharmaceutical products: an examination of four cephalosporins. Rand J Econ. 1997;28(3):426–46.
40. Gemmill MC, Costa-Font J, McGuire A. In search of a corrected prescription drug elasticity estimate: a meta-regression approach. Health Econ. 2007;16(6):627–43.
41. WHO Collaborating Centre for Drug Statistics Methodology. DDD: Definition and general considerations. 2009. http://www.whocc.no/ddd/definition_and_general_considera/.
analytic approximation of a non-negative matrix by a sequence of positive matrices
Asked 10 years, 10 months ago
Let $L \in \{0,1\}^{n \times n}$ be a non-negative matrix whose row sums are 1. ($L$ stands for "limit"). It is known that there exists a unique $r \times r$ principal submatrix of $L$ that is a permutation matrix for some $r$, $1 \leq r \leq n$. Fix an $n \times n$ non-negative matrix $\epsilon$ such that $$\epsilon_{ij} = 0 \Leftrightarrow L_{ij} = 1.$$ For $k > 0$, let $$T(k) = L + E(k)$$ be an analytic perturbation of $L$, where $$ E(k)_{ij} = \exp(-k\epsilon_{ij}). $$ Then $T(k)$ is a positive operator, thus it has a unique principal (Perron) eigenvalue and eigenvector, denoted $\rho(k)$ and $v(k)$, respectively. It is known that as $k \to \infty$, $\rho(k)^{1/k}$ converges to 1 and $v(k)^{1/k}$ (elementwise) converges to the all 1's vector. (This is because we forced the row sums of $L$ to be 1. In general these guys converge to the tropical max-times eigenvalue and eigenvector of $L$, respectively, but this is not important to the question).
My question is: for large $k$, what can we say about the "rate of convergence" of the quantities $\rho(k)^{1/k}$ and $v(k)^{1/k}$? By this I mean I want to have an expansion of the type $$\rho(k)^{1/k} = 1 + \mbox{ (higher order terms in $k$ and $\epsilon_{ij}$)}, $$ and I'd like to know the expansion to as much precision as possible.
Some thoughts: the eigenvalues of $L$ are the $r$ roots of unity, each with multiplicity 1, and $0$ with multiplicity $n - r$. Thus easy off-the-shelf bounds on the operator norm of $E(k)$ (and Grioschinn-disks type of arguments) give a rate that is dependent on $r$. However, $\rho(k)^{1/k}$ is a real number bigger than 1, thus this sequence is converging towards the real eigenvalue $1$ from above on the real axis in $\mathbb{C}$. Thus the "correct" convergence rate should not depend on $r$ (and this is true in simulations). The reason is these theorems do not take into account the fact that the perturbed operator is positive and that we really have an analytic perturbation.
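A minimal sketch of such a simulation (the particular 3x3 matrices $L$ and $\epsilon$ below are made up purely for illustration, not taken from any real problem):

import numpy as np

# Toy instance: L is a 0/1 matrix with unit row sums whose unique principal
# permutation submatrix is the 2x2 swap on indices {0, 1}.
L = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [1., 0., 0.]])
# eps_ij = 0 exactly where L_ij = 1, and positive elsewhere (values invented).
eps = np.array([[0.7, 0.0, 1.3],
                [0.0, 0.5, 0.9],
                [0.0, 1.1, 0.8]])

for k in [5, 10, 20, 40, 80]:
    T = L + np.exp(-k * eps)            # the perturbed positive matrix T(k)
    eigvals = np.linalg.eigvals(T)
    rho = eigvals.real.max()            # Perron eigenvalue of the positive matrix
    print(k, rho ** (1.0 / k))          # watch how fast this approaches 1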
Some pointers on how to handle this problem would be much appreciated. I feel that this type of perturbation must have been studied in the literature (especially physics?) I'm also not familiar with analytic perturbation of operators myself - I've read a little of Baumgartel's book but didn't see immediately relevant results. Reference pointers would also be great.
Ngoc
sp.spectral-theory
perturbation
na.numerical-analysis
cv.complex-variables
edited Jul 26, 2012 at 3:59
Ngoc Mai Tran
Who is this Grioschinn, of whom you speak? – Igor Rivin
@IgorRivin: Probably the OP means Gerschgorin. – Federico Poloni
Yes, sorry for the mutated spelling... – Ngoc Mai Tran
Dmitri Alekseevsky, Andreas Kriegl, Mark Losik, Peter W. Michor: Choosing roots of polynomials smoothly, Israel J. Math 105 (1998), p. 203-233. (pdf)
This paper gives an algorithmic approach to the real analytic parameterization of eigenvalues. Maybe this can help. See also here for a later overview of available results.
Peter Michor
I might be misunderstanding something, but why is this not a standard perturbation estimate, of the sort studied exhaustively in the first chapter of Kato's perturbation theory, or Golub-Van Loan, or Horn-Johnson? There will be a difference depending on whether the Perron-Frobenius eigenvalue has multiplicity 1, or higher than one (in the latter case the matrix can be block diagonalized into permutation matrices).
Igor Rivin
To my understanding this is why: in this case the limiting eigenvalue always has multiplicity 1, but there are r-1 other guys of the same modulus. If the perturbation was not known to be of the above form, then these other eigenvalues play a role in the estimate, and hence the standard theory gives a bound on the error term that depends on $r$. But in this case, the perturbation is such that the perturbed eigenvalue always lives on the real axis, and it approaches the limit from the right-hand side. Thus I think the bound on the error term should not depend on $r$, and...
And the convergence should be faster than what can be obtained from Kato chapter 1. Also, this is an analytic perturbation, so we should be able to write down some series expansion explicitly in terms of $\epsilon_{ij}$ and $k$. I don't think these immediately come out of Kato's chapter 1. (But I suspect that it can be obtained elsewhere)
Define nuclear fusion.
Discuss processes to achieve practical fusion energy generation.
While basking in the warmth of the summer sun, a student reads of the latest breakthrough in achieving sustained thermonuclear power and vaguely recalls hearing about the cold fusion controversy. The three are connected. The Sun's energy is produced by nuclear fusion (see Figure 1). Thermonuclear power is the name given to the use of controlled nuclear fusion as an energy source. While research in the area of thermonuclear power is progressing, high temperatures and containment difficulties remain. The cold fusion controversy centered around unsubstantiated claims of practical fusion power at room temperatures.
Figure 1. The Sun's energy is produced by nuclear fusion. (credit: Spiralz)
Nuclear fusion is a reaction in which two nuclei are combined, or fused, to form a larger nucleus. We know that all nuclei have less mass than the sum of the masses of the protons and neutrons that form them. The missing mass times [latex]{c^2}[/latex] equals the binding energy of the nucleus—the greater the binding energy, the greater the missing mass. We also know that [latex]{\text{BE/}A}[/latex] , the binding energy per nucleon, is greater for medium-mass nuclei and has a maximum at Fe (iron). This means that if two low-mass nuclei can be fused together to form a larger nucleus, energy can be released. The larger nucleus has a greater binding energy and less mass per nucleon than the two that combined. Thus mass is destroyed in the fusion reaction, and energy is released (see Figure 2). On average, fusion of low-mass nuclei releases energy, but the details depend on the actual nuclides involved.
Figure 2. Fusion of light nuclei to form medium-mass nuclei destroys mass, because BE/A is greater for the product nuclei. The larger BE/A is, the less mass per nucleon, and so mass is converted to energy and released in these fusion reactions.
The major obstruction to fusion is the Coulomb repulsion between nuclei. Since the attractive nuclear force that can fuse nuclei together is short ranged, the repulsion of like positive charges must be overcome to get nuclei close enough to induce fusion. Figure 3 shows an approximate graph of the potential energy between two nuclei as a function of the distance between their centers. The graph is analogous to a hill with a well in its center. A ball rolled from the right must have enough kinetic energy to get over the hump before it falls into the deeper well with a net gain in energy. So it is with fusion. If the nuclei are given enough kinetic energy to overcome the electric potential energy due to repulsion, then they can combine, release energy, and fall into a deep well. One way to accomplish this is to heat fusion fuel to high temperatures so that the kinetic energy of thermal motion is sufficient to get the nuclei together.
Figure 3. Potential energy between two light nuclei graphed as a function of distance between them. If the nuclei have enough kinetic energy to get over the Coulomb repulsion hump, they combine, release energy, and drop into a deep attractive well. Tunneling through the barrier is important in practice. The greater the kinetic energy and the higher the particles get up the barrier (or the lower the barrier), the more likely the tunneling.
You might think that, in the core of our Sun, nuclei are coming into contact and fusing. However, in fact, temperatures on the order of [latex]{10^8 \;\textbf{K}}[/latex] are needed to actually get the nuclei in contact, exceeding the core temperature of the Sun. Quantum mechanical tunneling is what makes fusion in the Sun possible, and tunneling is an important process in most other practical applications of fusion, too. Since the probability of tunneling is extremely sensitive to barrier height and width, increasing the temperature greatly increases the rate of fusion. The closer reactants get to one another, the more likely they are to fuse (see Figure 4). Thus most fusion in the Sun and other stars takes place at their centers, where temperatures are highest. Moreover, high temperature is needed for thermonuclear power to be a practical source of energy.
Figure 4. (a) Two nuclei heading toward each other slow down, then stop, and then fly away without touching or fusing. (b) At higher energies, the two nuclei approach close enough for fusion via tunneling. The probability of tunneling increases as they approach, but they do not have to touch for the reaction to occur.
The Sun produces energy by fusing protons or hydrogen nuclei [latex]{^1 \textbf{H}}[/latex] (by far the Sun's most abundant nuclide) into helium nuclei [latex]{^4 \text{He}}[/latex]. The principal sequence of fusion reactions forms what is called the proton-proton cycle:
[latex]$\begin{array} {r @{{} \rightarrow{}} l @{{} \;\;\; {}} l} {^1 \textbf{H} + {^1 \textbf{H}}} & {{^2 \textbf{H}} + e ^+ + v_e} & {(0.42 \;\text{MeV})} \\[1em] {{^1 \textbf{H}} + {^2 \textbf{H}}} & {{^3 \text{He}} + \gamma} & {(5.49 \;\text{MeV})} \\[1em] {{^3 \text{He}} + {^3 \text{He}}} & {{^4 \text{He}} + {^1 \textbf{H}} + {^1 \textbf{H}}} & {(12.86 \text{MeV})} \end{array}$[/latex]
where [latex]{e ^+}[/latex] stands for a positron and [latex]{v_e}[/latex] is an electron neutrino. (The energy in parentheses is released by the reaction.) Note that the first two reactions must occur twice for the third to be possible, so that the cycle consumes six protons ( [latex]{^1 \textbf{H}}[/latex] ) but gives back two. Furthermore, the two positrons produced will find two electrons and annihilate to form four more [latex]{\gamma}[/latex] rays, for a total of six. The overall effect of the cycle is thus
[latex]$\begin{array} {r @{{} \rightarrow {}}l @{{} \;\;\; {}} l} {{2e ^-} + 4 {^1 \textbf{H}}} & {{^4 \text{He}} + {2v_e} + {6 \gamma}} & {(26.7 \;\text{MeV})} \end{array}$[/latex]
where the 26.7 MeV includes the annihilation energy of the positrons and electrons and is distributed among all the reaction products. The solar interior is dense, and the reactions occur deep in the Sun where temperatures are highest. It takes about 32,000 years for the energy to diffuse to the surface and radiate away. However, the neutrinos escape the Sun in less than two seconds, carrying their energy with them, because they interact so weakly that the Sun is transparent to them. Negative feedback in the Sun acts as a thermostat to regulate the overall energy output. For instance, if the interior of the Sun becomes hotter than normal, the reaction rate increases, producing energy that expands the interior. This cools it and lowers the reaction rate. Conversely, if the interior becomes too cool, it contracts, increasing the temperature and reaction rate (see Figure 5). Stars like the Sun are stable for billions of years, until a significant fraction of their hydrogen has been depleted. What happens then is discussed in Chapter 34 Introduction to Frontiers of Physics .
Figure 5. Nuclear fusion in the Sun converts hydrogen nuclei into helium; fusion occurs primarily at the boundary of the helium core, where temperature is highest and sufficient hydrogen remains. Energy released diffuses slowly to the surface, with the exception of neutrinos, which escape immediately. Energy production remains stable because of negative feedback effects.
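The reaction energies quoted in the proton-proton cycle above can be checked directly from atomic masses. The short calculation below is an illustrative sketch using standard atomic mass values; note that two electron masses are subtracted in the first step because atomic masses include orbital electrons while the reaction creates a positron.

# Check of the proton-proton cycle energies from atomic masses (in u).
u_to_MeV = 931.5
m_e = 0.000549                      # electron mass in u
m_1H, m_2H = 1.007825, 2.014102
m_3He, m_4He = 3.016030, 4.002603

q1 = (2 * m_1H - m_2H - 2 * m_e) * u_to_MeV       # 1H + 1H -> 2H + e+ + v_e
q2 = (m_1H + m_2H - m_3He) * u_to_MeV             # 1H + 2H -> 3He + gamma
q3 = (2 * m_3He - m_4He - 2 * m_1H) * u_to_MeV    # 3He + 3He -> 4He + 2 1H
print(round(q1, 2), round(q2, 2), round(q3, 2))   # about 0.42, 5.49, 12.86 MeV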
Theories of the proton-proton cycle (and other energy-producing cycles in stars) were pioneered by the German-born, American physicist Hans Bethe (1906–2005), starting in 1938. He was awarded the 1967 Nobel Prize in physics for this work, and he has made many other contributions to physics and society. Neutrinos produced in these cycles escape so readily that they provide us an excellent means to test these theories and study stellar interiors. Detectors have been constructed and operated for more than four decades now to measure solar neutrinos (see Figure 6). Although solar neutrinos are detected and neutrinos were observed from Supernova 1987A (Figure 7), too few solar neutrinos were observed to be consistent with predictions of solar energy production. After many years, this solar neutrino problem was resolved with a blend of theory and experiment that showed that the neutrino does indeed have mass. It was also found that there are three types of neutrinos, each associated with a different type of nuclear decay.
Figure 6. This array of photomultiplier tubes is part of the large solar neutrino detector at the Fermi National Accelerator Laboratory in Illinois. In these experiments, the neutrinos interact with heavy water and produce flashes of light, which are detected by the photomultiplier tubes. In spite of its size and the huge flux of neutrinos that strike it, very few are detected each day since they interact so weakly. This, of course, is the same reason they escape the Sun so readily. (credit: Fred Ullrich)
Figure 7. Supernovas are the source of elements heavier than iron. Energy released powers nucleosynthesis. Spectroscopic analysis of the ring of material ejected by Supernova 1987A observable in the southern hemisphere, shows evidence of heavy elements. The study of this supernova also provided indications that neutrinos might have mass. (credit: NASA, ESA, and P. Challis)
The proton-proton cycle is not a practical source of energy on Earth, in spite of the great abundance of hydrogen ([latex]{^1 \textbf{H}}[/latex]). The reaction [latex]{{^1 \textbf{H}} + {^1 \textbf{H}} \rightarrow {^2 \textbf{H}} + e^+ + v_e}[/latex] has a very low probability of occurring. (This is why our Sun will last for about ten billion years.) However, a number of other fusion reactions are easier to induce. Among them are:
[latex]$\begin{array} {r @{{} \rightarrow{}} l @{{} \;\;\; {}} l} {^2 \textbf{H} + {^2 \textbf{H}}} & {{^3 \textbf{H}} + {^1 \textbf{H}}} & {(4.03 \;\text{MeV})} \\[1em] {{^2 \textbf{H}} + {^2 \textbf{H}}} & {{^3 \text{He}} + n} & {(3.27 \;\text{MeV})} \\[1em] {{^2 \textbf{H}} + {^3 \textbf{H}}} & {{^4 \text{He}} + n} & {(17.59 \text{MeV})} \\[1em] {{^2 \textbf{H}} + {^2 \textbf{H}}} & {{^4 \text{He}} + \gamma} & {(23.85 \;\text{MeV})} \end{array}$[/latex]
Deuterium ([latex]{^2 \textbf{H}}[/latex]) is about 0.015% of natural hydrogen, so there is an immense amount of it in sea water alone. In addition to an abundance of deuterium fuel, these fusion reactions produce large energies per reaction (in parentheses), but they do not produce much radioactive waste. Tritium ([latex]{^3 \textbf{H}}[/latex]) is radioactive, but it is consumed as a fuel (the reaction [latex]{{^2 \textbf{H}} + {^3 \textbf{H}} \rightarrow {^4 \text{He}} + n}[/latex]), and the neutrons and [latex]{\gamma}[/latex] s can be shielded. The neutrons produced can also be used to create more energy and fuel in reactions like
[latex]$\begin{array} {r @{{} \rightarrow {}}l @{{} \;\;\; {}} l} {{n} + {^3 \text{He}}} & {{^4 \text{He}} + {\gamma}} & {(20.58 \;\text{MeV})} \end{array}$[/latex]
[latex]$\begin{array} {r @{{} \rightarrow {}}l @{{} \;\;\; {}} l} {{n} + {^1 \textbf{H}}} & {{^2 \textbf{H}} + {\gamma}} & {(2.22 \;\text{MeV})} \end{array}$[/latex]
Note that these last two reactions, and [latex]{{^2 \textbf{H}} + {^2 \textbf{H}} \rightarrow {^4 \text{He}} + \gamma}[/latex], put most of their energy output into the [latex]{\gamma}[/latex] ray, and such energy is difficult to utilize.
The three keys to practical fusion energy generation are to achieve the temperatures necessary to make the reactions likely, to raise the density of the fuel, and to confine it long enough to produce large amounts of energy. These three factors—temperature, density, and time—complement one another, and so a deficiency in one can be compensated for by the others. Ignition is defined to occur when the reactions produce enough energy to be self-sustaining after external energy input is cut off. This goal, which must be reached before commercial plants can be a reality, has not been achieved. Another milestone, called break-even, occurs when the fusion power produced equals the heating power input. Break-even has nearly been reached and gives hope that ignition and commercial plants may become a reality in a few decades.
Two techniques have shown considerable promise. The first of these is called magnetic confinement and uses the property that charged particles have difficulty crossing magnetic field lines. The tokamak, shown in Figure 8, has shown particular promise. The tokamak's toroidal coil confines charged particles into a circular path with a helical twist due to the circulating ions themselves. In 1995, the Tokamak Fusion Test Reactor at Princeton in the US achieved world-record plasma temperatures as high as 500 million degrees Celsius. This facility operated between 1982 and 1997. A joint international effort is underway in France to build a tokamak-type reactor that will be the stepping stone to commercial power. ITER, as it is called, will be a full-scale device that aims to demonstrate the feasibility of fusion energy. It will generate 500 MW of power for extended periods of time and will achieve break-even conditions. It will study plasmas in conditions similar to those expected in a fusion power plant. Completion is scheduled for 2018.
Figure 8. (a) Artist's rendition of ITER, a tokamak-type fusion reactor being built in southern France. It is hoped that this gigantic machine will reach the break-even point. Completion is scheduled for 2018. (credit: Stephan Mosel, Flickr)
The second promising technique aims multiple lasers at tiny fuel pellets filled with a mixture of deuterium and tritium. Huge power input heats the fuel, evaporating the confining pellet and crushing the fuel to high density with the expanding hot plasma produced. This technique is called inertial confinement, because the fuel's inertia prevents it from escaping before significant fusion can take place. Higher densities have been reached than with tokamaks, but with smaller confinement times. In 2009, the Lawrence Livermore Laboratory (CA) completed a laser fusion device with 192 ultraviolet laser beams that are focused upon a D-T pellet (see Figure 9).
Figure 9. National Ignition Facility (CA). This image shows a laser bay where 192 laser beams will focus onto a small D-T target, producing fusion. (credit: Lawrence Livermore National Laboratory, Lawrence Livermore National Security, LLC, and the Department of Energy)
Example 1: Calculating Energy and Power from Fusion
(a) Calculate the energy released by the fusion of a 1.00-kg mixture of deuterium and tritium, which produces helium. There are equal numbers of deuterium and tritium nuclei in the mixture.
(b) If this takes place continuously over a period of a year, what is the average power output?
Strategy
According to [latex]{{^2 \textbf{H}} + {^3 \textbf{H}} \rightarrow {^4 \text{He}} + n}[/latex], the energy per reaction is 17.59 MeV. To find the total energy released, we must find the number of deuterium and tritium atoms in a kilogram. Deuterium has an atomic mass of about 2 and tritium has an atomic mass of about 3, for a total of about 5 g per mole of reactants or about 200 mol in 1.00 kg. To get a more precise figure, we will use the atomic masses from Appendix A. The power output is best expressed in watts, and so the energy output needs to be calculated in joules and then divided by the number of seconds in a year.
Solution for (a)
The atomic mass of deuterium ([latex]{^2 \textbf{H}}[/latex]) is 2.014102 u, while that of tritium ([latex]{^3 \textbf{H}}[/latex]) is 3.016049 u, for a total of 5.032151 u per reaction. So a mole of reactants has a mass of 5.03 g, and in 1.00 kg there are [latex]{(1000 \;\textbf{g})/(5.03 \;\text{g/mol}) = 198.8 \;\text{mol of reactants}}[/latex]. The number of reactions that take place is therefore
[latex]{(198.8 \;\text{mol})(6.02 \times 10^{23} \;\text{mol}^{-1}) = 1.20 \times 10^{26} \;\text{reactions}}[/latex]
The total energy output is the number of reactions times the energy per reaction:
[latex]{E = (1.20 \times 10^{26} \;\text{reactions})(17.59 \;\text{MeV/reaction}) (1.602 \times 10^{-13} \;\text{J/MeV}) = 3.37 \times 10^{14} \;\textbf{J}}[/latex]
Solution for (b)
Power is energy per unit time. One year has [latex]{3.16 \times 10^7 \;\textbf{s}}[/latex], so
[latex]$\begin{array}{ r @{{}={}} l} {P} & {\frac{E}{t} = \frac{3.37 \times 10^{14} \;\textbf{J}}{3.16 \times 10^7 \;\textbf{s}}} \\[1em] & {1.07 \times 10^7 \;\textbf{W} = 10.7 \;\text{MW}} \end{array}$[/latex]
Discussion
By now we expect nuclear processes to yield large amounts of energy, and we are not disappointed here. The energy output of [latex]{3.37 \times 10^{14} \;\textbf{J}}[/latex] from fusing 1.00 kg of deuterium and tritium is equivalent to 2.6 million gallons of gasoline and about eight times the energy output of the bomb that destroyed Hiroshima. Yet the average backyard swimming pool has about 6 kg of deuterium in it, so that fuel is plentiful if it can be utilized in a controlled manner. The average power output over a year is more than 10 MW, impressive but a bit small for a commercial power plant. About 32 times this power output would allow generation of 100 MW of electricity, assuming an efficiency of one-third in converting the fusion energy to electrical energy.
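The arithmetic in Example 1 is also easy to script. The sketch below simply reproduces the two calculations with the same constants quoted in the solution above.

# Reproduce Example 1: energy and average power from fusing 1.00 kg of D-T fuel.
N_A = 6.02e23                     # Avogadro's number (per mol)
MeV_to_J = 1.602e-13              # joules per MeV
E_reaction = 17.59                # MeV per 2H + 3H -> 4He + n reaction

molar_mass = 2.014102 + 3.016049  # grams per mole of reactant pairs
moles = 1000.0 / molar_mass       # about 198.8 mol in 1.00 kg
reactions = moles * N_A           # about 1.20e26 reactions
E_total = reactions * E_reaction * MeV_to_J   # about 3.37e14 J

seconds_per_year = 3.16e7
P_average = E_total / seconds_per_year        # about 1.07e7 W, i.e. 10.7 MW
print(E_total, P_average)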
Nuclear fusion is a reaction in which two nuclei are combined to form a larger nucleus. It releases energy when light nuclei are fused to form medium-mass nuclei.
Fusion is the source of energy in stars, with the proton-proton cycle,
[latex]$\begin{array} {r @{{} \rightarrow{}} l @{{} \;\;\; {}} l} {^1 \textbf{H} + {^1 \textbf{H}}} & {{^2 \textbf{H}} + e ^+ + v_e} & {(0.42 \;\text{MeV})} \\[1em] {{^1 \textbf{H}} + {^2 \textbf{H}}} & {{^3 \text{He}} + \gamma} & {(5.49 \;\text{MeV})} \\[1em] {{^3 \text{He}} + {^3 \text{He}}} & {{^4 \text{He}} + {^1 \textbf{H}} + {^1 \textbf{H}}} & {(12.86 \;\text{MeV})} \end{array}$[/latex]
being the principal sequence of energy-producing reactions in our Sun.
The overall effect of the proton-proton cycle is
[latex]$\begin{array} {r @{{} \rightarrow {}}l @{{} \;\;\; {}} l} {{2e ^-} + 4 {^1 \textbf{H}}} & {{^4 \text{He}} + {2v_e} + {6 \gamma}} & {(26.7 \;\text{MeV})} \end{array}$[/latex]
where the 26.7 MeV includes the energy of the positrons emitted and annihilated.
Attempts to utilize controlled fusion as an energy source on Earth are related to deuterium and tritium, and reactions such as [latex]{{^2 \textbf{H}} + {^3 \textbf{H}} \rightarrow {^4 \text{He}} + n}[/latex] and [latex]{{^2 \textbf{H}} + {^2 \textbf{H}} \rightarrow {^3 \text{He}} + n}[/latex] play important roles.
Ignition is the condition under which controlled fusion is self-sustaining; it has not yet been achieved. Break-even, in which the fusion energy output is as great as the external energy input, has nearly been achieved.
Magnetic confinement and inertial confinement are the two methods being developed for heating fuel to sufficiently high temperatures, at sufficient density, and for sufficiently long times to achieve ignition. The first method uses magnetic fields and the second method uses the momentum of impinging laser beams for confinement.
Conceptual Questions
1: Why does the fusion of light nuclei into heavier nuclei release energy?
2: Energy input is required to fuse medium-mass nuclei, such as iron or cobalt, into more massive nuclei. Explain why.
3: In considering potential fusion reactions, what is the advantage of the reaction [latex]{{^2 \textbf{H}} + {^3 \textbf{H}} \rightarrow {^4 \text{He}} + n}[/latex] over the reaction [latex]{{^2 \textbf{H}}+{^2 \textbf{H}} \rightarrow {^3 \text{He}} +n}[/latex] ?
4: Give reasons justifying the contention made in the text that energy from the fusion reaction [latex]{{^2 \textbf{H}} + {^2 \textbf{H}} \rightarrow {^4 \text{He}} + \gamma}[/latex] is relatively difficult to capture and utilize.
Problems & Exercises
1: Verify that the total number of nucleons, total charge, and electron family number are conserved for each of the fusion reactions in the proton-proton cycle in
[latex]{{^1 \textbf{H}} + {^1 \textbf{H}} \rightarrow {^2 \textbf{H}} + e^+ + v_e ,}[/latex]
[latex]{{^1 \textbf{H}} + {^2 \textbf{H}} \rightarrow {^3 \text{He}} + \gamma ,}[/latex]
[latex]{{^3 \text{He}} + {^3 \text{He}} \rightarrow {^4 \text{He}} + {^1 \textbf{H}} + {^1 \textbf{H}} .}[/latex]
(List the value of each of the conserved quantities before and after each of the reactions.)
2: Calculate the energy output in each of the fusion reactions in the proton-proton cycle, and verify the values given in the above summary.
3: Show that the total energy released in the proton-proton cycle is 26.7 MeV, considering the overall effect in [latex]{{^1 \textbf{H}} + {^1 \textbf{H}} \rightarrow {^2 \textbf{H}} + e^+ + v_e}[/latex], [latex]{^1 \textbf{H} + {^2 \textbf{H}} \rightarrow {^3 \text{He} + \gamma} }[/latex] , and [latex]{{^3 \text{He}} + {^3 \text{He}} \rightarrow {^4 \text{He}} + {^1 \textbf{H}} + {^1 \textbf{H}}}[/latex] and being certain to include the annihilation energy.
4: Verify by listing the number of nucleons, total charge, and electron family number before and after the cycle that these quantities are conserved in the overall proton-proton cycle in [latex]{2e^- + 4 {^1 \textbf{H}} \rightarrow {^4 \text{He}} + 2v_{\textbf{e}} + 6 \gamma}[/latex].
5: The energy produced by the fusion of a 1.00-kg mixture of deuterium and tritium was found in Example Example 1 – Calculating Energy and Power from Fusion. Approximately how many kilograms would be required to supply the annual energy use in the United States?
6: Tritium is naturally rare, but can be produced by the reaction [latex]{n + {^2 \textbf{H}} \rightarrow {^3 \textbf{H}} + \gamma}[/latex]. How much energy in MeV is released in this neutron capture?
7: Two fusion reactions mentioned in the text are
[latex]{n + {^3 \text{He}} \rightarrow {^4 \text{He}} + \gamma}[/latex]
[latex]{n+ {^1 \textbf{H}} \rightarrow {^2 \textbf{H}}+ \gamma}[/latex].
Both reactions release energy, but the second also creates more fuel. Confirm that the energies produced in the reactions are 20.58 and 2.22 MeV, respectively. Comment on which product nuclide is most tightly bound, [latex]{^4 \text{He}}[/latex] or [latex]{^2 \textbf{H}}[/latex].
8: (a) Calculate the number of grams of deuterium in an 80,000-L swimming pool, given deuterium is 0.0150% of natural hydrogen.
(b) Find the energy released in joules if this deuterium is fused via the reaction [latex]{{^2 \textbf{H}} + {^2 \textbf{H}} \rightarrow {^3 \text{He}} + n}[/latex].
(c) Could the neutrons be used to create more energy?
(d) Discuss the amount of this type of energy in a swimming pool as compared to that in, say, a gallon of gasoline, also taking into consideration that water is far more abundant.
9: How many kilograms of water are needed to obtain the 198.8 mol of deuterium, assuming that deuterium is 0.01500% (by number) of natural hydrogen?
10: The power output of the Sun is [latex]{4 \times 10^{26} \;\textbf{W}}[/latex].
(a) If 90% of this is supplied by the proton-proton cycle, how many protons are consumed per second?
(b) How many neutrinos per second should there be per square meter at the Earth from this process? This huge number is indicative of how rarely a neutrino interacts, since large detectors observe very few per day.
Another set of reactions that result in the fusing of hydrogen into helium in the Sun and especially in hotter stars is called the carbon cycle. It is
[latex]$\begin{array}{l @{{}\rightarrow{}}l} {^{12} \textbf{C} + ^1 \textbf{H}} & {^{13} \textbf{N} + \gamma ,} \\[1em] {^{13} \textbf{N}} & {^{13} \textbf{C} + e^+ + v_{e} ,} \\[1em] {^{13} \textbf{C} + ^1 \textbf{H}} & {^{14} \textbf{N} + \gamma ,} \\[1em] {^{14} \textbf{N} + ^{1} \textbf{H}} & {^{15} \textbf{O}+ \gamma ,} \\[1em] {^{15} \textbf{O}} & {^{15} \textbf{N} + e^+ + v_e,} \\[1em] {^{15} \textbf{N} + ^1 \textbf{H}} & {^{12} \textbf{C} + ^4 \text{He} .} \end{array}$[/latex]
11: Write down the overall effect of the carbon cycle (as was done for the proton-proton cycle in [latex]{2e^- + 4 {^1 \textbf{H}} \rightarrow {^4 \text{He}} + 2v_e + 6 \gamma}[/latex]). Note the number of protons ([latex]{^1 \textbf{H}}[/latex]) required and assume that the positrons ([latex]{e^+}[/latex]) annihilate electrons to form more [latex]{\gamma}[/latex] rays.
12: (a) Find the total energy released in MeV in each carbon cycle (elaborated in the above problem) including the annihilation energy.
(b) How does this compare with the proton-proton cycle output?
13: Verify that the total number of nucleons, total charge, and electron family number are conserved for each of the fusion reactions in the carbon cycle given in the above problem. (List the value of each of the conserved quantities before and after each of the reactions.)
14: Integrated Concepts
The laser system tested for inertial confinement can produce a 100-kJ pulse only 1.00 ns in duration. (a) What is the power output of the laser system during the brief pulse?
(b) How many photons are in the pulse, given their wavelength is [latex]{1.06 \;\mu \text{m}}[/latex] ?
(c) What is the total momentum of all these photons?
(d) How does the total photon momentum compare with that of a single 1.00 MeV deuterium nucleus?
15: Find the amount of energy given to the [latex]{^4 \text{He}}[/latex] nucleus and to the [latex]{\gamma}[/latex] ray in the reaction [latex]{n + {^{3} \text{He}} \rightarrow {^{4} \text{He}} + \gamma}[/latex], using the conservation of momentum principle and taking the reactants to be initially at rest. This should confirm the contention that most of the energy goes to the [latex]{\gamma}[/latex] ray.
16: (a) What temperature gas would have atoms moving fast enough to bring two [latex]{^3 \text{He}}[/latex] nuclei into contact? Note that, because both are moving, the average kinetic energy only needs to be half the electric potential energy of these doubly charged nuclei when just in contact with one another.
(b) Does this high temperature imply practical difficulties for doing this in controlled fusion?
17: (a) Estimate the years that the deuterium fuel in the oceans could supply the energy needs of the world. Assume world energy consumption to be ten times that of the United States which is [latex]{8 \times 10^{19}}[/latex] J/y and that the deuterium in the oceans could be converted to energy with an efficiency of 32%. You must estimate or look up the amount of water in the oceans and take the deuterium content to be 0.015% of natural hydrogen to find the mass of deuterium available. Note that approximate energy yield of deuterium is [latex]{3.37 \times 10^{14}}[/latex] J/kg.
(b) Comment on how much time this is by any human measure. (It is not an unreasonable result, only an impressive one.)
Glossary
break-even
when fusion power produced equals the heating power input
ignition
when a fusion reaction produces enough energy to be self-sustaining after external energy input is cut off
inertial confinement
a technique that aims multiple lasers at tiny fuel pellets evaporating and crushing them to high density
magnetic confinement
a technique in which charged particles are trapped in a small region because of difficulty in crossing magnetic field lines
nuclear fusion
a reaction in which two nuclei are combined, or fused, to form a larger nucleus
proton-proton cycle
the combined reactions [latex]{{^1 \textbf{H}} + {^1 \textbf{H}} \rightarrow {^2 \textbf{H}} + e^+ + v_e}[/latex], [latex]{{^1 \textbf{H}} + {^2 \textbf{H}} \rightarrow {^3 \text{He}} + \gamma}[/latex], and [latex]{{^3 \text{He}} + {^3 \text{He}} \rightarrow {^4 \text{He}} + {^1 \textbf{H}} + {^1 \textbf{H}}}[/latex]
1: (a) [latex]{A = 1+1=2}[/latex] , [latex]{Z=1+1=1+1}[/latex] , [latex]{\text{efn}=0= -1+1}[/latex]
(b) [latex]{A=1+2=3}[/latex] , [latex]{Z=1+1=2}[/latex] , [latex]{\text{efn}=0=0}[/latex]
(c) [latex]{A = 3+3=4+1+1}[/latex] , [latex]{Z=2+2=2+1+1}[/latex] , [latex]{\text{efn}=0=0}[/latex]
3: [latex]$\begin{array}{r @{{}={}} l} {E} & {(m_{\textbf{i}} - m_{\textbf{f}})c^2} \\[1em] & {[4m(^1\textbf{H}) - m(^4 \text{He})]c^2} \\[1em] & {[4(1.007825) - 4.002603](931.5 \;\text{MeV})} \\[1em] & {26.73 \;\text{MeV}} \end{array}$[/latex]
5: [latex]{3.12 \times 10^5 \;\text{kg}}[/latex] (about 200 tons)
7: [latex]$\begin{array}{r @{{}={}} l} {E} & {(m_{\textbf{i}} - m_{\textbf{f}})c^2} \\[1em] {E_1} & { (1.008665 + 3.016030 - 4.002603)(931.5 \;\text{MeV})} \\[1em] & {20.58 \;\text{MeV}} \\[1em] {E_2} & {(1.008665 + 1.007825 - 2.014102)(931.5 \;\text{MeV})} \\[1em] & {2.224 \;\text{MeV}} \end{array}$[/latex]
[latex]{^4 \text{He}}[/latex] is more tightly bound, since this reaction gives off more energy per nucleon.
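As a quick numerical cross-check of the mass-difference arithmetic quoted in these answers (an illustrative sketch only, using the atomic mass values cited above; it is not part of the original solutions):

# Q-value arithmetic; masses in atomic mass units (u), 1 u = 931.5 MeV/c^2.
U_TO_MEV = 931.5

def q_value(initial_masses, final_masses):
    # energy released in MeV for the given initial and final masses (in u)
    return (sum(initial_masses) - sum(final_masses)) * U_TO_MEV

print(q_value([4 * 1.007825], [4.002603]))        # 4 1H -> 4He, ~26.73 MeV
print(q_value([1.008665, 3.016030], [4.002603]))  # n + 3He -> 4He, ~20.58 MeV
print(q_value([1.008665, 1.007825], [2.014102]))  # n + 1H -> 2H, ~2.224 MeV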
9: [latex]{1.19 \times 10^4 \;\text{kg}}[/latex]
11: [latex]{2e^- + 4\,^{1}\textbf{H} \rightarrow \,^{4}\text{He} + 7\gamma + 2\nu_e}[/latex]
13: (a) [latex]{A=12+1=13}[/latex] , [latex]{Z=6+1=7}[/latex] , [latex]{\text{efn}=0=0}[/latex]
(b) [latex]{A=13=13}[/latex] , [latex]{Z=7=6+1}[/latex] , [latex]{\text{efn}=0=-1+1}[/latex]
(c) [latex]{A=13+1=14}[/latex] , [latex]{Z=6+1=7}[/latex] , [latex]{\text{efn}=0=0}[/latex]
(d) [latex]{A=14+1=15}[/latex] , [latex]{Z=7+1=8}[/latex] , [latex]{\text{efn}=0=0}[/latex]
(e) [latex]{A=15=15}[/latex] , [latex]{Z=8=7+1}[/latex] , [latex]{\text{efn}=0=-1+1}[/latex]
(f) [latex]{A=15+1=12+4}[/latex] , [latex]{Z=7+1=6+2}[/latex] , [latex]{\text{efn} = 0 = 0}[/latex]
15: [latex]{E_{\gamma} = 20.6 \;\text{MeV}}[/latex]
[latex]{E_{^4 \text{He}} = 5.68 \times 10^{-2} \;\text{MeV}}[/latex]
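As a rough check of this energy split (an illustrative calculation only: it assumes Q = 20.58 MeV from the earlier answer and m(⁴He)c² = 4.002603 u × 931.5 MeV/u, and treats the recoiling nucleus non-relativistically; small differences from the quoted values come from rounding and approximation order):

import math

# With the reactants at rest, p_gamma = p_He and E_gamma = p c, E_He ~ p^2/(2m),
# so Q = E_gamma + E_gamma^2 / (2 m c^2); solve the quadratic for E_gamma.
Q = 20.58                   # MeV
m_he_c2 = 4.002603 * 931.5  # MeV

E_gamma = m_he_c2 * (math.sqrt(1.0 + 2.0 * Q / m_he_c2) - 1.0)
E_he = Q - E_gamma
print(E_gamma, E_he)        # ~20.5 MeV to the photon, ~6e-2 MeV to the nucleus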
17: (a) [latex]{3 \times 10^9 \;\text{y}}[/latex]
(b) This is approximately half the lifetime of the Earth.
February 2012, 5(1): 183-189. doi: 10.3934/dcdss.2012.5.183
Stripe patterns and the Eikonal equation
Mark A. Peletier 1, and Marco Veneroni 2,
Dept. of Mathematics and Computer Science and Institute for Complex Molecular Systems, Technische Universiteit Eindhoven, PO Box 513, 5600 MB Eindhoven, Netherlands
Technische Universität Dortmund, Fakultät für Mathematik, Lehrstuhl I, Vogelpothsweg 87, 44227 Dortmund, Germany
Received April 2009 Revised December 2009 Published February 2011
We study a new formulation for the Eikonal equation $|\nabla u| = 1$ on a bounded subset of $\mathbb{R}^2$. Considering a field $P$ of orthogonal projections onto $1$-dimensional subspaces, with $\operatorname{div} P \in L^2$, we prove existence and uniqueness for solutions of the equation $P \operatorname{div} P = 0$. We give a geometric description, comparable with the classical case, and we prove that such solutions exist only if the domain is a tubular neighbourhood of a regular closed curve.
This formulation provides a useful approach to the analysis of stripe patterns. It is specifically suited to systems where the physical properties of the pattern are invariant under rotation over 180 degrees, such as systems of block copolymers or liquid crystals.
Keywords: orientable vector fields, Eikonal equation, pattern formation, Gamma-convergence, block copolymers.
Mathematics Subject Classification: 35L65, 35B6.
Citation: Mark A. Peletier, Marco Veneroni. Stripe patterns and the Eikonal equation. Discrete & Continuous Dynamical Systems - S, 2012, 5 (1) : 183-189. doi: 10.3934/dcdss.2012.5.183
Evaluating elliptic integrals
I am interested in evaluating some elliptic integrals, and I have not been able to secure a reference to do exactly what I need. Most of the references I've found seem to focus on reducing more general elliptic integrals to Legendre form, but leave out the part about actually dealing with complete elliptic integrals. In particular, I am interested in the following toy problem, which is to show the following: $$\displaystyle \int_0^\infty \frac{dx}{\sqrt{x(x+2)(x+3)}} = \sqrt{2}K(-1/2),$$ where $K(k)$ is the complete elliptic integral of the first kind. The above result is due to Wolfram Alpha.
I tried the obvious substitution which is $x = \tan \theta$, and after some labour we obtain the integral
$$\displaystyle \int_0^{\pi/2} \frac{2 d \theta}{\sqrt{7 \sin 2\theta + 5 \sin^2 2 \theta + 5 \sin 2 \theta \cos 2 \theta}},$$ which again can be checked to evaluate to $\sqrt{2}K(-1/2)$, although in this case Wolfram only gave the numerical value and not the closed form. Further, this last one does not look like what the 'correct' form should be, which is
$$\displaystyle \int_0^{\pi/2} \frac{\sqrt{2} d \theta}{\sqrt{1 - (1/4)\sin^2 \theta}}.$$
Based on some data I got from playing around with Wolfram, I suspect that the following is true: Suppose that $0 < a < b$. Then
$$\displaystyle \int_0^\infty \frac{dx}{\sqrt{x(x+a)(x+b)}} = \frac{2 K(1 - b/a)}{\sqrt{a}}.$$
Any help would be much appreciated, I apologize if this problem is in fact trivial or well-known.
na.numerical-analysis
special-functions
elliptic-integrals
asked Oct 16, 2015 at 3:56
Stanley Yao Xiao
$\begingroup$ Mathematica yields the last integral instantly, so why "suspect"? Or are you looking for an explicit proof? $\endgroup$
– Suvrit
$\begingroup$ @Suvrit admittedly, my skill at using mathematica is limited, as I am not familiar with its ability to do symbolic manipulation. But yes, I would like an explicit argument $\endgroup$
– Stanley Yao Xiao
$\begingroup$ You may want to look at Handbook of Elliptic Integrals for Engineers and Scientists by Byrd and Friedman (Springer, 1971). It contains many explicit formulas of this kind. Moreover, early in the book, they discuss how to reduce general elliptic integrals (with integrands rational in the square root of a polynomial up to degree four) to the standard ones (first, second or third kind). $\endgroup$
– Igor Khavkine
This question has been out for a while, and I think it deserves a thorough answer, even tho I came quite late to this party.
As both the classical Legendre-Jacobi theory and the Carlson theory have been mentioned by other users, I'll treat the OP's integral from both viewpoints.
Legendre-Jacobi
The OP came pretty close to using the correct substitution. One thing that could have been done instead is to recall the Pythagorean identity $1+\tan^2 u=\sec^2 u$, so that the substitution that should have been used is $x=a \tan^2 u$, where $a=2$ or $a=3$. Taking the smaller value of $a$, and after some amount of algebra, we obtain
$$\begin{align*}\int_0^\infty\frac{\mathrm dx}{\sqrt{x(x+2)(x+3)}}&=\int_0^{\pi/2}\frac{2}{\sqrt{3-\sin^2u}}\mathrm du\\ &=\frac2{\sqrt{3}}\int_0^{\pi/2}\frac{\mathrm du}{\sqrt{1-\frac13\sin^2u}}\\&=\frac2{\sqrt{3}}K\left(\frac13\right)\end{align*}$$
where I use the parameter convention for elliptic integrals. (This is the same convention used in Abramowitz and Stegun. Relatedly, see here for an extended discussion on the notational confusion surrounding elliptic integrals.)
Had we chosen the substitution with $a=3$ instead, we would have instead obtained the result $\sqrt{2}K\left(-\frac12\right)$, which is equivalent through the imaginary modulus transformation
$$K(-m)=\frac1{\sqrt{1+m}}K\left(\frac{m}{m+1}\right),\quad m>0$$
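As a quick numerical sanity check of these two equivalent closed forms (an illustrative sketch only, assuming NumPy and SciPy are available; K(m) below is the complete elliptic integral of the first kind in the parameter convention of this answer, computed directly from its defining integral):

import numpy as np
from scipy.integrate import quad

def K(m):
    # complete elliptic integral of the first kind, parameter convention
    val, _ = quad(lambda t: 1.0 / np.sqrt(1.0 - m * np.sin(t) ** 2), 0.0, np.pi / 2)
    return val

# the original integral, with x = u^2 to remove the 1/sqrt(x) endpoint singularity
direct, _ = quad(lambda u: 2.0 / np.sqrt((u * u + 2.0) * (u * u + 3.0)), 0.0, np.inf)

print(direct)                             # ~2.00216
print(2.0 / np.sqrt(3.0) * K(1.0 / 3.0))  # same value
print(np.sqrt(2.0) * K(-0.5))             # same value, imaginary-modulus form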
In Carlson's theory, there is the general hypergeometric function
$$R_{-a}(b_1,\dots,b_k;z_1,\dots,z_k)=\frac1{\mathbf B\left(a,-a+\sum_j b_j\right)}\int_0^\infty u^{-a-1+\sum_j b_j}\prod_j \left(u+z_j\right)^{-b_j}\mathrm du$$
where $\mathbf B(x,y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$ is the usual Euler beta function.
This multivariate hypergeometric function is a homogenized/symmetrized version of the classical Lauricella $F_D$ function (see e.g. this paper where Carlson introduced his function, tho that reference uses an opposite sign convention for $a$).
With this consideration, the OP's original integral can be expressed either as a three-variable Carlson integral, or a two-variable Carlson integral:
$$\begin{align*}\int_0^\infty\frac{\mathrm dx}{\sqrt{x(x+2)(x+3)}}&=2\,R_{-\frac12}\left(\frac12,\frac12,\frac12;0,2,3\right)\\&=\pi\,R_{-\frac12}\left(\frac12,\frac12;2,3\right)\end{align*}$$
The three-variable (incomplete) case occurs often enough that it is given the notation
$$\begin{align*}R_F(x,y,z)&=\frac12\int_0^\infty\frac{\mathrm du}{\sqrt{(u+x)(u+y)(u+z)}}\\&=R_{-\frac12}\left(\frac12,\frac12,\frac12;x,y,z\right)\end{align*}$$
and the two-variable form corresponds to the so-called "complete case",
$$\begin{align*}R_K(x,y)&=R_{-\frac12}\left(\frac12,\frac12;x,y\right)\\&=\frac2{\pi}R_F(0,x,y)\end{align*}$$
In fact, the two-variable form has the integral representation
$$R_K(x,y)=\frac2{\pi}\int_0^{\pi/2}\frac{\mathrm du}{\sqrt{x\cos^2 u+y\sin^2 u}}$$
which one recognizes to be related to the integral representation of Gauss's arithmetic-geometric mean (AGM):
$$R_K(x,y)=\frac1{\operatorname{agm}(\sqrt{x},\sqrt{y})}$$
Thus, the OP's integral is $\pi/\operatorname{agm}(\sqrt{2},\sqrt{3})$.
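The AGM form lends itself to an even simpler check (again only an illustrative sketch, in plain Python with no special-function library needed):

import math

def agm(a, b, tol=1e-15):
    # arithmetic-geometric mean of a and b
    while abs(a - b) > tol * max(a, b):
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return a

print(math.pi / agm(math.sqrt(2.0), math.sqrt(3.0)))  # ~2.00216, matching the quadrature above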
Additionally, the two-variable Carlson function is also related to the Gauss hypergeometric function, being a homogeneous version of it:
$$R_{-a}(b_1,b_2;x,y)=y^{-a}{}_2 F_1\left({{a,b_1}\atop{b_1+b_2}}\middle|1-\frac{x}{y}\right)$$
so one has
$$\begin{align*}\pi\,R_{-\frac12}\left(\frac12,\frac12;2,3\right)&=\frac{\pi}{\sqrt{3}}{}_2 F_1\left({{\frac12,\frac12}\atop{1}}\middle|\frac13\right)\\&=\frac2{\sqrt{3}}K\left(\frac13\right)\end{align*}$$
where we have used the hypergeometric representation of the complete elliptic integral of the first kind.
answered Jul 30, 2019 at 9:55
J. M. isn't a mathematician
$\begingroup$ (Somewhat relatedly, I recently wrote a Mathematica package implementing the Carlson integrals. I have still been working on them on-and-off despite computer difficulties, so the identities remain fresh in my mind.) $\endgroup$
– J. M. isn't a mathematician
It seems to be known as the symmetric elliptic integrals of Carlson. Look in the NIST book, 19.15 and further; there are a lot of formulas in it. It seems you seek exactly formula 19.22.8 on page 505; note that in it $R_F$ is defined by 19.16.1-19.16.4 and the AGM is exactly the complete Legendre integral $K$, as you suggested.
Sergei
$\begingroup$ Thank you for the reference, it does indeed answer this question. However, the reference provided does not contain a proof for formula 19.22.8, which is dissatisfying to me $\endgroup$
$\begingroup$ They give the reference to the original book of Carlson with all proofs, please note it: B. C. Carlson (1977b). Special Functions of Applied Mathematics. New York: Academic Press. $\endgroup$
– Sergei
If we know spin isn't actually rotation, why do we still speak of intrinsic angular momentum? [duplicate]
Why can't I just think the spin as rotating? (3 answers)
The spin of an electron was classically thought of as arising from a spinning ball of charge. We know that that is not the case in the quantum picture, as the electron is pointlike.
So why, then, do we still describe quantum spin as "intrinsic angular momentum?" Why isn't it just "intrinsic magnetic moment," or something else?
angular-momentum electrons quantum-spin terminology elementary-particles
T3db0t
$\begingroup$ Because an electron does have an angular momentum - something that has been experimentally verified many times. $\endgroup$ – John Rennie Mar 18 '18 at 16:42
$\begingroup$ FWIW even an extended object like a simple diatomic molecule in quantum mechanics does not rotate classically despite what you may be thinking. In a state of definite angular momentum, the orientation is underdetermined and in fact blurry across the whole orientational sphere. All you can really say is at best there are some nodal planes where it is definitely not oriented along, at least in higher angular momentum states than the ground state. You can have "superposition" states (or at least I believe) where the orientation is more well-defined and spins around, but the tradeoff (cont'd) $\endgroup$ – The_Sympathizer Mar 19 '18 at 3:40
$\begingroup$ (cont'd) is there is less information in the angular momentum side of things. $\endgroup$ – The_Sympathizer Mar 19 '18 at 3:40
$\begingroup$ And of course like anything quantum, if you measure the angular momentum of the superpositioned molecule then it resets to a definite (randomized) angular-momentum state and now its orientation is back to being a total blur. $\endgroup$ – The_Sympathizer Mar 19 '18 at 3:42
$\begingroup$ Possible duplicate of Why can't I just think the spin as rotating? $\endgroup$ – sammy gerbil Mar 20 '18 at 3:24
There are two main reasons that we describe spin as an 'intrinsic angular momentum':
Because it is an angular momentum. This comes primarily from a fundamental level, in that angular momentum is always the generator for rotations and the Noether charge that is guaranteed to be conserved if the theory is independent of orientation, and more generally angular momentum is always canonically conjugate to orientation. In all of those aspects, when it comes to electrons, the role of angular momentum is played by spin.
Now, those sound like a lot of heavy-handed terms, but by them I mostly mean: the role of angular momentum in classical as well as quantum physics goes well beyond describing spinning balls of stuff, and the identification of spin as an angular momentum within that framework isn't much affected by the loss of one minor component of the description.
That said, though, there's plenty of experimental evidence that spin really is convertible into the regular mechanical angular momentum of spinning balls of stuff, from the Einstein-de Haas effect onwards. If you don't include spin into your system's total angular momentum, then your angular momentum books will be unbalanced.
Because it is intrinsic. Generally, if a system has linear momentum $\mathbf P$, we distinguish between
extrinsic angular momenta, which transform as $$ \mathbf L \mapsto \mathbf L' = \mathbf L + \mathbf r_0\times \mathbf P$$ when the origin of the frame of coordinates is displaced by $\mathbf r_0$, versus
intrinsic angular momenta, which are not affected by such a change.
The angular momentum of the Earth's orbital motion around the sun is of the former type, and the angular momentum of its rotation about its axis is of the latter; electrons' spin is of the second type.
Put those two components together, and the name "intrinsic angular momentum" is perfectly justified.
Emilio Pisanty
$\begingroup$ Is it safe to say the latter intrinsic angular momentum may be the same as a whirlpool, galaxy or maybe even a black hole? Either way it's rotation. Why can't spin be rotation? Who says can't to be and why? $\endgroup$ – Bill Alsept Mar 20 '18 at 22:36
$\begingroup$ @Bill This is not the place for a rant. $\endgroup$ – Emilio Pisanty Mar 20 '18 at 23:14
$\begingroup$ Wow! Someone's being sensitive?? My first sentence was a question or suggestion to see if I understood what you were saying. My second sentence was just a true statement. My last two were real questions. What the heck are you talking about rant?? $\endgroup$ – Bill Alsept Mar 20 '18 at 23:18
$\begingroup$ No, it is not safe to say they're the "same" because you haven't even said what you mean by "same". As to how you infer that "it's rotation" - that's not what I said. Anything else - take it elsewhere. $\endgroup$ – Emilio Pisanty Mar 20 '18 at 23:35
$\begingroup$ I meant the same as in your statement above where you said "Earths orbital motion around the sun". Sorry for the confusion i'm only trying to have a conversation. It just sounds like you're having a bad day. It is not I having the rant. $\endgroup$ – Bill Alsept Mar 20 '18 at 23:43
"angular momentum" is accepted because of the similarity with orbital angular momentum as explained below, and "intrinsic" means that we do not know what that "spin" actually is and where it comes from. And indeed, that's just a name, not meaning that there is really a rotation in the meaning of classical mechanics.
At the beginning, from the analogy that rotating charge generates a magnetic moment, people called that "spin angular momentum" and even imagined a real spin or rotation. That name has been accepted although they were wrong.
The reason for accepting the name, "intrinsic angular momentum", can be clarified in two side. On the one hand, spin has many same properties with orbital angular momentum, such as the commutation relation- the fundamental of quantum mechanics- between 3 spatial components, so it's "angular momentum". On the other, it's impossible for point particles to rotate in the meaning of Newton mechanics for many reasons and no experiment has discovered that rotation, so it's "intrinsic", like that the charge of a electron is "intrinsic", which often means that we do not know why it exists, where it comes from, and even what it is.
Actually, the spin can be deduced from Dirac's QM, but that's another thing. Anyway, that name, "intrinsic angular momentum", can be understood from the point view of history.
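As a small numerical illustration of the commutation-relation point made above (not part of the original answer), the spin-1/2 operators $S_i = \sigma_i/2$ obey the same algebra $[S_x, S_y] = i S_z$ (with $\hbar = 1$) as orbital angular momentum:

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def comm(a, b):
    return a @ b - b @ a

print(np.allclose(comm(sx, sy), 1j * sz))  # True
print(np.allclose(comm(sy, sz), 1j * sx))  # True
print(np.allclose(comm(sz, sx), 1j * sy))  # True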
The name is simply a question of technical English / language usage. The name's motivation is foremost that, even in the absence of an intuitive, everyday visual perception of "rotation", there is still a quantity that is conserved, by dint of Noether's theorem, given the invariance of a Lagrangian formulation of mechanics under transformation of space by the rotation group $\mathrm{SO}(3)$ and the latter's image under various representations (e.g. corresponding transformation groups on quantum state spaces).
So, from the abstract standpoint of Noether's theorem, the root of the phenomenon is still exactly the same as that of more "everyday" angular momentum, such as of a skier or an acrobat that just happens to be accompanied by a certain visual experience, so why call it something else?
Summarize this answer by asking yourself, "What if we had evolved as unsighted but clever beings? Should we still have a notion of angular momentum if we couldn't see?". Indeed, through Noether's theorem, we most assuredly should, although it might not have an "everyday" analogy as it does for sighted creatures.
WetSavannaAnimal
The electron has a non-zero amount of mechanical angular momentum. This is demonstrated in a setup that uses the Einstein-de Haas effect. (I will abbreviate that to 'E-dH effect'.)
In an answer to a Stackexchange question about the E-dh effect contributor Gary Godfrey has described the usual setup as follows: "The spins of all the electrons in the cylinder are aligned by the magnetic field from the coil. Then the field is reversed so the electron spins line up the other way. This imparts angular momentum to the cylinder for each electron flipped. The effect on the cylinder is very small, so the flipping is repeated many times at the torsional resonant frequency of the cylinder on the fiber. This pumps the resonance up to some maximum deflection. Using the fiber spring constant, the fiber damping coefficient, and the moment of inertia of the cylinder you can calculate how much angular momentum per flip is being transferred to the cylinder."
(There is a 17-second YouTube video, uploaded by the University of Osnabrück physics department, showing their E-dH effect setup in action.)
About electron size:
With experiments that bring out the particle-like behavior of electrons one can arrive at an upper bound for the size of the electron-as-a-particle. As you are referring to in your question, that upper bound was found to be smaller than the minimum size that would be necessary to explain the magnetic moment as being generated by spinning in the classical sense. That doesn't necessarily mean the electron is point-like. It just says: too small to be compatible with the classical explanation.
Cleonis
One reason is by analogy - the algebra describing the spin of the particle coincides with that describing angular momentum (which a point-like particle can also have, independently of its spin).
$\begingroup$ Good answer, but I think you need to name the algebras and mathematical structures in question $\endgroup$ – WetSavannaAnimal Mar 20 '18 at 1:16
The assignment of spins to particles preserves conservation of momentum; it makes the mathematical theories consistent with observations.
For example, the assignment of spin 1 to photons in e+e- --> gamma gamma:
The Born cross-section for the process e+e- -> gamma gamma (gamma) was determined, confirming the validity of QED at the highest energies ever attained in electron-positron collision
The calculations would not fit the data with 0 intrinsic angular momentum (spin) contributions from the photons.
anna v
I am not sure "spin is not actually rotation", and when we say that "the electron is pointlike", it probably should not be taken literally. Both statements ("no rotation" and "pointlike") are problematic because of the uncertainty principle.
akhmeteli
March 2011, 15(2): 325-341. doi: 10.3934/dcdsb.2011.15.325
An optimal-order error estimate for a family of characteristic-mixed methods to transient convection-diffusion problems
Huan-Zhen Chen 1, , Zhao-Jie Zhou 1, , Hong Wang 2, and Hong-Ying Man 3,
School of Mathematical Sciences, Shandong Normal University, Jinan 250014, China, China
Department of Mathematics, University of South Carolina, Columbia, South Carolina 29208, United States
Department of Mathematics, Beijing Institute of Technology, Beijing 100081, China
Received February 2010 Revised April 2010 Published December 2010
In this paper we prove an optimal-order error estimate for a family of characteristic-mixed methods with arbitrary degree of mixed finite element approximations for the numerical solution of transient convection-diffusion equations. This paper generalizes the results in [1, 61]. The proof of the main results is carried out via three lemmas, which are utilized to overcome the difficulties arising from the combination of the MMOC and mixed finite element methods. Numerical experiments are presented to justify the theoretical analysis.
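For readers unfamiliar with the characteristic-tracking idea behind such schemes, the following is only a rough, self-contained 1D sketch of a modified-method-of-characteristics (MMOC) step for a model advection-diffusion equation, assuming NumPy is available; it is not the scheme analyzed in the paper, which couples the MMOC with mixed finite elements.

import numpy as np

# Model problem: u_t + v u_x = D u_xx on [0, 1), periodic boundary conditions.
# One MMOC step: (1) trace each grid point back along the characteristic
# dx/dt = v and interpolate u^n there; (2) backward-Euler diffusion solve.
def mmoc_step(u, x, v, D, dt):
    n = x.size
    dx = x[1] - x[0]
    x_foot = (x - v * dt) % 1.0                   # feet of the characteristics
    u_star = np.interp(x_foot, x, u, period=1.0)  # u^n evaluated at the feet
    A = np.eye(n) * (1.0 + 2.0 * D * dt / dx**2)  # (I - dt*D*Laplacian), periodic
    off = -D * dt / dx**2
    for i in range(n):
        A[i, (i - 1) % n] = off
        A[i, (i + 1) % n] = off
    return np.linalg.solve(A, u_star)

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)               # initial pulse
for _ in range(100):
    u = mmoc_step(u, x, v=1.0, D=1e-4, dt=0.005)  # pulse advects and spreads slightly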
Keywords: characteristic-mixed methods, mixed finite element methods, transient convection-diffusion problems, optimal-order error estimate.
Mathematics Subject Classification: Primary: 65N30, 65N15; Secondary: 76M1.
Citation: Huan-Zhen Chen, Zhao-Jie Zhou, Hong Wang, Hong-Ying Man. An optimal-order error estimate for a family of characteristic-mixed methods to transient convection-diffusion problems. Discrete & Continuous Dynamical Systems - B, 2011, 15 (2) : 325-341. doi: 10.3934/dcdsb.2011.15.325
T. Arbogast and M. F. Wheeler, A characteristics-mixed finite element method for advection-dominated transport problems, SIAM J. Numer. Anal., 32 (1995), 404. doi: 10.1137/0732017.
D. N. Arnolds, L. R. Scott and M. Vogelus, Regular inversion of the divergence operator with Dirichlet boundary conditions on a polygonal, Ann. Scuola. Norm. Sup. Pisa, (1988), 169.
M. Bause and P. Knabner, Uniform error analysis for Lagrange-Galerkin approximations of convection-dominated problems, SIAM J. Numer. Anal., 39 (2002), 1954. doi: 10.1137/S0036142900367478.
J. P. Benque and J. Ronat, Quelques difficulties des modeles numeriques en hydraulique, Comp. Meth. Appl. Mech. Engrg., (1982), 471.
P. J. Binning and M. A. Celia, A finite volume Eulerian-Lagrangian localized adjoint method for solution of the contaminant transport equations in two-dimensional multi-phase flow systems, Water Resour. Res., 32 (1996), 103. doi: 10.1029/95WR02763.
F. Brezzi, On the existence, uniqueness and approximation of saddle-point problems arising from Lagrangian multipliers, RAIRO Anal. Numér., 8 (1974), 129.
F. Brezzi and M. Fortin, "Mixed and Hybrid Finite Element Methods," Springer Series in Computational Mathematics, 15 (1991).
M. A. Celia, T. F. Russell, I. Herrera and R. E. Ewing, An Eulerian-Lagrangian localized adjoint method for the advection-diffusion equation, Advances in Water Resources, 13 (1990), 187. doi: 10.1016/0309-1708(90)90041-2.
Z. Chen, Characteristic mixed discontinuous finite element methods for advection-dominated diffusion problems, Comput. Methods Appl. Mech. Engrg., 191 (2002), 2509. doi: 10.1016/S0045-7825(01)00411-X.
Z. Chen, S.-H. Chou and D. Y. Kwak, Characteristic-mixed covolume methods for advection-dominated diffusion problems, Numerical Linear Algebra with Applications, 13 (2006), 677. doi: 10.1002/nla.492.
P. G. Ciarlet, "The Finite Element Method for Elliptic Problems," Studies in Mathematics and its Applications, 4 (1978). doi: 10.1016/S0168-2024(08)70178-4.
H. K. Dahle, R. E. Ewing and T. F. Russell, Eulerian-Lagrangian localized adjoint methods for a nonlinear convection-diffusion equation, Comp. Meth. Appl. Mech. Engrg., 122 (1995), 223. doi: 10.1016/0045-7825(94)00733-4.
C. N. Dawson, T. F. Russell and M. F. Wheeler, Some improved error estimates for the modified method of characteristics, SIAM J. Numer. Anal., 26 (1989), 1487. doi: 10.1137/0726087.
J. Douglas Jr., F. Furtado and F. Pereira, On the numerical simulation of water flooding of heterogeneous petroleum reservoirs, Comput. Geosci., 1 (1997), 155. doi: 10.1023/A:1011565228179.
J. Douglas, Jr., C.-S. Huang and F. Pereira, The modified method of characteristics with adjusted advection, Numer. Math., 83 (1999), 353. doi: 10.1007/s002110050453.
J. Douglas, Jr. and T. F. Russell, Numerical methods for convection-dominated diffusion problems based on combining the method of characteristics with finite element or finite difference procedures, SIAM J. Numer. Anal., 19 (1982), 871. doi: 10.1137/0719063.
M. S. Espedal and R. E. Ewing, Characteristic Petrov-Galerkin subdomain methods for two-phase immiscible flow, Proceedings of the first world congress on computational mechanics (Austin), 64 (1987), 113.
L. C. Evans, "Partial Differential Equations," Graduate Studies in Mathematics, 19 (1998).
R. E. Ewing (Ed.), "The Mathematics of Reservoir Simulation," Research Frontiers in Applied Mathematics 1, (1984).
R. E. Ewing, T. F. Russell and M. F. Wheeler, Convergence analysis of an approximation of miscible displacement in porous media by mixed finite elements and a modified method of characteristics, Comput. Methods Appl. Mech. Engrg., 47 (1984), 73. doi: 10.1016/0045-7825(84)90048-3.
A. O. Garder, D. W. Peaceman and A. L. Pozzi, Numerical calculations of multidimensional miscible displacement by the method of characteristics, Soc. Pet. Eng. J., 4 (1964), 26.
D. Gilbarg and N. S. Trudinger, "Elliptic Partial Differential Equations of Second Order," Second edition, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], (1983).
R. W. Healy and T. F. Russell, A finite-volume Eulerian-Lagrangian localized adjoint method for solution of the advection-dispersion equation, Water Resour. Res., 29 (1993), 2399. doi: 10.1029/93WR00403.
R. W. Healy and T. F. Russell, Solution of the advection-dispersion equation in two dimensions by a finite-volume Eulerian-Lagrangian localized adjoint method, Adv. Water Res., 21 (1998), 11.
J. M. Hervouet, Applications of the method of characteristics in their weak formulation to solving two-dimensional advection-equations on mesh grids, in, (1986), 149.
C. Johnson and V. Thomée, Error estimates for some mixed finite element methods for parabolic type problems, RAIRO Anal. Numer., 15 (1981), 41.
X. Li, W. Wu and O. C. Zienkiewicz, Implicit characteristic Galerkin method for convection-diffusion equations, Int. J. Numer. Meth. Engrg., 47 (2000), 1689. doi: 10.1002/(SICI)1097-0207(20000410)47:10<1689::AID-NME850>3.0.CO;2-W.
K. W. Morton, A. Priestley and E. Süli, Stability of the Lagrangian-Galerkin method with nonexact integration, RAIRO Model. Math. Anal. Num., 22 (1988), 625.
J. C. Nédélec, A new family of mixed finite elements in $\mathbf R^3$, Numerische Mathematik, 50 (1986), 57. doi: 10.1007/BF01389668.
S. P. Neuman, An Eulerian-Lagrangian numerical scheme for the dispersion-convection equation using conjugate space-time grids, J. Comp. Phys., 41 (1981), 270. doi: 10.1016/0021-9991(81)90097-8.
D. W. Peaceman, "Fundamentals of Numerical Reservoir Simulation," Elsevier, (1977).
G. F. Pinder and H. H. Cooper, A numerical technique for calculating the transient position of the saltwater front, Water Resou. Res., (1970), 875.
O. Pironneau, On the transport-diffusion algorithm and its application to the Navier-Stokes equations, Numer. Math., 38, 309. doi: 10.1007/BF01396435.
P. A. Raviart and J. M. Thomas, A mixed finite element method for 2nd order elliptic problems, Mathematical Aspects of the Finite Element Method, 606 (1975), 292.
H.-G. Roos, M. Stynes and L. Tobiska, "Numerical Methods for Singularly Perturbed Differential Equations," Convection-Diffusion and Flow Problems, (1996).
E. Varoglu and W. D. L. Finn, Finite elements incorporating characteristics for one-dimensional diffusion-convection equation, J. Comput. Phys., 34 (1980), 371. doi: 10.1016/0021-9991(80)90095-9.
H. Wang, A family of ELLAM schemes for advection-diffusion-reaction equations and their convergence analyses, Numerical Methods for PDEs, 14 (1998), 739.
H. Wang, An optimal-order error estimate for an ELLAM scheme for two-dimensional linear advection-diffusion equations, SIAM J. Numer. Anal., 37 (2000), 1338. doi: 10.1137/S0036142998335686.
H. Wang, An optimal-order error estimate for MMOC and MMOCAA schemes for multidimensional advection-reaction equations, Numerical Methods for PDEs, 18 (2002), 69.
H. Wang, An optimal-order error estimate for a family of ELLAM-MFEM approximations to porous medium flow, SIAM J. Numer. Anal., 46 (2008), 2133. doi: 10.1137/S0036142903428281.
H. Wang and M. Al-Lawatia, A locally conservative Eulerian-Lagrangian control-volume method for transient advection-diffusion equations, Numerical Methods for Partial Differential Equations, 22 (2005), 577. doi: 10.1002/num.20106.
H. Wang, H. K. Dahle, R. E. Ewing, M. S. Espedal, R. C. Sharpley and S. Man, An ELLAM scheme for advection-diffusion equations in two dimensions, SIAM J. Sci. Comput., 20 (1999), 2160. doi: 10.1137/S1064827596309396.
H. Wang, R. E. Ewing, G. Qin and S. L. Lyons, "An Eulerian-Lagrangian Formulation for Compositional Flow in Porous Media," The 2006 Society of Petroleum Engineering Annual Technical Conference in San Antonio, (2006), 24.
H. Wang, R. E. Ewing, G. Qin, S. L. Lyons, M. Al-Lawatia and S. Man, A family of Eulerian-Lagrangian localized adjoint methods for multi-dimensional advection-reaction equations, J. Comput. Phys., 152 (1999), 120. doi: 10.1006/jcph.1999.6239.
H. Wang, R. E. Ewing and T. F. Russell, Eulerian-Lagrangian localized methods for convection-diffusion equations and their convergence analysis, IMA J. Numer. Anal., 15 (1995), 405. doi: 10.1093/imanum/15.3.405.
H. Wang, X. Shi and R. E. Ewing, An ELLAM scheme for multidimensional advection-reaction equations and its optimal-order error estimate, SIAM J. Numer. Anal., 38 (2001), 1846. doi: 10.1137/S0036142999362389.
H. Wang and K. Wang, Uniform estimates for Eulerian-Lagrangian methods for singularly perturbed time-dependent problems, SIAM J. Numer. Anal., 45 (2007), 1305. doi: 10.1137/060652816.
K. Wang, A uniformly optimal-order error estimate of an ELLAM scheme for unsteady-state advection-diffusion equations, International Journal of Numerical Analysis and Modeling, 5 (2008), 286.
K. Wang, An optimal-order estimate for MMOC-MFEM approximations to porous medium flow, Numer. Methods for Partial Differential Equations, 25 (2008), 1283. doi: 10.1002/num.20397.
K. Wang, A uniform optimal-order estimate for an Eulerian-Lagrangian discontinuous Galerkin method for transient advection-diffusion equations, Numer. Methods for Partial Differential Equations, 25 (2009), 87. doi: 10.1002/num.20338.
K. Wang and H. Wang, A uniform estimate for the ELLAM scheme for transport equations, Numer. Methods for PDEs, 24 (2008), 535.
K. Wang and H. Wang, An optimal-order error estimate to the modified method of characteristics for a degenerate convection-diffusion equation, International Journal of Numerical Analysis and Modeling, 6 (2009), 217.
K. Wang and H. Wang, A uniform estimate for the MMOC for two-dimensional advection-diffusion equations, Numer. Methods for PDEs, 26 (2010), 1054.
K. Wang, H. Wang and M. Al-Lawatia, An Eulerian-Lagrangian discontinuous Galerkin method for transient advection-diffusion equations, Numer. Methods for Partial Differential Equations, 23 (2007), 1343. doi: 10.1002/num.20223.
K. Wang, H. Wang and M. Al-Lawatia, A CFL-free explicit characteristic interior penalty scheme for linear advection-reaction equations, Numer. Methods for PDEs, 26 (2010), 561.
K. Wang, H. Wang, M. Al-Lawatia and H. Rui, A family of characteristic discontinuous Galerkin methods for transient advection-diffusion equations and their optimal-order $L^2$ error estimates, Commun. Comput. Phys., 6 (2009), 203. doi: 10.4208/cicp.2009.v6.p203.
M. F. Wheeler and C. N. Dawson, An operator-splitting method for advection-diffusion-reaction problems, MAFELAP Proceedings, (1988), 463.
L. Wu and H. Wang, An Eulerian-Lagrangian single-node collocation method for transient advection-diffusion equations in multiple space dimensions, Numerical Methods for Partial Differential Equations, 20 (2004), 284. doi: 10.1002/num.10094.
L. Wu, H. Wang and G. F. Pinder, A nonconventional Eulerian-Lagrangian single-node collocation method with Hermite polynomials for unsteady-state advection-diffusion equations, Numerical Methods for PDEs, 19 (2003), 271.
L. Wu and K. Wang, A single-node characteristic collocation method for unsteady-state convection-diffusion equations in three-dimensional spaces, Numerical Methods for PDEs, doi: 10.1002/num.20552.
D. Yang, A characteristic mixed method with dynamic finite-element space for convection-dominated diffusion problems, J. Computational and Applied Mathematics, 43 (1992), 343. doi: 10.1016/0377-0427(92)90020-X.
A study on use of animals as traditional medicine by Sukuma Tribe of Busega District in North-western Tanzania
Rajeev Vats1 &
Simion Thomas1
Faunal resources have played an extensive range of roles in human life from the initial days of recorded history. Beyond their practical importance, animals have been acknowledged in religion, art, music, literature and several other cultural manifestations of mankind. Human beings have been acquainted with the use of animals for food, clothing, medicine, etc. since ancient times. A large body of work has been carried out on ethnobotany and traditional medicine. Like plants, animals and their products also hold medicinal properties that can be exploited for the benefit of human beings. In Tanzania, many tribal communities are spread all over the country, and these people still depend entirely on the local customary medicinal system for their health care. Tanzania is gifted with a wide range of floral and faunal biodiversity. The aim of the present study is to document the use of animal-based traditional medicine by the Sukuma ethnic group of Busega district.
In order to collect information on the ethnozoological uses of animals and their products prevalent among this tribe in Busega district, a study was carried out from August 2012 to July 2013. Data were collected through semi-structured questionnaires and open interviews with 180 (118 male and 62 female) selected people. The people from whom the data were collected comprised elderly community members, traditional health practitioners, fishermen and cultural officers. The names of animals and other ethnozoological information were documented. Pictures and discussions were also recorded with the help of a camera and a voice recorder.
A total of 42 animal species were used for nearly 30 different medicinal purposes, including treatment of STDs, stoppage of bleeding, reproductive disorders, asthma, weakness, tuberculosis, cough, paralysis and wounds, and for other religious beliefs. The animals used by the Sukuma tribe comprise seventeen mammals, seven birds, four reptiles, eight arthropods and two mollusks. Some protected species were also used as important medicinal resources. We also found that cough, tuberculosis, asthma and other respiratory diseases are the most frequently cited ailments and, as such, a number of traditional medicines are available for their treatment.
The present work indicates that 42 animal species are used to treat nearly 30 different ailments, and the results show that ethnozoological practices are an important alternative medicinal practice of the Sukuma tribe living in Busega district. The present study also indicates the very rich ethnozoological knowledge of these people in relation to traditional medicine. There is therefore a critical need to properly document this ethnozoological information. We hope that the information generated in this study will be useful for further research in the fields of ethnozoology, ethnopharmacology and conservation.
Faunal resources have played a wide range of roles in human life from the earliest days of recorded history. Human beings have been familiar with the use of animals and plants for food, clothing, medicine, etc. since ancient times [1,2]. The study of the relationships between human societies and the animal resources around them falls under ethnozoology [3]. Since prehistoric times animals, their parts and their products have formed part of the inventory of medicinal substances used in numerous cultures [4]. The World Health Organization estimates that most of the world's population relies primarily on animal- and plant-based medicines [5]. Of the 252 indispensable chemicals that have been selected by the World Health Organization, 8.7% are derived from animals [6]. In Brazil, Alves et al. reported the medicinal use of 283 animal species for the treatment of various ailments [7]. In Bahia state, in the northeast of Brazil, over 180 medicinal animals have been recorded in traditional health care practices [8]. In Traditional Chinese Medicine more than 1500 animal species have been recorded as having some medicinal use [9]. Alves and Rosa recorded the use of 97 animal species as traditional medicine in urban areas of NE and N Brazil [10]. Lev and Amar conducted a survey in selected markets of Israel and found 20 animal species whose products were sold as traditional drugs [11]. The Tamang people of Nepal use 11 animal species for zootherapeutic purposes [12]. Alves and Rosa carried out a survey in fishing communities in the north and north-east regions of Brazil and recorded 138 animal species used as traditional medicine [13]. Alves et al. also reported that nearly 165 reptile species are used in traditional folk medicine around the world [14]. Alves conducted a review study in Northeast Brazil and listed 250 animal species used for the treatment of diverse ailments [15]. Lev and Amar conducted a study in selected markets in the Kingdom of Jordan and identified 30 animal species whose products were retailed as traditional medications [16]. In India the use of traditional medicine is documented in works like the Ayurveda and the Charaka Samhita. A number of animals are mentioned in the Ayurvedic system, including 41 mammals, 41 birds, 16 reptiles, 21 fishes and 24 insects [17]. Different ethnic groups and tribal people in present-day India use animals and their products in healing practices for human ailments [18]. In the Hindu religion, people have used various products obtained from the cow, viz. milk, urine, dung, curd and ghee, since ancient times [19].
Tanzania is gifted with immense faunal and floral biodiversity because of the striking variation in geographical and climatic conditions prevailing in the country. In Tanzania, traditional medicine has existed since before colonial times. It used to play a vital role in the doctrine of the chiefdoms that existed during the pre-colonial era. Colonialists, with their intention to rule Africa, had to find ways to discourage all sorts of activities which would have provided an opportunity for developing Africans [20]. In Tanzania, different tribal communities are dispersed all over the country; the people of these communities are extremely knowledgeable about animals and their medicinal value, and they can provide extensive information about the use of animals and their by-products as medicine. Most of the tribal people are totally dependent on the local traditional medicinal system for their health care because they live in very remote areas where hospitals and other modern medical facilities are scarce or absent, so they use their traditional knowledge for medicinal purposes, and this knowledge is passed on through oral communication from generation to generation. It is estimated that more than 80% of the rural population in Tanzania depends on traditional medicine [21].
A lot of work has been done on the utilization of plants and their products as traditional and allopathic medicine in the world. Like plants, animals and their products also possess medicinal properties [22]. Most ethnobiological studies conducted in Tanzania have focused on traditional knowledge of plants rather than animals [23,24]. Little work has been done on ethnozoology in Tanzania, and in particular no work is documented on the Sukuma tribe; there is a definite scarcity of ethnobiological knowledge when it comes to animal products. The present study briefly reports an ethnomedicinal/traditional medicinal study among the Sukuma tribe in Busega district in Tanzania.
The study area
The study was carried out in Busega District in Simiyu region. Busega district is one of the five districts of Simiyu Region of Tanzania, namely Meatu, Itilima, Bariadi, Maswa and Busega. Busega district is located in the northwestern part of Simiyu Region and shares borders with Magu district to the west and Bariadi district to the south. The southeastern part is covered by the Serengeti game reserve and Bunda district. To the north it borders Lake Victoria. As a result, many community members utilize both aquatic and terrestrial organisms as a source of medicine.
Busega district is located between latitudes 2°10' and 2°50' South and between longitudes 33° and 34° East. The district headquarters is in Nyashimo town. The district is divided into thirteen (13) wards and fifty-four (54) villages as per the Tanzania Population and Housing Census 2012 [25]. The climate of Busega district is tropical, with the sun overhead at the equator in March and October. Temperatures range between 25°C and 30°C, with an average annual temperature of 27°C. There are two wet seasons: the long rains from mid-March to early June, during which precipitation is between 700 mm and 1000 mm and averages 800 mm per annum, and the short rains from October to December, during which rainfall is between 400 mm and 500 mm [26]. Figure 1: Map of the study area.
Map of Simiyu region showing all districts in the region, including Busega District (Wilaya ya Busega).
The Sukuma tribe
The Sukuma are a patrilineal society; the role of the women is to take care of their husbands and children, while the men are the overseers of the family [27,28]. Young people marry only when they are ready to carry the responsibilities marriage entails. They are initiated into adulthood in a ceremony known as "lhane". The Sukuma do not practice circumcision as part of initiation, but organize a separate ceremony. The young people involved in "lhane" have to be prepared well. Respected elders of the community tutor the initiates on their roles and responsibilities in the family and the whole community. The initiates have to think, act and participate as adults in all rituals. After "lhane" the initiates are considered adults and cannot be asked to deliver messages anywhere, as this is a job for non-initiates [28].
The Sukuma are believed to be very superstitious, and most will seek aid from the "Bafumu", "Balaguzi" and "Basomboji", local terms for medicine men, diviners and soothsayers, respectively. The Basukuma have many stories based on their beliefs about death and suffering. Traditional healers believe that fate is determined by "Shing'wengwe" and "Shishieg'we", that is, ogres and spirits. The ogres are usually depicted as being half human, half demon, or as terrible monsters [28]. The economic condition of the Sukuma people is not good. Agriculture, animal husbandry, poultry farming and laboring are the sources of income. The educational level is also found to be very low. The life of the people is full of traditions and social customs from birth to death, owing to outdated customs not attuned to remaining competitive in the current economic scenario of privatization [Figures 2, 3, 4, 5].
Sukuma lady doing traditional prayer.
Ancestral shrines in a rural Sukuma healer's compound.
Sukuma lady with her children and traditional house.
Traditional healers selling medicines in local market.
In order to obtain ethnozoological information about animals and animal products used in traditional medicine, a study was conducted from August 2012 to July 2013 in the Busega district of Simiyu Region, Tanzania. The ethnomedicinal data (local names of animals, mode of preparation and administration) were collected through semi-structured questionnaires (in the local language, mainly Kiswahili, with the help of a local mediator), interviews and group discussions with selected members of the tribe. Informants were selected on the basis of their experience, their recognition as experts and their knowledge, as elders, of traditional medicine. A total of 180 people (118 male and 62 female) were selected, comprising local traditional healers, farmers, fishermen and a cultural officer. We interviewed 98 informants (55%) aged 55 and above, followed by 42 (23%) in the 45–54 age group and 40 (22%) in the 35–44 age group.
They were asked about the illnesses treated with animal-based medicines and the manner in which the medicines were prepared and administered. They were also asked for detailed information on the preparation and blending of animal products used as ingredients, and on whether they use animals in their healing practice, since this type of information indicates how a given medicine can be therapeutically effective in terms of the right ingredients, the proper dose and the right length of medication. The names of the animals and other information relevant to this study were documented. Some pictures of Sukuma people, their homes and their way of life in the study area were taken.
According to the informants, their traditional ethnozoological knowledge was mainly acquired through parental heritage and through experience of the medicinal value of animals in healing their families or themselves. The scientific names and species of the animals were identified using relevant standard literature [29,30].
For the data analysis, the fidelity level (FL), which expresses the percentage of respondents claiming the use of a certain animal species for the same illness, was calculated for the most frequently reported diseases or ailments as:
$$ \mathrm{FL}(\%) = \frac{N_p \times 100}{N} $$
where $N_p$ is the number of respondents claiming the use of a species to treat a specific disease, and $N$ is the number of respondents citing that species as a medicine for any disease [31]. The fidelity level ranges from 1% to 100%. A high value (close to 100%) shows that a particular animal species is used for the same purpose by a large number of respondents, while a low value shows that the respondents disagree about the use of that species in the treatment of ailments.
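Read as code, the fidelity-level formula is a straightforward ratio. The short Python sketch below is ours, not part of the original study, and the respondent counts in the example are hypothetical:

def fidelity_level(np_specific, n_total):
    """FL(%) = Np * 100 / N, as defined above."""
    if n_total <= 0:
        raise ValueError("N must be a positive number of respondents")
    return np_specific * 100.0 / n_total

# Hypothetical example: 27 of the 30 respondents who use a given species
# medicinally cite it specifically for cough.
print(fidelity_level(27, 30))  # 90.0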
Results and discussion
The present study reveals the traditional medicinal knowledge used by the local Sukuma people of Simiyu Region, Tanzania, to treat many types of ailments with different animals and animal products. Many people of the older generation lack formal education but are knowledgeable about the use of local faunal and floral resources for traditional medicine and other purposes [12]; the Sukuma people are one such group [Table 1].
Table 1 Knowledge of animal resource use among Sukuma Tribe of Busega District
Table 1 shows that the Sukuma people of Busega district use 42 animal species for the treatment of over 30 different kinds of illness. The species used as traditional medicine comprise seventeen mammals, seven birds, four reptiles, four fishes, eight arthropods and two mollusc species. The highest number of animals belonged to the mammalian taxonomic group (n = 17, 41%), followed by arthropods (n = 8, 19%), birds (n = 7, 17%), reptiles (n = 4, 9.5%) and fishes (n = 4, 9.5%). The Sukuma use these animals and their products for the treatment of more than 30 different illnesses, including asthma, paralysis, cough, fever, cold, sexually transmitted diseases and wounds. The animals are used whole, or as byproducts such as milk, blood, organs, flesh, teeth, urine, honey and feathers, in the preparation of traditional medicines [Figures 6, 7, 8, 9, 10, 11, 12, 13].
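As a quick arithmetic check of this breakdown (the species counts are those quoted above; the script itself is ours), the per-taxon shares of the 42 species can be recomputed as follows:

counts = {"mammals": 17, "birds": 7, "reptiles": 4, "fishes": 4,
          "arthropods": 8, "molluscs": 2}
total = sum(counts.values())  # 42 species in all
for taxon, n in counts.items():
    print(f"{taxon}: {n} species, {100 * n / total:.1f}%")
# mammals ~40.5%, birds ~16.7%, reptiles ~9.5%, fishes ~9.5%,
# arthropods ~19.0%, molluscs ~4.8% (figures in the text are rounded)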
Threskiornis aethiopicus.
Butastur rufipennis.
Agama mwanzae.
Trigoniulus corallinus.
Skin of Panthera leo.
Dried Mormyrus kannume.
Achatina fulica shell.
Dried Asterias sp.
Fidelity levels (FL) express the percentage of respondents claiming the use of a certain animal for curing a given illness. Uses of animals that are widely known among the Sukuma respondents have higher fidelity levels, as shown in Table 1.
Table 1 also shows that cough, tuberculosis, asthma and other respiratory diseases are the most frequently cited illnesses among the Sukuma people, and a number of traditional medicines are available for their treatment; animal byproducts such as gazelle flesh, rhino horn, mongoose claw and honey are among those used. Another important aspect of the present study is that the Sukuma people also use some endangered, vulnerable and near-threatened animal species as medicinal resources. Of the 42 identified animal species, 12 (28.57%) are included in the IUCN Red Data list [32]. It is important to mention that species such as the Tanzanian woolly bat and the grey crowned crane are listed as endangered, the black rhino and the Victoria tilapia as critically endangered, and the hippopotamus, African elephant, lion ("Simba", Panthera leo) and cobra (Naja siamensis) as vulnerable in the IUCN Red Data list. These tribal people have limited formal knowledge, and many irrational beliefs and myths associated with their customs cause harm to animal life. Thus these traditional medicines and animal byproducts should be tested for their appropriate medicinal components, since the cited animal species, and their byproducts, are used by these people in the treatment of various illnesses.
The Sukuma also combine one animal product with other animal products or plant derivatives. Where such use of threatened species is found indefensible, the people should be made aware of the endangered and protected animal species and of their importance for biodiversity. Consequently, the socio-ecological system has to be strengthened through sustainable management and conservation of biodiversity [33] [Table 2].
Table 2 Conservation status of animals utilized in traditional medicine
The main threats to conservation in Tanzania include overexploitation of natural resources due to poverty, rapid human population growth, weak wildlife policy and legislation, habitat alteration and inadequate funding. Poaching, or the illegal take of wildlife resources, has continued regardless of wildlife conservation laws; however, traditional hunters in Tanzania have not been a serious threat to wildlife. Wildlife populations are threatened by commercial poaching, in which animals are used in the bushmeat trade and in traditional medicine [34]. Besides medicinal purposes, the Sukuma people also use animal resources for other purposes in their daily life. The Sukuma use slough (the moulted skin of various animals) to decorate their traditional houses, a type of decoration also reported among many other tribes living in other parts of Tanzania [Figures 14, 15, 16, 17].
Different products obtained from animal resources among the Sukuma Tribe.
The current study shows that forty-two animals are used among the Sukuma tribe of Busega district. Twelve animal species officially considered threatened on the IUCN Red List (2012) were found among the faunistic resources prescribed as medicines at the time of this research. The latter author noted that Sukuma healers who are also diviners are more likely to use both wild and domesticated animals in their diagnoses. Mammals, reptiles, birds, fish and amphibians have all been used in traditional medicine for different purposes; however, mammals are used much more (40.50%) than other groups among the Sukuma, followed by birds (16.7%). Amphibians are not commonly used in Sukuma society.
The present study also shows that the Sukuma people have very rich folklore and traditional knowledge in the utilization of different animals, so there is an urgent need to properly document and keep a record of the ethnomedicinal data on animal products and their medicinal uses. More studies are a prerequisite for scientific validation of the medicinal value of such products and for including this knowledge in policies for the conservation and management of animal resources. We hope that the present information will be helpful for further research in ethnozoology and ethnopharmacology and from a biodiversity conservation viewpoint.
Alves RRN. Relationships between fauna and people and the role of ethnozoology in animal conservation. Ethnobiol Conserv. 2012;1(2):1–69.
Judith H: Information Resources on Human-Animal Relationships Past and Present. AWIC (Animal Welfare Information Center). Resource Series No. 30 2005
Lohani U, Rajbhandari K, Shakuntala K. Need for systematic ethnozoological studies in the conservation of ancient knowledge system of Nepal - a review. Indian J Tradit Knowl. 2008;7(4):634–7.
Lev E. Traditional healing with animals (zootherapy): medieval to present-day Levantine practice. J Ethnopharmacol. 2003;85:107–18.
WHO/IUCN/WWF: Guidelines on Conservation of Medicinal Plants. Switzerland 1993.
Marques JGW. Fauna medicinal: recurso do ambiente ou ameaça à biodiversidade? Mutum. 1997;1(1):4.
Alves RRN, Rosa IL, Santana GG. The role of animal-derived remedies as complementary medicine in Brazil. BioScience. 2007;57(11):949–55.
Costa-Neto EM. Implications and applications of folk zootherapy in the state of Bahia, Northeastern Brazil. Sustain Dev. 2004;12(3):161–74.
China National Corporation of Traditional and Herbal Medicine. Materia medica commonly used in China Beijing. China Beijing: Science Press; 1995.
Alves RRN, Rosa IL. Zootherapy goes to town: The use of animal-based remedies in urban areas of NE and N Brazil. J Ethnopharmacol. 2007;113:541–55.
Lev E, Amar Z. Ethnopharmacological survey of traditional drugs sold in Israel at the end of the 20th century. J Ethnopharmacol. 2000;72:191–205.
Tamang G. An ethnozoological study of the Tamang people. Our Nat. 2003;1:37–41.
Alves RRN, Rosa IL. Zootherapeutic practices among fishing communities in North and Northeast Brazil: a comparison. J Ethnopharmacol. 2007;111:82–103.
Alves RRN, Vieira WL, Santana GG. Reptiles used in traditional folk medicine: conservation implications. Biodivers Conserv. 2008;17(1):2037–49.
Alves RRN. Fauna used in popular medicine in Northeast Brazil. J Ethnobiol Ethnomed. 2009;5:1.
Lev E, Amar Z. Ethnopharmacological survey of traditional drugs sold in the Kingdom of Jordan. J Ethnopharmacol. 2002;82:131–45.
Tripathy BB. Drabya Guna Kalpa Druma, Orissa (Vols. I & II). Publ. D.P. Tripathy, Bellaguntha (Ganjam District), 5 1995
Jaroli DP, Mahawar MM, Vyas N. An ethnozoological study in the adjoining areas of Mount Abu wildlife sanctuary. J Ethnobiol Ethnomed. 2010;6(6):1–9.
Simoons FJ. The purification rule of the five products of the cow in Hinduism. Ecol Food Nutr. 1974;3:21–34.
Mbwambo ZH, Mahunnah RA, Kayombo EJ. Traditional health practitioner and the scientist: bridging the gap in contemporary health research in Tanzania. Tanzania Health Res Bull. 2007;9(2):115–20.
Traditional Medicine Strategy 2002-05, WHO/EDM/TRM2002.1, 2002, WHO, Geneva, Switzerland
Oudhia P. Traditional knowledge about medicinal insects, mites and spiders in Chhattisgarh. India: Insect Environment; 1995.
Kisangau DP, Lyaruu HVM, Hosea KM, Joseph CC. Use of traditional medicines in the management of HIV/AIDS opportunistic infections in Tanzania: a case in the Bukoba rural district. J Ethnobiol Ethnomed. 2007;3(1):29. doi:10.1186/1746-4269-3-29.
Clack TAR. Culture, History and Identity: Landscapes of Inhabitation in the Mount Kilimanjaro Area, Tanzania, Essays in honour of Paramount Chief Thomas Lenana Mlanga Marealle II (1915–2007) BAR International Series 1966. 2009.
United Republic of Tanzania (URT). Traditional and Alternative Medicine Act No. 23 of 2002, United Republic of Tanzania. Dar es Salaam: Government Printer; 2002.
De Rwetabula JF, Smedt RM, Mwanuzi F. Transport of micro pollutants and phosphates in the Simiyu River (tributary of Lake Victoria), Tanzania. In: Submitted and presented at The 1st International Conference on Environmental Science and Technology, New Orleans, Louisiana, USA January 23-26th. 2004. p. 2005.
Cory H. Sukuma law and custom. London: Oxford University Press; 1953.
Birley MH. Resource Management in Sukumaland, Tanzania, Africa. J Int Afr Inst. 1982;52:1–30.
Ali S. The book of Indian Birds. Bombay: Bombay Natural History Society; 1996.
Prater SH. The Book of Indian Animals. Bombay: Bombay Natural History Society; 1996.
Alexiades MN. Selected Guidelines for Ethnobotanical Research: A Field Manual, Advances in Economic Botany Bronx: The New York Botanical Garden. 1996. p. 10.
The IUCN Red List of Threatened species. 2009, http://www.iucnredlist.org
Kakati LN, Bendang A, Doulo V. Indigenous knowledge of zootherapeutic use of vertebrate origin by the Ao Tribe of Nagaland. J Hum Ecol. 2006;19(3):163–7.
Severre EM. Conservation of wildlife outside core wildlife protected areas in the new millennium, millennium conference. Mweka, Tanzania: College of African Wildlife Management; 2000.
The authors are thankful to the Head and the Dean of Biological Sciences for providing all facilities and support during the study. We are also highly grateful to all the respondents who shared their traditional ethnozoological knowledge and permitted us to take pictures. Without their involvement, this study would have been impossible.
School of Biological Sciences, College of Natural and Mathematical Sciences, the University of Dodoma, Dodoma, Tanzania
Rajeev Vats
& Simion Thomas
Correspondence to Rajeev Vats.
All authors made significant intellectual contributions to the design of the field study, data collection, data analysis and the write-up of the manuscript. Both authors read and approved the final manuscript.
Vats, R., Thomas, S. A study on use of animals as traditional medicine by Sukuma Tribe of Busega District in North-western Tanzania. J Ethnobiology Ethnomedicine 11, 38 (2015) doi:10.1186/s13002-015-0001-y
DOI: https://doi.org/10.1186/s13002-015-0001-y
Keywords
Ethnozoology
Medicinal animals | CommonCrawl |
April 2018, 15(2): 407-428. doi: 10.3934/mbe.2018018
Mathematical model for the growth of Mycobacterium tuberculosis in the granuloma
Eduardo Ibargüen-Mondragón 1,, , Lourdes Esteva 2, and Edith Mariela Burbano-Rosero 3,
Departamento de Matemáticas y Estadística, Facultad de Ciencias Exactas y Naturales, Universidad de Nariño, Calle 18 Cra 50, Pasto, Colombia
Departamento de Matemáticas, Facultad de Ciencias, Universidad Nacional Autónoma de México, 04510 México DF, México
Departamento de Biología, Facultad de Ciencias Exactas y Naturales, Universidad de Nariño, Calle 18 Cra 50, Pasto, Colombia
* Corresponding author: Eduardo Ibargüen-Mondragón
Grant No 182-01/11/201, Vicerrectoría de Investigaciones, Posgrados y Relaciones Internacionales de la Universidad de Nariño.
Received July 27, 2016 Accepted May 07, 2017 Published January 2018
In this work we formulate a model for the population dynamics of Mycobacterium tuberculosis (Mtb), the causative agent of tuberculosis (TB). Our main interest is to assess the impact of the competition among bacteria on the infection prevalence. To this end, we assume that the Mtb population has two types of growth. The first is due to bacteria produced in the interior of each infected macrophage, and it is assumed to be proportional to the number of infected macrophages. The second is of logistic type, due to the competition among free bacteria released by the same infected macrophages. The qualitative analysis and numerical results suggest the existence of forward, backward and S-shaped bifurcations when the associated reproduction number $R_0$ of the Mtb is less than unity. In addition, qualitative analysis of the model shows that there may be up to three bacteria-present equilibria, two locally asymptotically stable, and one unstable.
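To make the two growth terms concrete, the minimal Python sketch below integrates a single bacterial equation with a production term proportional to a fixed number of infected macrophages and a logistic term for the free bacteria. This is our illustrative simplification, not the full model analyzed in the paper: the macrophage and T-cell equations and all loss terms are omitted, the parameter values are picked from the ranges quoted in Table 1, and the value of $M_I$ is chosen arbitrarily.

import numpy as np
from scipy.integrate import odeint

def dB_dt(B, t, r=0.1, M_I=1.0e3, nu=0.44, K=1.0e8):
    # production by infected macrophages plus logistic growth of free bacteria
    return r * M_I + nu * B * (1.0 - B / K)

t = np.linspace(0.0, 60.0, 200)   # time in days
B = odeint(dB_dt, 10.0, t)        # start from 10 free bacteria
print(f"bacterial load after {t[-1]:.0f} days: {B[-1, 0]:.3e}")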
Keywords: Ordinary differential equations, S-shaped bifurcation, tuberculosis, granuloma, macrophages and T cells.
Mathematics Subject Classification: Primary: 34D23, 93D20; Secondary: 65L05.
Citation: Eduardo Ibargüen-Mondragón, Lourdes Esteva, Edith Mariela Burbano-Rosero. Mathematical model for the growth of Mycobacterium tuberculosis in the granuloma. Mathematical Biosciences & Engineering, 2018, 15 (2) : 407-428. doi: 10.3934/mbe.2018018
J. Alavez, R. Avendaño, L. Esteva, J. A. Flores, J. L. Fuentes-Allen, G. García-Ramos, G. Gómez and J. López Estrada, Population dynamics of antibiotic resistant M. tuberculosis, Math Med Biol, 24 (2007), 35-56. Google Scholar
R. Antia, J. C. Koella and V. Perrot, Model of the Within-host dynamics of persistent mycobacterial infections, Proc R Soc Lond B, 263 (1996), 257-263. doi: 10.1098/rspb.1996.0040. Google Scholar
M. A. Behr and W. R. Waters, Is tuberculosis a lymphatic disease with a pulmonary portal?, Lancet, 14 (2004), 250-255. doi: 10.1016/S1473-3099(13)70253-6. Google Scholar
S. M. Blower and T. Chou, Modeling the emergence of the hot zones: Tuberculosis and the amplification dynamics of drug resistance, Nat Med, 10 (2004), 1111-1116. doi: 10.1038/nm1102. Google Scholar
C. Castillo-Chávez and B. Song, Dynamical models of tuberculosis and their applications, Math Biosci Eng, 1 (2004), 361-404. doi: 10.3934/mbe.2004.1.361. Google Scholar
T. Cohen and M. Murray, Modelling epidemics of multidrug-resistant M. tuberculosis of heterogeneous fitness, Nat Med, 10 (2004), 1117-1121. Google Scholar
A. M. Cooper, Cell-mediated immune responses in tuberculosis, Annu Rev Immunol, 27 (2009), 393-422. doi: 10.1146/annurev.immunol.021908.132703. Google Scholar
C. Dye and M. A. Espinal, Will tuberculosis become resistant to all antibiotics?, Proc R Soc Lond B, 268 (2001), 45-52. doi: 10.1098/rspb.2000.1328. Google Scholar
F. R. Gantmacher, The Theory of Matrices, AMS Chelsea Publishing, Providence, RI, 1998. Google Scholar
E. Guirado and L. S. Schlesinger, Modeling the Mycobacterium tuberculosis granuloma-the critical battlefield in host immunity and disease, Frontiers in Immunology, 4 (2013), 1-7. doi: 10.3389/fimmu.2013.00098. Google Scholar
T. Gumbo, A. Louie, M. R. Deziel, L. M. Parsons, M. Salfinger and G. L. Drusano, Selection of a moxifloxacin dose that suppresses drug resistance in Mycobacterium tuberculosis, by use of an in vitro pharmacodynamic infection model and mathematical modeling, J Infect Dis, 190 (2004), 1642-1651. Google Scholar
E. G. Hoal-Van Helden, D. Hon, L. A. Lewis, N. Beyers and P. D. Van Helden, Mycobacterial growth in human macrophages: Variation according to donor, inoculum and bacterial strain, Cell Biol Int, 25 (2001), 71-81. doi: 10.1006/cbir.2000.0679. Google Scholar
E. Ibargüen-Mondragón, L. Esteva and L. Chávez-Galán, A mathematical model for cellular immunology of tuberculosis, Math Biosci Eng, 8 (2011), 973-986. doi: 10.3934/mbe.2011.8.973. Google Scholar
E. Ibargüen-Mondragón and L. Esteva, Un modelo matemático sobre la dinámica del Mycobacterium tuberculosis en el granuloma, Revista Colombiana de Matemáticas, 46 (2012), 39-65. Google Scholar
E. Ibargüen-Mondragón, J. P. Romero-Leiton, L. Esteva and E. M. Burbano-Rosero, Mathematical modeling of bacterial resistance to antibiotics by mutations and plasmids, J Biol Syst, 24 (2016), 129-146. doi: 10.1142/S0218339016500078. Google Scholar
E. Ibargüen-Mondragón, S. Mosqueraa, M. Cerón, E. M. Burbano-Rosero, S. P. Hidalgo-Bonilla, L. Esteva and J. P. Romero-Leiton, Mathematical modeling on bacterial resistance to multiple antibiotics caused by spontaneous mutations, BioSystems, 117 (2014), 60-67. Google Scholar
S. Kaufmann, How can immunology contribute to the control of tuberculosis?, Nat Rev Immunol, 1 (2001), 20-30. doi: 10.1038/35095558. Google Scholar
D. Kirschner, Dynamics of Co-infection with M. tuberculosis and HIV-1, Theor Popul Biol, 55 (1999), 94-109. Google Scholar
H. Koppensteiner, R. Brack-Werner and M. Schindler, Macrophages and their relevance in Human Immunodeficiency Virus Type Ⅰ infection, Retrovirology, 9 (2012), p82. doi: 10.1186/1742-4690-9-82. Google Scholar
Q. Li, C. C. Whalen, J. M. Albert, R. Larkin, L. Zukowsy, M. D. Cave and R. F. Silver, Differences in rate and variability of intracellular growth of a panel of Mycobacterium tuberculosis clinical isolates within monocyte model, Infect Immun, 70 (2002), 6489-6493. doi: 10.1128/IAI.70.11.6489-6493.2002. Google Scholar
G. Magombedze, W. Garira and E. Mwenje, Modellingthe human immune response mechanisms to mycobacterium tuberculosis infection in the lungs, Math Biosci Eng, 3 (2006), 661-682. doi: 10.3934/mbe.2006.3.661. Google Scholar
S. Marino and D. Kirschner, The human immune response to the Mycobacterium tuberculosis in lung and lymph node, J Theor Biol, 227 (2004), 463-486. doi: 10.1016/j.jtbi.2003.11.023. Google Scholar
J. Murphy, R. Summer, A. A. Wilson, D. N. Kotton and A. Fine, The prolonged life-span of alveolar macrophages, Am J Respir Cell Mol Biol, 38 (2008), 380-385. doi: 10.1165/rcmb.2007-0224RC. Google Scholar
G. Pedruzzi, K. V. Rao and S. Chatterjee, Mathematical model of mycobacterium-host interaction describes physiology of persistence, J Theor Biol, 376 (2015), 105-117. doi: 10.1016/j.jtbi.2015.03.031. Google Scholar
L. Ramakrishnan, Revisiting the role of the granuloma in tuberculosis, Nat Rev Immunol, 12 (2012), 352-366. doi: 10.1038/nri3211. Google Scholar
D. Russell, Who puts the tubercle in tuberculosis?, Nat Rev Microbiol, 5 (2007), 39-47. doi: 10.1038/nrmicro1538. Google Scholar
A. Saltelli, M. Ratto, S. Tarantola and F. Campolongo, Sensitivity analysis for chemical models, Chem Rev, 105 (2005), 2811-2828. Google Scholar
M. Sandor, J. V. Weinstock and T. A. Wynn, Granulomas in schistosome and mycobacterial infections: A model of local immune responses, Trends Immunol, 24 (2003), 44-52. Google Scholar
R. Shi, Y. Li and S. Tang, A mathematical model with optimal constrols for cellular immunology of tuberculosis, Taiwan J Math, 18 (2014), 575-597. doi: 10.11650/tjm.18.2014.3739. Google Scholar
D. Sud, C. Bigbee, J. L. Flynn and D. E. Kirschner, Contribution of CD8+ T cells to control of Mycobacterium tuberculosis infection, J Immunol, 176 (2006), 4296-4314. Google Scholar
D. F. Tough and J. Sprent, Life span of naive and memory T cells, Stem Cells, 13 (1995), 242-249. doi: 10.1002/stem.5530130305. Google Scholar
M. C. Tsai, S. Chakravarty, G. Zhu, J. Xu, K. Tanaka, C. Koch, J. Tufariello, J. Flynn and J. Chan, Characterization of the tuberculous granuloma in murine and human lungs: cellular composition and relative tissue oxygen tension, Cell Microbiol, 8 (2006), 218-232. doi: 10.1111/j.1462-5822.2005.00612.x. Google Scholar
S. Umekia and Y. Kusunokia, Lifespan of human memory T-cells in the absence of T-cell receptor expression, Immunol Lettt, 62 (1998), 99-104. doi: 10.1016/S0165-2478(98)00037-6. Google Scholar
L. Westera and J. Drylewicz, Closing the gap between T-cell life span estimates from stable isotope-labeling studies in mice and humans, BLOOD, 122 (2013), 2205-2212. doi: 10.1182/blood-2013-03-488411. Google Scholar
J. E. Wigginton and D. E. Kischner, A model to predict cell mediated immune regulatory mechanisms during human infection with Mycobacterium tuberculosis, J Immunol, 166 (2001), 1951-1967. doi: 10.4049/jimmunol.166.3.1951. Google Scholar
World Health Organization (WHO), Global tuberculosis report 2015. Available from: http://apps.who.int/iris/bitstream/10665/191102/1/9789241565059_eng.pdf. Google Scholar
World Health Organization (WHO), Global tuberculosis report 2016. Available from: http://apps.who.int/iris/bitstream/10665/250441/1/9789241565394-eng.pdf?ua=1. Google Scholar
M. Zhang, J. Gong, Z. Yang, B. Samten, M. D. Cave and P. F. Barnes, Enhanced capacity of a widespread strain of Mycobacterium tuberculosis to grow in human monocytes, J Infect Dis, 179 (1998), 1213-1217. Google Scholar
M. Zhang, S. Dhandayuthapani and V. Deretic, Molecular basis for the exquisite sensitivity of Mycobacterium tuberculosis to isoniazid, Proc Natl Acad Sci U S A, 93 (1996), 13212-13216. doi: 10.1073/pnas.93.23.13212. Google Scholar
Figure 1. The flow diagram of macrophages, T cells and bacteria
Figure 2. The graph of functions $g_1$ and $g_2$ defined in (20).
Figure 3. Standard regression coefficients (SCR) for $R_0 = \frac{\nu}{\gamma_U + \mu_{B}}$, assuming the values given in Table 1 for $\nu$, $\gamma_{U} = \bar\gamma\,\Lambda_U/\mu_U$ and $\mu_{B}$.
Figure 4. Standard regression coefficients (SCR) for $R_1 = \frac{\bar r\,\bar\beta\,\Lambda_U/\mu_U}{\gamma_U + \mu_{B}}$, assuming the values given in Table 1 for $\bar r$, $\bar\beta$, $\Lambda_U/\mu_U$, $\gamma_{U} = \bar\gamma\,\Lambda_U/\mu_U$ and $\mu_{B}$.
Figure 5. Numerical simulations of the temporal course of bacteria with ten initial conditions show the stability of the bacteria-present equilibrium $P_2$ and the infection-free equilibrium $P_0$ given in (47) when $\sigma = 0.24$, $\sigma_c = 0.319$, $R_0 = 0.4$, $R_0^* = 0.34$, $R_1 = 1.5$, $g_1(B^{\max}) = 1.37\times 10^{312}$ and $g_2(B^{\max}) = 1.32\times 10^{942}$.
Figure 6. Numerical simulations of the temporal course of bacteria with ten initial conditions show the stability of the bacteria-present equilibria $P_1$ and $P_3$ given in (48) when $\sigma = 2.4\times 10^{-6}$, $\sigma_c = 0.003$, $R_0 = 0.0045$, $R^*_0 = 0.0043$, $R_1 = 0.43$.
Figure 7. The stable infection-free equilibrium $P_0$ bifurcates to the stable bacteria-present equilibrium $P_1$ at the value $R_0 = 1-R_1$.
Figure 8. The results suggest forward and backward bifurcations, and a type of S-shaped bifurcation
Table 1. Interpretation and values of the parameters. Data are deduced from the literature (references).
Parameter Description Value Reference
$\Lambda_U$ growth rate of uninfected macrophages ($M_U$) 600 -1000 day$^{-1}$ [19,23,30]
$\bar\beta$ infection rate of Mtb $2.5*10^{-11}-2.5*10^{-7}$day$^{-1}$ [13,30]
$\bar\alpha_T$ elim. rate of infected Mtb by T cell $2*10^{-5}-3*10^{-5}$ day$^{-1}$ [13,30]
$\mu_U$ nat. death rate of $M_U$ 0.0028-0.0033 day$^{-1}$ [22,30]
$\mu_I$ nat. death rate of $M_I$ 0.011 day$^{-1}$ [22,35,30]
$\nu$ growth rate of Mtb 0.36 -0.52 day$^{-1}$ [12,20,38]
$\mu_{B}$ natural death rate of Mtb 0.31 -0.52 day$^{-1}$ [39,30]
$\bar \gamma_U$ elim. rate of Mtb by $M_U$ $1.2* 10^{-9} - 1.2*10^{-7}$ day$^{-1}$ [30]
$K$ carrying cap. of Mtb in the gran. $10^8-10^9$ bacteria [7]
$\bar k_I$ growth rate of T cells $8*10^{-3}$ day$^{-1}$ [11]
$T_{max}$ maximum recruitment of T cells 5.000 day$^{-1}$ [11]
$\mu_T$ natural death rate of T cells 0.33 day$^{-1}$ [35,30]
$\bar r$ Average Mtb released by one $M_U$ 0.05-0.2 day$^{-1}$ [30,35]
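For orientation, the sketch below evaluates the threshold quantities quoted in the captions of Figures 3 and 4, $R_0 = \nu/(\gamma_U + \mu_B)$ and $R_1 = \bar r\,\bar\beta\,(\Lambda_U/\mu_U)/(\gamma_U + \mu_B)$, using mid-range values from Table 1; the particular values chosen within each range are ours, so the printed numbers are only indicative.

Lambda_U, mu_U = 800.0, 0.003   # recruitment and natural death rate of uninfected macrophages
beta_bar = 2.5e-9               # infection rate of Mtb
gamma_bar = 1.2e-8              # elimination rate of Mtb by uninfected macrophages
nu, mu_B = 0.44, 0.40           # growth and natural death rates of Mtb
r_bar = 0.1                     # average Mtb released (r-bar in Table 1)

M_U_eq = Lambda_U / mu_U        # uninfected-macrophage equilibrium level
gamma_U = gamma_bar * M_U_eq
R0 = nu / (gamma_U + mu_B)
R1 = r_bar * beta_bar * M_U_eq / (gamma_U + mu_B)
print(f"R0 = {R0:.2f}, R1 = {R1:.2e}")  # roughly R0 ~ 1.1, R1 ~ 1.7e-4 for these choices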
Tzung-shin Yeh. S-shaped and broken s-shaped bifurcation curves for a multiparameter diffusive logistic problem with holling type-Ⅲ functional response. Communications on Pure & Applied Analysis, 2017, 16 (2) : 645-670. doi: 10.3934/cpaa.2017032
Sabri Bensid, Jesús Ildefonso Díaz. Stability results for discontinuous nonlinear elliptic and parabolic problems with a S-shaped bifurcation branch of stationary solutions. Discrete & Continuous Dynamical Systems - B, 2017, 22 (5) : 1757-1778. doi: 10.3934/dcdsb.2017105
Shao-Yuan Huang, Shin-Hwa Wang. On S-shaped bifurcation curves for a two-point boundary value problem arising in a theory of thermal explosion. Discrete & Continuous Dynamical Systems, 2015, 35 (10) : 4839-4858. doi: 10.3934/dcds.2015.35.4839
Chih-Yuan Chen, Shin-Hwa Wang, Kuo-Chih Hung. S-shaped bifurcation curves for a combustion problem with general arrhenius reaction-rate laws. Communications on Pure & Applied Analysis, 2014, 13 (6) : 2589-2608. doi: 10.3934/cpaa.2014.13.2589
Alan D. Rendall. Multiple steady states in a mathematical model for interactions between T cells and macrophages. Discrete & Continuous Dynamical Systems - B, 2013, 18 (3) : 769-782. doi: 10.3934/dcdsb.2013.18.769
Xue Dong He, Roy Kouwenberg, Xun Yu Zhou. Inverse S-shaped probability weighting and its impact on investment. Mathematical Control & Related Fields, 2018, 8 (3&4) : 679-706. doi: 10.3934/mcrf.2018029
Suman Ganguli, David Gammack, Denise E. Kirschner. A Metapopulation Model Of Granuloma Formation In The Lung During Infection With Mycobacterium Tuberculosis. Mathematical Biosciences & Engineering, 2005, 2 (3) : 535-560. doi: 10.3934/mbe.2005.2.535
Tomás Caraballo, Renato Colucci, Luca Guerrini. Bifurcation scenarios in an ordinary differential equation with constant and distributed delay: A case study. Discrete & Continuous Dynamical Systems - B, 2019, 24 (6) : 2639-2655. doi: 10.3934/dcdsb.2018268
Bernard Dacorogna, Alessandro Ferriero. Regularity and selecting principles for implicit ordinary differential equations. Discrete & Continuous Dynamical Systems - B, 2009, 11 (1) : 87-101. doi: 10.3934/dcdsb.2009.11.87
Zvi Artstein. Averaging of ordinary differential equations with slowly varying averages. Discrete & Continuous Dynamical Systems - B, 2010, 14 (2) : 353-365. doi: 10.3934/dcdsb.2010.14.353
Serge Nicaise. Stability and asymptotic properties of dissipative evolution equations coupled with ordinary differential equations. Mathematical Control & Related Fields, 2021 doi: 10.3934/mcrf.2021057
Stefano Maset. Conditioning and relative error propagation in linear autonomous ordinary differential equations. Discrete & Continuous Dynamical Systems - B, 2018, 23 (7) : 2879-2909. doi: 10.3934/dcdsb.2018165
W. Sarlet, G. E. Prince, M. Crampin. Generalized submersiveness of second-order ordinary differential equations. Journal of Geometric Mechanics, 2009, 1 (2) : 209-221. doi: 10.3934/jgm.2009.1.209
Aeeman Fatima, F. M. Mahomed, Chaudry Masood Khalique. Conditional symmetries of nonlinear third-order ordinary differential equations. Discrete & Continuous Dynamical Systems - S, 2018, 11 (4) : 655-666. doi: 10.3934/dcdss.2018040
Ping Lin, Weihan Wang. Optimal control problems for some ordinary differential equations with behavior of blowup or quenching. Mathematical Control & Related Fields, 2018, 8 (3&4) : 809-828. doi: 10.3934/mcrf.2018036
Jean Mawhin, James R. Ward Jr. Guiding-like functions for periodic or bounded solutions of ordinary differential equations. Discrete & Continuous Dynamical Systems, 2002, 8 (1) : 39-54. doi: 10.3934/dcds.2002.8.39
Hongwei Lou, Weihan Wang. Optimal blowup/quenching time for controlled autonomous ordinary differential equations. Mathematical Control & Related Fields, 2015, 5 (3) : 517-527. doi: 10.3934/mcrf.2015.5.517
Alex Bihlo, James Jackaman, Francis Valiquette. On the development of symmetry-preserving finite element schemes for ordinary differential equations. Journal of Computational Dynamics, 2020, 7 (2) : 339-368. doi: 10.3934/jcd.2020014
Iasson Karafyllis, Lars Grüne. Feedback stabilization methods for the numerical solution of ordinary differential equations. Discrete & Continuous Dynamical Systems - B, 2011, 16 (1) : 283-317. doi: 10.3934/dcdsb.2011.16.283
Bin Wang, Arieh Iserles. Dirichlet series for dynamical systems of first-order ordinary differential equations. Discrete & Continuous Dynamical Systems - B, 2014, 19 (1) : 281-298. doi: 10.3934/dcdsb.2014.19.281
UNSPECIFIED (1986) Microbe-mineral interactions of relevance in the hydrometallurgy of complex sulphides. In: International Conference on `Progress in Metallurgical Research: Fundamental and Applied Aspects, IIT-Kanpur, p. 105-511.
Babu, Ramesh (1986) Studies on drop formation at conical tips - a theoretical approach. In: Proceedings of International Conference on `Progress in metallurgical research: fundamental and applied aspects, IIT-Kanpur, p. 279-87, pp. 4918-4927.
Sundaramoorthy, M and Parrack, P and Sasisekharan, V (1986) Application of the precession method to fiber diffraction: structural variations in lithium B-DNA. In: Proceeding Conversation Discipline Biomolecular Stereodynamics, pp. 217-225.
Banerjee, K and Ramakrishnan, KR and Sastry, PS (1986) An SIMD Architecture for Relaxation Labelling. In: Platinum Jubilee Conference on Systems and Signal Processing, December 11-13, 1986, Bangalore.
Ganguly, P (1986) Electrical transport and magnetic properties of oxides with potassium nickel fluoride $(K_2NiF_4)$ structure. In: Adv. Solid State Chem., Proc. INSA Golden Jubilee Symp. Solid State Chem, 1985, New Delhi, pp. 135-158.
Gopalakrishnan, J (1986) Low-temperature synthesis of novel metal oxides by topochemical reactions. In: Advance Solid State Chemistry Proceedings of INSA Golden Jubilee Symposium-Solid State Chem, 1985, New Delhi, pp. 48-66.
Krishnan, V (1986) Biomimetic model reactions of cavity-bearing porphyrins. In: Proceedings of the Indian National Science Academy, Part A: Physical Sciences, 1986, pp. 909-923.
Moudgal, NR and Martin, F and Kotagi, SG and Sairam, MR and Ravindranath, N and Rao, AJ and Murthy, GS (1986) Development of oFSH as a vaccine for the male-A status report on the recent researches carried out using the bonnet monkey (M.radiata). In: Immunological Approaches to Contraception and Promotion of Fertility, 1985, Plenum, New York, pp. 103-110.
Rao, CNR (1986) Developments in solid state chemistry: a partial overview. In: INSA Golden Jubilee Symposium on Solid State Chemistry, 1985, New Delhi, pp. 1-24.
Rao, CNR (1986) Synthesis and reactions of some novel metal oxide systems. In: Indian Academy of National Sciences Part A: Physical Sciences, 1986, pp. 699-714.
Rao, KJ (1986) Chemical bond and the nature of inorganic glasses. In: INSA Golden Jubilee Symposium on Solid State Chemistry, 1985, pp. 176-191.
Rao, KJ and Rao, BG and Damodaran, RV and Selvaraj, U (1986) Novel glasses with octahedral structural groups. In: 14th International Congress on Glass, 1986, Calcutta, pp. 182-189.
Srinivasa, N and Krishnan, V and Ramakrishnan, KR and Rajgopal, K (1986) Image Reconstruction Form Truncated Projections : A Linear Predication Approach. In: 1986 IEEE International Conference on Acoustics, Speech, and Signal Processing. ICASSP '86, April, Tokyo, Vol.11, 1733-1736.
Thathachar, MAL and Sastry, PS (1986) Estimator Algorithms for Learning Automata. In: Platinum Jubilee Conference on Systems and Signal Processing, Dec. 1986, Bangalore.
Thukaram, D and Iyengar, Ramakrishna BS and Parthasarathy, K (1986) Optimum allocation of reactive power in AC/DC power systems. In: Proc. 53rd Annual Research and Development Session, 8-10 May 1986, Bhubaneswar, New Delhi, India.
Thukaram, D and Parthasarathy, K and Iyengar, Ramakrishna BS (1986) Static VAR compensators for unbalanced reactive power demands and harmonic minimization. In: Fourth National Power Systems Conference, Feb. 1986, Varanasi, India.
Gore, AP and Paranjape, Sharayu and Rajarshi, MB and Gadgil, Madhav (1986) Some Methods for Summarizing Survivorship Data in Nonstandard Situations. In: Biometrical Journal, 28 (5). 577 -586.
Adiga, BS and Shankar, P (1986) Fast Public Key Cryptosystem Based On Matrix Rings. In: Electronics Letters, 22 (22). pp. 1182-1183.
Aggarwal, Vijay and Tikekar, VG and Hsu, Lie-Fern (1986) Bottleneck assignment problems under categorization. In: Computers & Operations Research, 13 (1). pp. 11-26.
Ajitkumar, P and Cherayil, Joseph D (1986) Ammonium ions prevent methylation of uridine to ribothymidine in Azotobacter vinelandii tRNA. In: Journal of Biosciences, 10 (2). pp. 267-276.
Akila, R and Jacob, KT and Shukla, AK (1986) Concept of thermodynamic capacity. In: Bulletin of Materials Science, 8 (4). pp. 453-465.
Anand, GV and George, Mathews K (1986) Normal-mode sound propagation in an ocean with sinusoidal surface waves. In: Journal of the Acoustical Society of America, 80 (1). pp. 238-243.
Anantharamaiah, KR and Bhattacharya, D (1986) Ionized Gas towards Galactic Centre-Constraints from Low-Frequency Recombination Lines. In: Journal of Astrophysics & Astronomy, 7 (3). 141 -153.
Ananthraj, S and Varma, KBR and Rao, KJ (1986) Thermal Properties of Silver Pyrophosphate - Anomalously High Glass Heat Capacities. In: Materials Research Bulletin, 21 (11). pp. 1369-1374.
Arivoli, T and Ramkumar, K and Satyam, M (1986) Magnetoresistors based on Composites. In: Journal of Physics D Applied Physics, 19 (9). pp. 183-185.
Arjunan, P and Ramamurthy, V (1986) Selectivity in the Photochemistry of $\beta$-ionyl and $\beta$-ionylidene Derivatives in $\beta$-cyclodextrin: Microsolvent Effect. In: Journal of Photochemistry, 33 (1). pp. 123-134.
Arumugam, S and Khetrapal, CL (1986) Nuclear magnetic resonance spectra of oriented bicyclic systems containing heteroatom(s):the spectrum of 2-thiocoumarin. In: Canadian Journal of Chemistry / Revue canadienne de chimie, 64 (4). pp. 714-716.
Asokan, S and Gopal, ESR and Parthasarathy, G (1986) Pressure-induced polymorphous crystallization in bulk Si20Te80 glass. In: Journal of Materials Science, 21 (2). pp. 625-629.
Asokan, S and Parthasarathy, G and Gopal, ESR (1986) Crystallization Studies on bulk $Si_xTe_{100-x}$ Glasses. In: Journal of Non-Crystalline Solids, 86 (1-2). pp. 48-64.
Asokan, S and Parthasarathy, G and Gopal, ESR (1986) Evidence for a new metastable crystalline compound in Ge---Te system. In: Materials Research Bulletin, 21 (2). 217 -224.
Asokan, S and Parthasarathy, G and Subbanna, G N and Gopal, E S R (1986) Electrical transport and crystallization studies of glassy semiconducting Si20Te80 alloy at high pressure. In: Journal of Physics and Chemistry of Solids, 47 (4). 341 -348.
Atre, MV and Mukunda, N (1986) Classical particles with internal structure: general formalism and application to first-order internal spaces. In: Journal of Mathematical Physics, 27 (12). pp. 2908-2919.
Ayyoob, M and Hegde, MS (1986) Chlorination of silver dosed with potassium and barium in presence of oxygen: An X-ray photoelectron spectroscopy study. In: Journal of Catalysis, 97 (2). 516 -526.
Ayyoob, Mohammed and Hegde, Manjanath S (1986) Electron Spectroscopic Studies of Formic Acid Adsorption and Oxidation on Cu and Ag dosed with Barium. In: Journal of the Chemical Society, Faraday Transactions 1: Physical Chemistry in Condensed Phases, 82 . 1651 -1662.
Babu, Ramesh S (1986) Experimental Studies on Drop Formation at the Tip of Melting Rods. In: Metallurgical and Materials Transactions B, Process Metallurgy and Materials Processing Science, 17 (3). pp. 471-477.
Babu, Ramesh S (1986) An absolute method for the determination of surface tension of liquids using pendent drop profiles. In: Bulletin of Materials Science, 90 (18). 4337 -4340.
Babu, DS and Vedavathy, TS (1986) Improving the optimum generalised stop-and-wait ARQ scheme. In: Electronics Letters, 22 (12). pp. 649-650.
Babu, Ramesh J and Bhatt, Vivekananda M (1986) New Reagents $4^1$. Reduction of Sulphonyl Chlorides and Sulphoxides with Aluminum Iodide. In: Tetrahedron Letters, 27 (9). pp. 1073-1074.
Bagchi, Biman (1986) Debye-Waller factor of the solid from the self-diffusion coefficient at the solid-liquid interface. In: Journal of Chemical Physics, 85 (8). pp. 4667-4668.
Bagchi, Biman (1986) Dynamic structure factor across the liquid-solid interface: appearance of a delta-function elastic peak. In: Chemical Physics Letters, 125 (1). 91 -96.
Bagchi, Biman (1986) Excitation wavelength and viscosity dependence of Landau-Zener electronic transitions in condensed media. In: Chemical Physics Letters, 128 (5-6). pp. 521-527.
Bagchi, Biman and Kirkpatrick, TR (1986) On the kinetics of crystal growth from a supercooled melt. In: Proceedings of the Indian Academy of Sciences - Chemical Sciences, 96 (6). 465-472.
Bai, BN Pramila and Biswas, SK (1986) Effect of Load on Dry Sliding Wear of Aluminum-Silicon Alloys. In: Tribology Transactions, 29 (1). 116 -120.
Balaji, VN and Rao, Jagannatha M and Rao, Shashidhar N and Dietrich, Stephen W and Sasisekharan, V (1986) Geometry Of Proline And Hydroxyproline .1. An Analysis Of X-Ray Crystal-Structure Data. In: Biochemical and Biophysical Research Communications, 140 (3). 895 -900.
Balakrishnan, M and Chinnaiya, GP and Nair, PG and Rao, Jagannadha A (1986) Studies on serum progesterone levels in Zebu × Holstein heifers during pre- and peripubertal periods. In: Animal Reproduction Science, 11 (1). pp. 11-15.
Balaram, Hemalatha and Sukumar, M and Balaram, P (1986) Stereochemistry of $\alpha$-Aminoisobutyric Acid Peptides in Solution: Conformations of Decapeptides with a Central Triplet of Contiguous L-Amino Acids. In: Biopolymers, 25 (11). pp. 2209-2223.
Balasubrahmanyam, SN (1986) Anisochrony of O-methylene protons of ethyl ester functions and structural factors in the connected chiral entities. In: Journal of Chemical Sciences, 96 (1-2). pp. 21-58.
Bardi, R and Piazzesi, AM and Toniolo, C and Raj, PA and Raghothama, S and Balaram, P (1986) Solid state and solution conformation of Boc-L-Met-Aib-L-Phe-OMe. Beta-turn conformation of a sequence related to an active chemotactic peptide analog. In: International Journal of Peptide & Protein Research, 27 (3). pp. 229-238.
Bardi, R and Piazzesi, AM and Toniolo, C and Antony Raj, P and Raghothama, Srinivasarao and Balaram, Padmanabhan (1986) Conformations of the amino terminal tetrapeptide of emerimicins and antiamoebin in solution and in the solid state. In: International Journal of Biological Macromolecules, 8 (4). pp. 201-206.
Bardi, R and Piazzesi, AM and Toniolo, C and Sukumar, M and Balaram, P (1986) Stereochemistry of Peptides Containing 1-Aminocyclopentanecarboxylic Acid $({Acc}^5)$ : Solution and Solid-state Conformations of $Boc-{Acc}^5-{Acc}^5-NHMe$. In: Biopolymers, 25 (9). pp. 1635-1644.
Bhanuprakash, K and Kulkarni, GV and Chandra, Asish K (1986) On calculations of intermolecular potentials. In: Journal of Computational Chemistry, 7 (6). pp. 731-738.
Bharathi, Devi B and Sarma, VVS (1986) A Fuzzy Approximation Scheme for Sequential Learning in Pattern Recognition. In: IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 16 (5). 668 -679.
Bhashyam, AT and Deshpande, SM and Mukunda, HS and Goyal, G (1986) A Novel Operator-Splitting Technique for One-Dimensional Laminar Premixed Flames. In: Combustion Science and Technology, 46 (3-6). 223 -247.
Bhaskarwar, Ashok N and Kumar, R (1986) Oxidation of sodium sulphide in the presence of fine activated carbon particles in a foam bed contactor. In: Chemical Engineering Science, 41 (2). 399 -404.
Bhat, Vasudeva and Gopalakrishnan, J (1986) HNbWO6 and HTaWO6: Novel oxides related to ReO3 formed by ion exchange of rutile-type LiNbWO6 and LiTaWO6. In: Journal of Solid State Chemistry, 63 (2). pp. 278-283.
Bhat, Vasudeva and Gopalakrishnan, J (1986) A New Method for the Synthesis of Oxide Bronzes of Tungsten, Molybdenum and Vanadium. In: Journal of the Chemical Society, Chemical Communications . pp. 1644-1645.
Bhatia, KL and Gosain, DP and Parthasarathy, G and Gopal, ESR (1986) Bismuth-doped amorphous-germanium sulfidesemiconductors. In: Physical Review B, 34 (12). pp. 8786-8793.
Bhatia, KL and Gosain, DP and Parthasarathy, G and Gopal, ESR (1986) Morphological structure of bismuth-doped n-type amorphous germanium sulphide semiconductors. In: Journal of materials science letters, 5 (12). 1281 -1284.
Bhatia, KL and Gosain, DP and Parthasarathy, G and Gopal, ESR (1986) On the structural features of doped amorphous chalcogenide semiconductors. In: Journal of Non-Crystalline Solids, 86 (1-2). pp. 65-71.
Bhatia, KL and Gosain, DP and Parthasarathy, G and Gopal, ESR (1986) Pressure-induced first-order transition in layered crystalline semiconductor GeSe to a metallic phase. In: Physical Review B, 33 (2). pp. 1492-1494.
Bhatt, MV and Hosur, BM (1986) Electron transfer mechanism for periodic acid oxidation of aromatic substrates. In: Indian Journal of Chemistry, Section B: Organic Chemistry Including Medicinal Chemistry, 25B (10). pp. 1004-1005.
Bhatt, Vivekananda M and Shashidhar, MS (1986) Carbonyl Participation During the Hydrolysis of Aryl Benzenesulphonates. In: Tetrahedron Letters, 27 (19). pp. 2165-2166.
Bhattacharya, D and Srinivasan, G (1986) On the implication of the recently discovered 5 millisecond binary pulsar PSR 1855+09. In: Current science, 55 (7). 327 -330.
Bhattacharyya, K and Das, PK and Rao, Nageshwara B and Ramamurthy, V (1986) Laser flash photolysis study of triplets of cyclobutanethiones. In: Journal of Photochemistry, 32 (3). pp. 331-340.
Bhattacharyya, K and Ramamurthy, V and Das, PK and Sharat, S (1986) Short-lived triplets of aliphatic thioketenes. In: Journal of Photochemistry, 35 (3). pp. 299-309.
Bhattacharyya, Kankan and Das, Paritosh K and Ramamurthy, Vaidhyanathan and Rao, V Pushkara (1986) Triplet-state Photophysics and Transient Photochemistry of Cyclic Enethiones: A Laser Flash Photolysis Study. In: Journal of the Chemical Society, Faraday Transactions II, 82. pp. 135-147.
Biswas, Margaret and Sekharudu, Y Chandra and Rao, VSR (1986) Complex carbohydrates: 1. Conformational studies on some oligosaccharides related to N-glycosyl proteins which interact with concanavalin A. In: International Journal of Biological Macromolecules, 8 (1). pp. 2-8.
Biswas, NN (1986) Foldable compatibility matrix for the folding of programmable logic arrays. In: International Journal of Electronics, 61 (1). pp. 97-103.
Biswas, SK and Mahesh, MS and Iyengar, BSR (1986) Simple new PWM patterns for thyristor three-phase AC/DC convertors. In: IEE Proceedings B: Electric Power Applications, 133 (6). 354 -358.
Bokhari, SA and Balakrishnan, N (1986) A Method to Extend the Spectral Iteration Technique. In: IEEE Transations on Antennas and Propagation, AP-34 (1). pp. 51-57.
Bose, DN and Seishu, B and Parthasarathy, G and Gopal, ESR (1986) Doping Dependence of Semiconductor-Metal Transition in InP at High Pressures. In: Proceedings of the Royal Society A: Mathematical, Physical & Engineering Sciences, 405 (1829). pp. 345-353.
Buttrey, DJ and Honig, JM and Rao, CNR (1986) Magnetic properties of quasi-two-dimensional La2NiO4. In: Journal of Solid State Chemistry, 64 (3). 287 -295.
Chakrabarti, A (1986) Cooling of a composite slab. In: Applied Scientific Research, 43 (3). pp. 213-225.
Chakrabarti, A (1986) Diffraction by a Dielectric Half-Plane. In: IEEE Transactions on Antennas and Propagation, 34 (6). 830 -833.
Chakrabarti, A (1986) The sputtering temperature of a cooling cylindrical rod with an insulated core. In: Applied Scientific Research, 43 (2). 107 -113.
Chakrabarti, A and Nalini, VN (1986) Hydrodynamic pressure on a dam with a periodically corrugated reservoir bed. In: Acta Mechanica, 60 (1-2). pp. 91-97.
Chakravarthy, Purandar and Reddy, NM (1986) Theoretical study of a 16-µm CO2 downstream-mixing gasdynamic laser: A two-dimensional approach. In: Applied Physics Letters, 48 (4). 263 -265.
Chakravarty, AR (1986) Chemistry of diruthenium compounds having metal-metal multiple bonds. In: Proceedings of the Indian National Science Academy, Part A: Physical Sciences, 52 (4). pp. 749-763.
Chakravarty, Purandar and Reddy, NM and Reddy, KPJ (1986) A study of the effect of N2 reservoir temperature on a 16 μm CO2-N2 downstream-mixing gasdynamic laser. In: Optics Communications, 58 (2). pp. 130-132.
Chanda, M and Rempel, GL (1986) Cuprous oxide catalyzed air oxidation of thiosulfate and tetrathionate. In: Applied Catalysis, 23 (1). pp. 101-110.
Chandra, H Sharat (1986) Haldane's rule back in the news. In: Nature, 323 (6083). p. 20.
Chandrasekar, A and Nath, G (1986) Time dependent rotational flow of a viscous fluid over an infinite porous disk with a magnetic field. In: International Journal of Engineering Science, 24 (10). pp. 1667-1680.
Chandrasekhar, J and Rao, CNR (1986) Computer Simulation of Quenched Liquids: The Glassy State of Water. In: Chemical Physics Letters, 131 (3). pp. 267-270.
Chandrasekharaiah, HS (1986) Natural Frequencies And Transient Responses Of 3-Phase Rotating Machine Windings. In: IEEE Transactions on Energy Conversion, 1 (3). 167 -173.
Chandrasekharappa, SC and Gopalakrishnan, AS and Jacob, TM (1986) Antibodies specific to deoxythymidine 5'-phosphate. In: Indian Journal of Biochemistry & Biophysics, 23 (4). pp. 233-237.
Chandrasekharappa, SC and Jacob, TM (1986) Purification of Antibodies Specific to a Dinucleotide Using the Hapten Bound to Deae Cellulose as an Affinity Column. In: Immunological Investigations, 15 (1). pp. 1-9.
Chandrashekhara, K and Gopalakrishnan, P (1986) Analysis of an orthotropic cylindrical shell having a transversely isotropic core subjected to axisymmetric load. In: Thin-Walled Structures, 4 (3). pp. 223-237.
Changkakoti, Rupak and Pappu, Sastry V (1986) Study on the pH dependence of diffraction efficiency of phase holograms in dye sensitized dichromated gelatin. In: Applied Optics, 25 (5). pp. 798-801.
Char, Shobha and Gopinathan, Karumathil P (1986) Arginyl-tRNA Synthetase from Mycobacterium smegmatis SN2: Purification and Kinetic Mechanism. In: The Journal of Biochemistry, 100 (2). pp. 349-357.
Chary, BR and Bhat, HL and Chandrasekhar, P and Narayanan, PS (1986) Vibrational spectroscopic studies of the ferroelectric $LiRbSO_4$. In: Journal of Raman Spectroscopy, 17 (1). pp. 59-63.
Chary, BR and Shashikala, MN and Bhat, HL (1986) Pressure effect on the Dielectric Properties of $LiCsSO_4$. In: Current Science, 55 (20). pp. 1021-1223.
Chattopadhyay, K and Aaronson, HI (1986) Interfacial structure and crystallographic studies of transformations in $\beta'$ and $\beta$ Cu---Zn alloys-II. Martensitic formation of $\alpha_1'$ plates in $\beta'$. In: Acta Metallurgica, 34 (4). pp. 713-720.
Chattopadhyay, K and Ravi, VA and Ranganathan, S (1986) Non-equilibrium trapping in Al---In system. In: Acta Metallurgica, 34 (4). 691 -693.
Chaturvedi, VK and Kurup, CK (1986) Effect of lutein on the transport of Ca2+ across phospholipid bilayer and mitochondrial membrane. In: Biochemistry International, 12 (2). 373 -377.
Chaturvedi, VK and Kurup, Ramakrishna CK (1986) Interaction of lutein with phosphatidylcholine bilayers. In: Biochimica et Biophysica Acta, 860 (2). pp. 286-292.
Chiu, Charles B and Pasupathy, J and Wilson, Sanford J (1986) Determination of baryon magnetic moments from QCD sum rules. In: Physical Review D, 33 (7). pp. 1961-1973.
Choubey, Divaker and Gopinathan, KP (1986) Characterization of beta-lactamase from Mycobacterium smegmatis SN2. In: Current Microbiology, 13 (3). pp. 171-175.
Choubey, Divaker and Gopinathan, KP (1986) Factors affecting the synthesis and distribution of beta-lactamase in Mycobacterium smegmatis SN2 cultures. In: Current Microbiology, 13 (2). 103 -106.
Choudhuri, Arnab Rai (1986) Magnetic helicity as a constraint on coronal dissipation. In: NASA. Goddard Space Flight Center Coronal and Prominence Plasmas . pp. 451-456.
Choudhuri, Arnab Rai (1986) Magnetic energy dissipation in force-free jets. In: Astrophysical Journal, 310 (1). pp. 96-103.
Choudhuri, Arnab Rai (1986) The dynamics of magnetically trapped fluids. I - Implications for umbral dots and penumbral grains. In: Astrophysical Journal, 302 . pp. 809-825.
Conrad, Michael and Brahmachari, SK and Sasisekharan, V (1986) DNA structural variability as a factor in gene expression and evolution. In: Biosystems, 19 (2). pp. 123-126.
Das, Manoj K and Raghothama, S and Balaram, P (1986) Membrane channel forming polypeptides. Molecular conformation and mitochondrial uncoupling activity of antiamoebin,an a-aminoisobutyric acid containing peptide. In: Biochemistry, 25 (22). pp. 7110-7117.
Dasgupta, Chandan and Pandit, Rahul (1986) Kinetics of domain growth: The relevance of two-step quenches. In: Physical Review B, 33 (7). pp. 4752-4757.
Dasgupta, Dipak and Rajagopalan, Malini and Sasisekharan, V (1986) DNA-Binding Characteristics of a Synthetic Analog of Distamycin. In: Biochemical and Biophysical Research Communications, 140 (2). pp. 626-631.
Dattaguru, B and Naidu, ACB and Krishnamurthy, T and Ramamurthy, TS (1986) Development of special fastener elements. In: Computers & Structures, 24 (1). 127 -134.
Deobagkar, DN and Shankar, V and Deobagkar, DD (1986) Separation of 5-methylcytosine-rich DNA using immobilized antibody. In: Enzyme and Microbial Technology, 8 (2). pp. 97-100.
Desayi, P and Ganesan, N (1986) Fracture Behavior of Ferrocement Beams. In: Journal of Structural Engineering, 112 (7). 1509 -1525.
Devi, CD Surma and Nagaraj, M and Nath, G (1986) Axial heat conduction effects in natural convection along a vertical cylinder. In: International Journal of Heat and Mass Transfer, 29 (4). pp. 654-656.
Dhanasekaran, N and Moudgal, NR (1986) Studies on follicular atresia: role of tropic hormone and steroids in regulating cathepsin-D activity of preantral follicles of the immature rat. In: Molecular and Cellular Endocrinology, 44 (1). 77 -84.
Dinesha, KV and Iyer, Suman B and Vishveshwara, Saraswathi (1986) Energy expressions for atomic configurations in the L-S coupling scheme. In: International Journal of Quantum Chemistry, 30 (6). pp. 783-790.
DurgaKumari, B and Adiga, Radhakantha P (1986) Estrogen modulation of retinol-binding protein in immature chicks: Comparison with riboflavin carrier protein. In: Molecular and Cellular Endocrinology, 46 (2). pp. 121-130.
DurgaKumari, B and Adiga, Radhakantha P (1986) Hormonal induction of riboflavin carrier protein in the chicken oviduct and liver: a comparison of kinetics and modulation. In: Molecular and Cellular Endocrinology, 44 (3). 285-292.
Durrant, MC and Hegde, MS and Rao, CNR (1986) Electronic structures of $H_2O.BF_3$ and related n-v addition compounds: a combined EELS-UPS study in vapor phase. In: Journal of Chemical Physics, 85 (11). pp. 6356-6360.
Elhawary, ME and Rao, RS and Christensen, GS (1986) Optimal hydrothermal load flow: Formulation and a successive approximation solution for fixed head systems. In: Optimal Control Applications and Methods, 7 (4). pp. 337-354.
Elliott, Stephen R and Rao, CNR and Thomas, John M (1986) The Chemistry of the Noncrystalline State. In: Angewandte Chemie International Edition in English, 25 (1). pp. 31-46.
Francis, Vidyasagar NK and Dwarki, Varavan IJ and Padmanaban, Govindarajan (1986) A comparative study of the regulation of cytochrome P-450 and glutathione transferase gene expression in rat liver. In: Nucleic Acids Research, 14 (6). pp. 2497-2510.
Gangadevi, T and Rao, M Subba and Kutty, TR Narayan (1986) Kinetics of thermal decomposition of barium zirconyl oxalate. In: Monatshefte für Chemie, 117 (1). pp. 21-32.
Ganguli, AK and Gopalakrishnan, J (1986) Magnetic properties of calcium iron manganese oxide $Ca_2Fe_{2-x}Mn_xO_5$. In: Journal of Chemical Sciences, 97 (5-6). pp. 627-630.
Ganguly, P and Rao, CNR (1986) High $T_c$ superconductivity in oxides derived from lanthanum strontium copper oxide $(La_{1.8}Sr_{0.2}CuO_4)$. In: Proceedings - Indian Academy of Sciences, Chemical Sciences, 97 (5-6). pp. 631-633.
Ganguly, P and Vasanthacharya, NY (1986) Infrared and Mössbauer spectroscopic study of the metal-insulator transition in some oxides of perovskite structure. In: Journal of Solid State Chemistry, 61 (2). pp. 164-170.
Ganguly, P and Vasanthacharya, NY (1986) On the use of oxalates as starting materials for low-temperature preparation of bronzes and other reduced phases. In: Materials Research Bulletin, 21 (4). pp. 479-482.
Gehlot, Vijay and Srikanth, YN (1986) An interpreter for SLIPS - an applicative language based on LAMBDA-Calculus. In: Computer Languages, 11 (1). pp. 1-13.
Ghosal, Dipak and Patnaik, L M (1986) Parallel polygon scan conversion algorithms: Performance evaluation on a shared bus architecture. In: Computers & Graphics, 10 (1). pp. 7-25.
Gopalakrishna, AV and Ganagi, MS (1986) Weak discontinuities in relativisitic MHD. In: Astrophysics and Space Science, 120 (1). pp. 139-149.
Gopalakrishnan, J (1986) Synthesis and structure of some interesting oxides of bismuth. In: Proceedings of the Indian Academy of Sciences - Chemical Sciences, 96 (6). pp. 449-458.
Gopalakrishnan, M and Patnaik, LM (1986) Integrating voice and data on SALAN: an experimental local area network. In: Computer Communications, 9 (4). pp. 186-194.
Gopalarao, AS and Thukaram, D and Iyengar, Ramakrishna BS (1986) Torque Angle Loop Analysis of Synchronous Machines. In: Electric Machines & Power Systems, 11 (3). pp. 215-227.
Govindarajan, S and Babu, PJ and Patil, KC (1986) Thermal analysis of metal sulfate hydrazinates and hydrazinium metal sulfates. In: Thermochimica Acta, 97 . pp. 287-293.
Govindarajan, Subbiah and Patil, Kashinath C and Manohar, Hattikudur and Werner, Per-Erik (1986) Hydrazinium as a Ligand: Structural, Thermal, Spectroscopic, and Magnetic Studies of Hydrazinium Lanthanide Di-sulphate Monohydrates; Crystal Structure of the Neodymium Compound. In: Journal of the Chemical Society, Dalton Transactions (1). pp. 119-123.
Govindarajan, Subbiah and Patil, Kashinath C and Poojary, Damodara M and Hattikudur, Manohar (1986) Synthesis, Characterization and X-ray Structure of Hexahydrazinium Diuranyl Pentaoxalate Dihydrate, ($N_2H_5)_6(UO_2)_2(C_2O_4)_5.2H_2O$. In: Inorganica Chimica Acta, 120 (1). pp. 103-107.
Govindarao, Venneti MH and Chidambaram, M (1986) Hydrogenation Of alpha-Methylstyrene In A Bcsr: Effect Of Distribution Of Solids. In: Journal of Chemical Engineering of Japan, 19 (3). pp. 243-245.
Gupta, SC (1986) Axisymmetric melting of a long cylinder due to an infinite flux. In: Proceedings Mathematical Sciences, 95 (1). pp. 1-12.
Hajra, JP and Venkatraman, M and Ranganathan, S (1986) Thermodynamics and phase equilibria in the Co-Ni-Mn system. In: Transactions of the Indian Institute of Metals, 39 (3). pp. 211-218.
Hegde, MS and Ayyoob, M (1986) $O^{2-}$ and $O^{1-}$ types of oxygen species on Ni and barium-dosed Ni and Cu surfaces. In: Surface Science Letters, 173 (2-3). L635-L640.
Hiriyanna, KT and Ramakrishnan, TV (1986) Deoxyribonucleic acid replication time in Mycobacterium tuberculosis H37 Rv. In: Archives of Microbiology, 144 (2). pp. 105-109.
Ilangovan, S and Vasu, KI (1986) Electro-hydrometallurgy of chalcopyrites - VII. An appraisal of ferric chloride leaching process. In: Bulletin of Electrochemistry, 2 (6). pp. 611-613.
Indusekhar, H and Kumar, V (1986) Properties of iron related quenched-in levels in p-silicon. In: Physica Status Solidi A-Applied Research, 95 (1). 269 -278.
Indusekhar, H and Kumaran, V and Sengupta, D (1986) Investigation of Deep Defects Due to α-Particle Irradiation in n-Silicon. In: Physica Status Solidi A, 93 (2). pp. 645-653.
Iwase, M and Ichise, E and Jacob, KT (1986) Physical chemistry of mixed conducting zirconia ceramics. In: Sprechsaal, 119 (4). 280 -283.
Iyengar, KTS and Pandya, SK (1986) Application of the method of initial functions for the analysis of composite laminated plates. In: Archive of Applied Mechanics, 56 (6). pp. 407-416.
Iyengar, NGR and Umaretiya, JR (1986) Deflection analysis of hybrid laminated composite plates. In: Composite Structures, 5 (1). pp. 15-32.
Iyengar, NGR and Umaretiya, JR (1986) Transverse vibrations of hybrid laminated plates. In: Journal of Sound and Vibration, 104 (3). 425 -435.
Iyengar, RN (1986) A Nonlinear System Under Combined Periodic and Random Excitation. In: Journal of Statistical Physics, 44 (5-6). 907 -920.
Iyer, K Viswanathan and Patnaik, LM (1986) Performance Study of a Centralized Concurrency Control Algorithm for Distributed Database Systems using SIMULA. In: Computer Journal, 29 (2). 118 -126.
Jacob, KT and Hajra, JP (1986) Electromagnetic levitation study of sulfur in liquid iron, nickel, and iron-nickel alloys. In: Transactions of the Indian Institute of Metals, 39 (1). pp. 62-69.
Jacob, KT and Iyengar, GNK and Kim, WK (1986) Spinel-Corundum Phase Equilibria in the Systems Mn-Cr-Al-O and Co-Cr-Al-O at 1373 K. In: Journal of the American Ceramic Society, 69 (6). pp. 487-492.
Jacob, KT and Shukla, AK and Akila, Ramachandran and Kale, GM (1986) Gibbs' energy of formation of nickel orthosilicate $(Ni_2SiO_4)$. In: High Temperature Materials and Processes, 7 (2-3). pp. 141-148.
Jacob, KT (1986) Casting of titanium and titanium alloys. In: Defence Science Journal, 36 (2). pp. 121-141.
Jacob, KT (1986) Solubility and activity of oxygen in liquid nickel in equilibrium with α-Al2O3 and NiO.(1+x)Al2O3. In: Metallurgical and Materials Transactions B, 17 (4). pp. 763-770.
Jacob, KT and Hajra, JP (1986) Oxygen content of liquid cobalt in equilibrium with CoO.(1+x)Al2O3 and α-Al2O3. In: Zeitschrift für Metallkunde, 77. pp. 673-677.
Jacob, KT and Iyengar, GNK (1986) Thermodynamic study of Fe2O3-Fe2(SO4)3 equilibrium using an oxyanionic electrolyte (Na2SO4-I). In: Metallurgical and Materials Transactions B, 17 (2). pp. 323-329.
Jacob, KT and Iyengar, GNK and Srikanth, S (1986) Phase relations and activities in the Co-Ni-O system at 1373 K. In: Bulletin of Materials Science, 8 (1). pp. 71-79.
Jacob, KT and Kale, GM and Iyengar, GNK (1986) Oxygen potentials, Gibbs' energies and phase relations in the Cu-Cr-O system. In: Journal of Materials Science, 21 (8). pp. 2753-2758.
Jacob, KT and Kumar, BV (1986) Thermodynamic properties of Cr-Mo solid alloys. In: Zeitschrift für Metallkunde, 77. pp. 207-212.
Jacob, KT and Waseda, Y and Iwase, M (1986) Sensor for H2S or S2 based on alumina. In: Journal of the American Ceramic Society, 1 (3). pp. 264-270.
James, Jose and Rao, M Subba (1986) Reaction product of lime and silica from rice husk ash. In: Cement and Concrete Research, 16 (1). pp. 67-73.
James, Jose and Rao, M Subba (1986) Reactivity of rice husk ash. In: Cement and Concrete Research, 16 (3). pp. 296-302.
James, Jose and Rao, M Subba (1986) Silica from rice husk through thermal decomposition. In: Thermochimica Acta, 97 . pp. 329-336.
Jhans, H and Honig, JM and Rao, CNR (1986) Optical properties of reduced LiNbO3. In: Journal of Physics C: Solid State Physics, 19 (19). pp. 3649-3658.
Jiran, E and Jacob, KT (1986) Computation of thermodynamic properties of multicomponent solutions: extension of Toop model. In: Metallurgical and Materials Transactions A, 17 (6). pp. 1102-1104.
Joshi, NV (1986) Evolution of sex ratios in social hymenoptera: consequences of finite brood size. In: Journal of Genetics, 65 (1-2). 55 -64.
Juneja, JM and Abraham, KP and Iyengar, GNK (1986) Thermodynamic study of liquid magnesium-aluminium alloys by vapour pressure measurement using the boiling point method. In: Scripta Metallurgica, 20 (2). pp. 177-180.
Kaliannan, P and Vishveshwara, S and Rao, VSR (1986) Anomeric effect in carbohydrates-an ab initio study on extended model systems. In: Proceedings of Indian Academy of Sciences: Chemical Sciences, 96 (5). pp. 327-339.
Kamal, K and Durvasula, S (1986) Macromechanical behaviour of composite laminates. In: Composite Structures, 5 (4). 309 -318.
Kamal, K and Durvasula, S (1986) Some studies on free vibration of composite laminates. In: Composite Structures, 5 (3). pp. 177-202.
Kamath, P Vishnu and Hegde, MS and Rao, CNR (1986) A novel investigation of vapor-phase charge-transfer complexes of halogens with n-donors by electron energy loss spectroscopy. In: Journal of Physical Chemistry, 90 (10). pp. 1990-1992.
Karbelkar, SN (1986) On the axiomatic approach to the maximum entropy principle of inference. In: Pramana, 26 (4). 301 -310.
Karle, IL and Sukumar, M and Balaram, Padmanabhan (1986) Parallel packing of alpha-helices in crystals of the zervamicin IIA analog Boc-Trp-Ile-Ala-Aib-Ile-Val-Aib-Leu-Aib-Pro-OMe.2H2O. In: Proceedings Of The National Academy Of Sciences Of The United States Of America, 83 (24). pp. 9284-9288.
Kasturi, TR and Rajasekhar, B and Sivaramakrishnan, R and Reddy, Amruta P and Madhusudhan, G and Prasad, KB and Ganesha, * and Venkatesan, K and Row, Guru TN and Puranik, VG (1986) Reaction of tetrahydropyranyl ether of 1-bromomethyl-2-naphthol with tetrachlorocatechol-structures of novel products. In: Indian Journal of Chemistry, Section B: Organic Chemistry Including Medicinal Chemistry, 25B (11). pp. 1091-1092.
Katti, S and Narasimhamurthy, M and Krishna, G (1986) On the sufficient conditions for the equality of the unassignable polynomial and Davison's fixed polynomial of strongly connected systems. In: IEEE Transactions on Automatic Control, 31 (5). 443-445.
Khan, MI and Sastry, MV and Surolia, Avadhesha (1986) Thermodynamic and kinetic analysis of carbohydrate binding to the basic lectin from winged bean (Psophocarpus tetragonolobus). In: Journal of Biological Chemistry, 261 (7). pp. 3013-3019.
Khandke, Lakshmi and Gullapalli, Sharada and Patole, Milind S and Ramasarma, T (1986) Vanadate-stimulated NADH oxidation by xanthine oxidase: An intrinsic property. In: Archives of Biochemistry and Biophysics, 244 (2). pp. 742-749.
Kishore, K and Begum, A Sameena and Sankaralingam, S (1986) Changes in the calorimetric value and ignition temperature of composite solid propellants during ageing. In: Defence Science Journal, 36 (4). pp. 425-428.
Kishore, K and Dharumaraj, GV and Gayathri, V (1986) Effect of triethanolamine and benzaldehyde on the storage stability of polystyrene - ammonium perchlorate propellant. In: Defence Science Journal, 36 (4). pp. 381-387.
Kishore, K and Sankaralingam, S (1986) Effect of Pressure on Polymer Ignition. In: Journal of Fire Sciences, 4 (2). pp. 94-99.
Kishore, K and Verneke, VR Pai and Pitchaih, K and Sridhara, K (1986) Mechanism of catalytic activity of metal oxides and chromites on ammonium perchlorate pyrolysis. In: Fuel, 65 (8). pp. 1169-1171.
Kishore, Raghuvansh and Balaram, Padmanabhan (1986) Stereochemically Constrained Enkephalin Analogs Containing $\alpha$-Aminoisobutyric Acid and 1-Aminocyclopentane-1-Carboxylic Acid. In: NIDA Research Monograph, 69. pp. 312-331.
Kishore, K and Joseph, Mary and Dharumaraj, Verghese and Vijayshree, MN (1986) The effect of some catalysts on the curing of oxiranes with p-phenylenediamine. In: Journal of Applied Polymer Science, 31 (8). 2829 -2837.
Kishore, K and Mukundan, T (1986) Poly(styrene peroxide): an auto-combustible polymer fuel. In: Nature, 324 . pp. 130-131.
Kishore, K and Rajalingam, P (1986) The Bonding Ability and Bonding Site of a New Ferrocene Based Silicon Compound in Composite Solid Propellants. In: Journal of Polymer Science Part C: Polymer Letters, 24 (9). pp. 471-476.
Kishore, K and Vasanthakumari, R (1986) Crystallization and Melting Behavior of Isotactic Polybutene-1 at High Pressures. In: Journal of Polymer Science: Part A: Polymer Chemistry, 24 . pp. 2011-2019.
Kishore, K and Vasanthakumari, R (1986) Crystallization behaviour of polyethylene and i-polybutene-1 blends. In: Polymer, 27 (3). 337 -343.
Kishore, Kaushal and Mallick, Ishwardas M and Annakutty, Kunnappallil S (1986) Synthesis and spectroscopic data of 3,3',5,5'-tetrabromo-4,4'-diaminodiphenyldibromomethane. In: Journal of Chemical & Engineering Data, 31 (2). pp. 262-264.
Kishore, Kaushal and Pandey, Hrishi K (1986) Indian Sapota Tree Rubber. In: Journal of Polymer Science Part C: Polymer Letters, 24 (8). pp. 393-397.
Kishore, U Sudarsan (1986) Effect of load changes on dynamic coefficient of friction of crystalline and amorphous materials. In: Journal of Materials Science Letters, 5 (2). pp. 198-200.
Krishnamurthy, SS and Cameron, TS and Vincent, BR and Kumaravel, SS (1986) Assignment of phosphorus-31 chemical shifts to isomers of 2,3-dialkoxy-$\lambda^3$-diazadiphosphetidines. Crystal and molecular structure of trans-$[PhNP(OCH_2CF_3)]_2$. In: Zeitschrift fuer Naturforschung, Teil B: Anorganische Chemie, Organische Chemie, 41B (9). pp. 1067-1070.
Krishnan, Girish and Shivaprasad, AP (1986) Serial quaternary-to-analogue converters. In: International Journal of Electronics, 61 (4). 531 -538.
Krishnan, D and Patnaik, LM (1986) GEODERM: geometric shape design system using an entity-relationship model. In: Computer-Aided Design, 18 (4). pp. 207-218.
Kulkarni, GV (1986) Cluster approach to chemisorption and electrochemisorption -a critique. In: Indian Journal of Technology, 24 (8). pp. 457-464.
Kulkarni, Gopal R and Murthy, SK (1986) Induction of choline-ethanolamine kinase in chicken liver by 17$\beta$-estradiol. In: Indian Journal of Biochemistry & Biophysics, 23 (5). pp. 254-257.
Kumar, MP Subodh and Srikant, YN (1986) Graphical simulation of Petri Nets. In: Computers & Graphics, 10 (3). 225 -228.
Kumar, N (1986) Quantum-Ohmic Resistance Fluctuation In Disordered Conductors - An Invariant Imbedding Approach. In: Pramana, 27 (1-2). pp. 33-42.
Kumar, Sampath TS and Hegde, MS (1986) Electron Spectroscopic Study of Surface Segregation and Oxidation of Cu-In and Cu-Au Alloys. In: Applied Surface Science, 26 (2). pp. 219-229.
Kumar, Sampath TS and Rao, Kameswara L and Hegde, MS (1986) Stabilization of GeO Phase on n-Ge and Laser Irradiated Ge Films. In: Applied Surface Science, 27 (3). pp. 255-261.
Kumari, Durga B and Adiga, PR (1986) Correlation between riboflavin carrier protein induction and its $mRNA$ activity in estrogen stimulated chicken liver and oviduct. In: Journal of Biosciences, 10 (2). pp. 193-202.
Kumari, M (1986) Unsteady Incompressible Two-Dimensional and Axisymmetric Turbulent Boundary Layer Flows. In: Acta Mechanica, 59 (3-4). 251 -268.
Kumari, M and Nath, G (1986) Unsteady Self-similar Stagnation Point Boundary Layers for Micropolar Fluids. In: Indian Journal of Pure and Applied Mathematics, 17 (2). pp. 231-244.
Kumari, M and Nath, G (1986) Unsteady free Convection MHD Boundary Layer Flow Near a Three-Dimensional Stagnation Point. In: Indian Journal of Pure and Applied Mathematics, 17 (7). pp. 957-968.
Kurny, ASW and Rao, Mohan M and Mallya, RM (1986) Design and construction of a laboratory model ion-nitriding unit. In: Indian Journal of Technology, 24 (10). pp. 671-675.
Kurny, ASW and Mallya, RM and Rao, M Mohan (1986) A study on the nature of the compound layer formed during the ion nitriding of En40B steel. In: Materials Science and Engineering, 78 (1). pp. 95-100.
Kutty, TRN and Devi, Gomathi L (1986) Photoelectrochemical behavior of titanate perovskite solid solutions. In: Indian Journal of Technology, 24 (7). pp. 391-398.
Kutty, TRN and Murthy, SRN and Anantha, GV (1986) REE Geochemistry and Petrogenesis of Ultramafic Rocks of Chalk-Hills, Salem. In: Journal of The Geological Society of India, 28 (6). pp. 449-466.
Kutty, TRN (1986) Behaviour of acceptor states in semiconducting BaTiO3 and SrTiO3. In: Proceedings of the Indian Academy of Sciences - Chemical Sciences, 96 (6). 581 -597.
Kutty, TRN and Devi, Gomathi L and Murugaraj, P (1986) The Change in Oxidation State of Mn Ions in Semiconducting $BaTiO_3$ and $SrTiO_3$ Around the Phase Transition Temperatures. In: Materials Research Bulletin, 21 (9). pp. 1093-1102.
Lagisetty, JS and Das, PK and Kumar, R and Gandhi, KS (1986) Breakage of viscous and non-Newtonian drops in stirred dispersions. In: Chemical Engineering Science, 41 (1). pp. 65-72.
Lakshmanan, VS and Madhavan, Veni CE (1986) Binary decompositions and acyclic schemes. In: Lecture Notes in Computer Science, 241 . pp. 214-238.
Lalitha, R and Kalpana, GV and Ramasarma, T (1986) Inhibition of mevalonate kinase by disulfide compounds. In: Indian Journal of Biochemistry & Biophysics, 23 (4). pp. 204-207.
Lalitha, R and Ramasarma, T (1986) Mevalonate phosphorylation in lemon grass leaves. In: Indian Journal of Biochemistry & Biophysics, 23 (5). pp. 249-253.
Latha, PK and Brahmachari, Samir K (1986) B to Z transitions in DNA and their biological implications. In: Journal of Scientific & Industrial Research, 45 (12). pp. 521-533.
Lord, Eric A (1986) Gauge theory of a group of diffeomorphisms. II. The conformal and de Sitter groups. In: Journal of Mathematical Physics, 27 (12). pp. 3051-3054.
Lord, Eric A and Goswami, P (1986) Gauge theory of a group of diffeomorphisms. I. General principles. In: Journal of Mathematical Physics, 27 (9). 2415 -2422.
Madyastha, K Madhava and Krishnamachary, N (1986) Purification and partial characterization of microsomal cytochrome b555 from the higher plant Catharanthus-Roseus. In: Biochemical and Biophysical Research Communications, 136 (2). 570 -576.
Madyastha, K Madhava and Chadha, Anju (1986) Metabolism of 1,8-Cineole in Rat: Its Effects on Liver and Lung Microsomal Cytochrome P-450 Systems. In: Bulletin of Environmental Contamination and Toxicology, 37 (5). pp. 759-766.
Mahesh, GV and Ravindranathan, P and Patil, KC (1986) Preparation, characterization and thermal analysis of rare earth and uranyl hydrazinecarboxylate derivatives. In: Journal of Chemical Sciences, 97 (2). pp. 117-123.
Mahesh, GV and Patil, KC (1986) Thermal reactivity of metal acetate hydrazinates. In: Thermochimica Acta, 99 . pp. 153-158.
Majumder, Kumud and Brahmachari, Samir K and Sasisekharan, V (1986) Sequence dependence and role of 5'-phosphate in the B to Z transition. In: FEBS Letters, 198 (2). 240 -244.
Majumder, Kumud and Latha, PK and Brahmachari, Samir K (1986) Use of a volatile buffer at ambient temperature: Versatile approach to the purification of self-complementary synthetic deoxyoligonucleotides by reversed-phase high-performance liquid chromatography. In: Journal of Chromatography A, 355 (1). pp. 328-334.
Mangalgiri, PD and Dattaguru, B (1986) A large orthotropic plate with misfit pin under arbitrarily oriented biaxial loading. In: Composite Structures, 6 (4). 271 -281.
Manne, Veeraswamy and Kutty, Krishnan R and Pillarisetti, Subba Rao V (1986) Purification and properties of synephrinase from Arthrobacter synephrinum. In: Archives of Biochemistry and Biophysics, 248 (1). 324 -334.
Marmo, G and Mukunda, N (1986) Symmetries and constants of the motion in the Lagrangian formalism on $TQ$: beyond point transformations. In: Il Nuovo Cimento - B, 92 (1). pp. 1-12.
Mathialagan, N and Rao, Jagannadha A (1986) Gonadotropin-releasing hormone (GnRH) stimulates both secretion and synthesis of human chorionic gonadotropin (hCG) by first trimester human placental minces in vitro. In: Biochemistry International, 13 (5). pp. 757-765.
Mathialagan, N and Rao, Jagannadha A (1986) Gonadotropin-releasing hormone in first trimester human placenta: isolation, partial characterization and in vitro biosynthesis. In: Journal of Biosciences, 10 (4). pp. 429-441.
Mathialagan, N and Rao, Jagannadha A (1986) Plasma levels of gonadotropin releasing hormone during menstrual cycle of Macaca radiata. In: Journal of Biosciences, 10 (4). pp. 423-428.
Mishra, AK and Rangarajan, SK (1986) Theory of electron transfer processes - an overview of concepts (I). In: Indian Journal of Technology, 24 (11). pp. 727-736.
Mohanakrishnan, P and Easwaran, KRK (1986) Theoretical investigations of the two-bond proton-carbon-13 coupling constants. Angular variations of the couplings involving carboxyl carbon. In: Chemical Physics, 104 (3). 409 -414.
Mohanty, Bani P and Subramanian, S and Hajra, JP (1986) Electro slag refining of commercial aluminum. In: Transactions of the Indian Institute of Metals, 39 (6). pp. 646-647.
Mudakavi, JR and Ramaswamy, YS (1986) Extraction-spectrophotometric determination of traces of mercury(II) with bromide and Rhodamine 6G. In: Journal of the Indian Institute of Science, 66 (3). pp. 155-162.
Mukherjee, A and Venkatesha, YV (1986) Digital color reproduction on color television monitors. In: Computer Vision, Graphics, and Image Processing, 36 (1). 114 -132.
Mukhopadhyay, NK and Subbanna, GN and Ranganathan, S and Chattopadhyay, K (1986) An electron microscopic study of quasicrystals in a quaternary alloy : Mg32(Al, Zn, Cu)49. In: Scripta Metallurgica, 20 (4). pp. 525-528.
Mukhopadhyay, PK and Raychaudhuri, AK (1986) Easy to build four-terminal AC bridge. In: Journal of Physics E - Scientific Instruments, 19 (10). 792 -793.
Mukunda, N and Sudarshan, ECG (1986) The three faces of Maxwell's equations. In: Pramana, 27 (1-2). pp. 1-18.
Muniyappa, K and Radding, CM (1986) The homologous recombination system of phage lambda. Pairing activities of beta protein. In: Journal of Biological Chemistry, 261 (16). pp. 7472-7478.
Munjal, ML and Prasad, MG (1986) On plane-wave propagation in a uniform pipe in the presence of a mean flow and a temperature gradient. In: Journal of the Acoustical Society of America, 80 (5). 1501 -1506.
Munshi, SK and Murthy, MRN (1986) Strategies for collecting screen-less oscillation data. In: Journal of Applied Crystallography, 19 . pp. 61-62.
Murali, N and Chandrasekhar, K and Kumar, Anil (1986) Use of $45^o$ Pulse Pair as a Filter for Pure-Phase Two-Dimensional NMR Spectroscopy. In: Journal of Magnetic Resonance, 70 (1). pp. 153-156.
Murali, N and Kumar, Ami (1986) Multiple-quantum artifacts in single-quantum two-dimensional correlated NMR spectra of strongly coupled spins. In: Chemical Physics Letters, 128 (1). pp. 58-61.
Muralidharan, K and Shaila, MS and Gadagkar, Raghavendra (1986) Evidence for Multiple mating in the primitively eusocial wasp Ropalidia Marginata (Lep.) (Hymenoptera : Vespidae). In: Journal of Genetics, 65 (3). pp. 153-158.
Murthy, MS and Raghavendrachar, P and Sriram, SV (1986) Thermal decomposition of doped calcium hydroxide for chemical energy storage. In: Solar Energy, 36 (1). pp. 53-62.
Murthy, VSR and Kishore, * and Seshan, S (1986) Morphology Of Flake, Ductile And Compacted Graphite. In: Journal of Metals, 38 (12). pp. 24-28.
Murthy, GS and Moudgal, NR (1986) Use of epoxysepharose for protein immobilisation. In: Journal of Biosciences, 10 (3). pp. 351-358.
Murthy, Kumari and Ramesh, Usha and Bhat, SV (1986) EPR Investigations of Phase Transitions in Lithium Potassium Sulfate: $LiKSO_4$. In: Journal of Physics and Chemistry of Solids, 47 (9). pp. 927-931.
Murugaraj, P and Kutty, TRN and Rao, Subba M (1986) Diffuse phase transformations in neodymium-doped $BaTiO_3$ ceramics. In: Journal of Materials Science, 21 (10). pp. 3521-3527.
Mytri, VD and Shivaprasad, AP (1986) Constant factor incremental delta modulator. In: International Journal of Electronics, 61 (1). pp. 129-135.
Mytri, VD and Shivaprasad, AP (1986) Improving the dynamic range of a CVSD coder. In: Electronics Letters, 22 (8). 429 -430.
Mytri, VD and Shivaprasad, AP (1986) Hybrid constant factor incremental delta modulators. In: IEE Proceedings F: Radar & Signal Processing, 133 (6). 522 -525.
Nagaraj, TS and Murthy, BR Srinivasa (1986) Prediction of Compressibility of Overconsolidated Uncemented Soils. In: Journal of Geotechnical and Geoenvironmental Engineering, 112 (4). pp. 484-488.
Naik, Hemamalini and Subramanyam, SV (1986) Non-ohmic conduction and electrical switching under pressure of the charge transfer complex o-tolidine-iodine. In: Pramana, 26 (1). pp. 61-66.
Nair, Nandini and Ramakrishna, CK (1986) A comparative study of the effects of administration of diethylhexyl phthalate on hepatic mitochondria of the rat and the mouse. In: Indian Journal of Biochemistry & Biophysics, 23 (5). pp. 270-273.
Nair, Nandini and Kurup, Ramakrishna CK (1986) Investigations on the Mechanism of the Hypocholesterolemic Action of Diethylhexyl Phthalate in Rats. In: Biochemical Pharmacology, 35 (20). pp. 3441-3447.
Nandy, SK and Patnaik, LM (1986) Linear time geometrical design rule checker based on quadtree representation of VLSI mask layouts. In: Computer-Aided Design, 18 (7). 380 -388.
Nanjundaswamy, KS and Murthy, MN Sankarshana (1986) Low-temperature stabilization of pure Ni(II)O. In: Materials Chemistry and Physics, 15 (1). pp. 37-44.
Narahari, Y and Viswanadham, N (1986) On the invariants of coloured Petri Nets. In: Lecture Notes in Computer Science, 222 . 330 -345.
Narasimhamurthy, N and Samuelson, AG (1986) Synthesis of Aryl Orthocarbonates. In: Tetrahedron Letters, 27 (8). pp. 991-992.
Narasimhamurthy, N and Samuelson, AG (1986) Thiocarbonyl to carbonyl group transformation using CuCl and NaOH. In: Tetrahedron Letters, 27 (33). pp. 3911-3912.
Narasimhan, S and Vithayathil, PJ (1986) A new reaction of o-benzoquinone with N-acetyl-DL-tryptophan (a 3-substituted indole) and characterization of the product. In: Indian Journal of Biochemistry & Biophysics, 23 (4). pp. 215-219.
Narasimhan, Lakshmi V and Ramachandra, JK and Anvekar, Dinesh K (1986) Design and evaluation of a dual-microcomputer shared memory system with a shared I/O bus. In: Microprocessors and Microsystems, 10 (1). pp. 3-10.
Narasu, Lakshmi M and Gopinathan, KP (1986) Purification of Larvicidal Protein from Bacillus Sphaericus 1593. In: Biochemical and Biophysical Research Communications, 141 (2). pp. 756-761.
Narayana Rao, K and Munjal, ML (1986) Noise reduction with perforated three-duct muffler components. In: Sadhana : Academy Proceedings in Engineering Sciences, 9 (4). pp. 255-269.
Narayanan, AS and Devanathan, R (1986) Note on the Similarity Solutions of Unsteady Jets in Rotating Fluids. In: Acta Mechanica, 60 (3-4). pp. 241-250.
Natarajan, KA and Upadhyaya, Ramesh (1986) Flocculation studies on iron ore fines using synthetic polymeric flocculants. In: Transactions of the Indian Institute of Metals, 39 (6). pp. 627-636.
Nayyar, AH and Scadron, MD and Sinha, KP (1986) Universal jellium BCS picture for Peierls and spin-Peierls phase transitions. In: Physics Letters A, 113 (8). 442 -444.
Padma, Doddaballapur K (1986) A gravimetric procedure for the determination of wet precipitated sulphur, dissolved sulphur, soluble sulphides and hydrogen sulphide. In: Talanta, 33 (6). pp. 550-552.
Padmanabhan, Kaillathe and Dopp, Dietrich and Venkatesan, Kailasam and Ramamurthy, Vaidyanathan (1986) Solid-state Photochemistry of Nitro Compounds: Structure-Reactivity Correlations. In: Journal of the Chemical Society, Perkin Transactions 2 (24). 897 -906.
Pandit, Shashidhara S and Jacob, KT (1986) Vanadium-oxygen equilibrium in liquid cobalt at 1873 K. In: Transactions of the Indian Institute of Metals, 39 (6). pp. 556-561.
Pandita, TK (1986) Evaluation of Thimet 10-G for mutagenicity by 4 different genetic systems. In: Mutation Research, 171 (2-3). pp. 131-138.
Papavinasam, E and Natarajan, S and Shivaprakash, NC (1986) Reinvestigation of the crystal-structure of beta-alanine. In: International Journal of Peptide & Protein Research, 28 (5). pp. 525-528.
Parthasarathy, G and Gopal, ESR and Krishnamurthy, HR and Pandit, R and Sekhar, JA (1986) Quasi Crystalline Al-Mn Alloys: Pressure Induced Crystallization and Structural Studies. In: Current Science, 55 (11). pp. 517-520.
Parthasarathy, G and Asokan, S and Gopal, ESR (1986) Pressure induced polymorphous crystallization in bulk Ge20Te80 glass. In: Physica A: Statistical Mechanics and its Applications, 139 (1-3). pp. 266-268.
Parthasarathy, G and Ramakrishna, R and Asokan, S and Gopal, ESR (1986) Effect of pressure on the electrical resistivity of bulk amorphous Al23Te77 alloy under various stages of crystallization. In: Journal of Materials Science Letters, 5 (8). pp. 809-811.
Patel, Mukul N and Gopinathan, Karumathil P (1986) Lysozyme-Sensitive Bioemulsifier for Immiscible Organophosphorus Pesticides. In: Applied and Environmental Microbiology, 52 (5). pp. 1224-1226.
Patil, KC (1986) Metal-hydrazine complexes as precursors to oxide materials. In: Proceedings of the Indian Academy of Sciences - Chemical Sciences, 96 (6). pp. 459-464.
Patnaik, LM and Basu, JK (1986) Two Tools for Interprocess Communication in Distributed Data-Flow Systems. In: Computer Journal, 29 (6). pp. 506-521.
Patnaik, LM and Govindarajan, R and Ramadoss, NS (1986) Design and Performance Evaluation of EXMAN: An EXtended MANchester Data Flow Computer. In: IEEE Transactions on Computers, 35 (3). pp. 229-244.
Patnaik, LM and Shenoy, RS and Krishnan, D (1986) Set theoretic operations on polygons using the scan-grid approach. In: Computer-Aided Design, 18 (5). pp. 275-279.
Patnaik, LM and Sundararaman, K (1986) Performance evaluation of a distributed concurrency control algorithm. In: Computers & Electrical Engineering, 12 (1-2). pp. 73-88.
Patole, Milind S and Kurup, Ramakrishna CK and Ramasarma, T (1986) Reduction of vanadate by a microsomal redox system. In: Biochemical and Biophysical Research Communications, 141 (1). pp. 171-175.
Paul, PKC and Sukumar, M and Bardi, R and Piazzesi, AM and Valle, G and Toniolo, C and Balaram, P (1986) Stereochemically Constrained Peptides. Theoretical and Experimental Studies on the Conformations of Peptides Containing 1-Aminocyclohexanecarboxylic Acid. In: Journal of the American Chemical Society, 108 (20). pp. 6363-6370.
Pillai, PRSaseendran and Muralidhara, MK and Naidu, PS (1986) Wideband acoustic absorption characteristics of rubberized coir for underwater applications. In: Ultrasonics, 24 (6). 363 -367.
Poojary, M Damodara and Manohar, Hattikudur (1986) Interaction of Metal Ions with 2'-Deoxyribonucleotides. Crystal and Molecular Structure of a Cobalt(II) Complex with 2'-Deoxyinosine 5'-Monophosphate. In: Dalton Transactions (2). pp. 309-312.
Prabhakaran, K and Sen, P and Rao, CNR (1986) Hydroxylation of oxygen-covered Cu(110) and Zn(0001) surfaces by interaction with CH3OH, (CH3)2NH, H2S, HCl and other proton donor molecules. In: Surface Science, 169 (2-3). L301-L306.
Prabhakaran, K and Sen, P and Rao, CNR (1986) Studies of Molecular Oxygen Adsorbed on Cu Surfaces. In: Surface Science, 177 (2). L971-L977.
Prabhu, R and Rao, GR and Jamaluddin, M and Ramakrishnan, T (1986) N-[2-Naphthyl]-glycine hydrazide, a potent inhibitor of DNA-dependent RNA polymerase of Mycobacterium tuberculosis H37Rv. In: Journal of Biosciences, 10 (1). pp. 163-166.
Prasad, Durga M and Sathyanarayana, S (1986) Electrode Kinetics of a Nickel-Cadmium cell and Failure-mode prediction - Constant voltage Charging. In: Indian Journal of Technology, 24 (7). pp. 361-371.
Prasad, S Narendra and Hegde, Malati (1986) Phenology and seasonality in the tropical deciduous forest of Bandipur, South India. In: Proceedings of the Indian Academy of Sciences - Plant Sciences, 96 (2). 121-133.
Prasad, GL and Adiga, PR (1986) Decarboxylation of arginine and ornithine by arginine decarboxylase purified from cucumber (Cucumis sativus) seedlings. In: Journal of Biosciences, 10 (2). pp. 203-213.
Prasad, GL and Adiga, PR (1986) Purification and characterization of putrescine synthase from cucumber seedlings. A multifunctional enzyme involved in putrescine biosynthesis. In: Journal of Biosciences, 10 (3). pp. 373-391.
Prasad, Madhu and Rao, Radhika Rani and Chaudhuri, Ray AK (1986) A versatile AC mutual inductance bridge. In: Journal of Physics E - Scientific Instruments, 19 (12). pp. 1013-1016.
Prasad, Ramesh GK and Chattopadhyay, K and Rao, Mohan M (1986) Structure-property correlation in dual-phase copper-tin alloys. In: Journal of Materials Science Letters, 5 (10). pp. 991-994.
Ramesham, R and Sathyanarayana, S (1986) Kinetics of corrosion of passive metals. Part III: applicability of the new technique of transient Tafel polarization and its decay. In: Indian Journal of Technology, 24 (8). pp. 536-544.
Subrahmanya, RS (1986) Kinetic currents in polarography. Part II. Spherical diffusion to the DME for pseudo-first order catalyzed processes and modification of Ilkovic equation. In: Journal of the Electrochemical Society of India, 35 (3). pp. 157-162.
Radha, TS and Ramprasad, BS (1986) Speckle based fiber optic sensor for the measurement of the change in refractive index in liquid mixtures. In: Journal of the Electrochemical Society of India, 35 (2). pp. 135-136.
Schleyer, Paul von Rague and Kaufmann, Elmar and Kos, Alexander J and Mayr, Herbert and Chandrasekhar, Jayaraman (1986) Stabilization of the Alleged 'Bishomoaromatic' Bicyclo[3.2.1]octa-2,6-dienyl Anion by Counterion Interactions and by Hyperconjugation. In: Journal of the Chemical Society - Series Chemical Communications (21). pp. 1583-1585.
Rajasekar, N (1986) Synthesis and Spectroscopic Investigations of Complexes of Lanthanide Nitrates with Isoquinoline-2-oxide. In: Synthesis and Reactivity in Inorganic, Metal-Organic, and Nano-Metal Chemistry, 16 (8). pp. 1109-1119.
Rajashekara, KS and Joseph, Vithyathil and Rajagopalan, V (1986) Protection and Switching-Aid Networks for Transistor Bridge Inverters. In: IEEE Transactions on Industrial Electronics, 33 (2). pp. 185-192.
Raju, Ramaswamy and Jacob, Mathai T (1986) The RNA Binding Subset of Adenosine Antibodies. In: Immunological Investigations, 15 (5). pp. 405-417.
Rajumon, MK and Hegde, MS and Rao, CNR (1986) Electronic structure and oxidation of aluminium-modified Ni and Cu surfaces. In: Solid State Communications, 60 (3). pp. 267-270.
Ram, Mohan RA and Ganapathi, L and Ganguly, P and Rao, CNR (1986) Evolution of three-dimensional character across the $La_{n+1}Ni_nO_{3n+1}$ homologous series with increase in n. In: Journal of Solid State Chemistry, 63 (2). pp. 139-147.
Ram, RA Mohan and Gopalakrishnan, Jagannatha (1986) Mixed valency in the high-temperature phases of transition metal molybdates,AMoO4 (A=Fe, Co, Ni). In: Proceedings of the Indian Academy of Sciences - Chemical Sciences, 96 (5). 291 -296.
Ramadurai, S (1986) Supernova induced star formation. In: Bulletin of the Astronomical Society of India, 14 (4). pp. 207-210.
Ramadurai, S and Thejappa, G (1986) Preferential acceleration of $^3He$ by lower hybrid waves. In: Advances in Space Research, 6 (6). pp. 281-284.
Ramakrishnan, V and Dhas, Arul G and Narayanan, PS (1986) Raman spectra of $NaLa{(MoO_4)}_2$ single crystal. In: Journal of Raman Spectroscopy, 17 (3). pp. 273-275.
Ramamurthy, TS and Krishnamurthy, T and Narayana, K Badari and Vijayakuma, K and Dattaguru, B (1986) Modified crack closure integral method with quarter point elements. In: Mechanics Research Communications, 13 (4). 179 -186.
Ramamurthy, V (1986) Organic Photochemistry in Organized Media. In: Tetrahedron, 42 (21). pp. 5753-5839.
Ramaraj, N and Rajaram, R and Parthasarathy, K (1986) A new analytical approach to optimize a generation schedule. In: Electric Power Systems Research, 11 (2). pp. 147-152.
Ramasesha, S (1986) A Diagrammatic Valence Bond Method for Configuration Interaction Calculations in Atoms and Molecules. In: Chemical Physics Letters, 130 (6). pp. 522-525.
Ramasesha, S (1986) Electron-electron interactions in polyacetylene. In: Proceedings of the Indian Academy of Sciences - Chemical Sciences, 96 (6). pp. 509-521.
Ramesh, R and Ravikumar, K and Kishore, * (1986) Wear of En 31 steel. In: Transactions of the Indian Institute of Metals, 39 (4). pp. 329-334.
Ramesh, N and Shouche, Yogesh S and Brahmachari, Samir K (1986) Recognition of B and Z forms of DNA by Escherichia coli DNA polymerase I. In: Journal of Molecular Biology, 190 (4). pp. 635-638.
Ramesham, R and Sathyanarayana, S (1986) Kinetics of corrosion of passive metals. Part II: Failure of linear polarization technique. In: Indian Journal of Technology, 24 (8). pp. 529-535.
Rangarajan, SK (1986) High Amplitude periodic signal theory: part II - nonlinear analysis and phenomenological Decoupling. In: Indian Journal of Technology, 24 (7). pp. 352-356.
Ranjith, K and Narasimhan, R (1986) Asymptotic and finite element analyses of mode III dynamic crack growth at a ductile-brittle interface. In: International Journal of Fracture, 76 (1). pp. 61-77.
Rao, CNR and Roberts, MW and Weightman, P (1986) Studies of Solids and Surfaces by Auger Electron Spectroscopy. In: Philosophical Transactions of the Royal Society of London - Series A: Mathematical and Physical Sciences, 318 (1541). 37 -50.
Rao, Jagannadha A and Moudgal, NR and Li, Choh Hao (1986) $\beta$-Endorphin: intranasal administration increases the serum prolactin level in monkey. In: International Journal of Peptide & Protein Research, 28 (5). pp. 546-548.
Rao, Krishna GS and Sarma, Prasad MS (1986) Studies in terpenoids. Part LXV. Synthesis of 4-(2-methoxy-5-methylphenyl)-6-methylheptan-2-one, a secosesquiterpene structural analog of sesquichamaenol and himasecolone. In: Indian Journal of Chemistry, Section B: Organic Chemistry Including Medicinal Chemistry, 25B (7). pp. 752-753.
Rao, M and Yelloji, K and Natarajan, KA (1986) Electrochemical behavior of sulfide minerals under open-circuit conditions. In: Transactions of the Indian Institute of Metals, 39 (6). pp. 582-591.
Rao, Subba GSR and Vijaybhaskar, K and Srikrishna, A (1986) Regioselective synthesis of 1-methylbicyclo[2.2.2]octene derivatives. In: Indian Journal of Chemistry, Section B: Organic Chemistry Including Medicinal Chemistry, 25B (8). pp. 785-786.
Rao, BG and Rao, KJ (1986) Electron spin resonance studies of d5 ions (Fe3+ and Mn2+) in lead oxide-lead halide glasses. In: Chemical Physics, 102 (1-2). pp. 121-132.
Rao, BG and Rao, KJ (1986) The study of oxidation state of manganese in lead oxyhalide glasses by optical spectroscopy. In: Journal of Materials Science Letters, 5 (2). pp. 141-143.
Rao, BG and Vasanthacharya, NY and Rao, KJ (1986) Magnetic susceptibility studies of lead oxyhalide glasses containing transition metal oxides. In: Proceedings of the Indian Academy of Sciences - Chemical Sciences, 96 (5). pp. 383-388.
Rao, CNR and Ganguly, P (1986) A New Criterion for the Metallicity of Elements. In: Solid State Communications, 57 (1). pp. 5-6.
Rao, CNR and Gopalakrishnan, J and Vidyasagar, K and Ganguli, AK and Ramanan, A and Ganapathi, L (1986) Novel metal oxides prepared by ingenious synthetic routes. In: Journal of Materials Research, 1 (2). pp. 280-294.
Rao, CNR and Rajumon, MK and Prabhakaran, K and Hegde, MS and Kamath, PV (1986) Precursor Species of Carbon Monoxide Before its Dissociation on Aluminium-Promoted Ni and Cu Surfaces. In: Chemical Physics Letters, 129 (2). pp. 130-134.
Rao, Kameswara L and Harshavardhan, Solomon K and Selvarajan, A and Hegde, MS (1986) Novel laser induced image storage by chemical modification of surfaces in in situ textured amorphous Ge films. In: Applied Physics Letters, 49 (13). pp. 826-828.
Rao, Kusuma G (1986) Sensible heat fluxes during the active and break phases of the southwest monsoon over the Indian region. In: Boundary-Layer Meteorology, 36 (3). 283 -294.
Rao, R Vittal and Sukavanam, N (1986) Kac-Akhiezer formula for normal integral operators. In: Journal of Mathematical Analysis and Applications, 114 (2). pp. 458-467.
Rao, R Vittal and Sukavanam, N (1986) Spectral analysis of finite section normal integral operators. In: Journal of Mathematical Analysis and Applications, 115 (1). pp. 23-45.
Rao, Ramachandra A and Deshikachar, KS (1986) MHD Oscillatory Flow of Blood Through Channels of Variable Cross Section. In: International Journal of Engineering Science, 24 (10). pp. 1615-1628.
Rao, Ranga G and Prabhakaran, K and Rao, CNR (1986) Nitrogen Adsorbed on Clean and Promoted Ni Surfaces. In: Surface Science, 176 (1-2). L835-L840.
Rao, S P Sudhakara and Varughese, KI and Manohar, H (1986) Ternary metal complexes of anionic and neutral pyridoxine (vitamin B6) with 2,2'-bipyridine. Syntheses and x-ray structures of (pyridoxinato)bis(2,2'-bipyridyl)cobalt(III) perchlorate and chloro(2,2'-bipyridyl)(pyridoxine)copper(II) perchlorate hydrate. In: Inorganic Chemistry, 25 (6). pp. 734-740.
Rao, SP Sudhakara and Manohar, Hattikudur and Aoki, Katsuyuki and Yamazaki, Hiroshi and Bau, Robert (1986) Novel Oxidation of Pyridoxal in Ternary Metal Complexes. An X-Ray Study of the Products. In: Journal of the Chemical Society - Series Chemical Communications (1). pp. 4-6.
Rao, Sankar M (1986) An overview of climate models. In: Proceedings of the Indian Academy of Sciences - Earth and Planetary Sciences, 95 (3). pp. 447-484.
Rao, Sankara K (1986) Plantlets from somatic callus tissue of the East Indian Rosewood (Dalbergia latifolia Roxb.). In: Plant Cell Reports, 5 (3). pp. 199-201.
Rao, Shashidhar N and Sasisekharan, V (1986) Conformations of Dinucleoside Monophosphates in Relation to Duplex DNA Structures. In: Biopolymers, 25 (1). pp. 17-30.
Ravikumar, C and Balakrishnan, N (1986) A low cost microprocessor based multiple pressure measuring system. In: Journal of Microcomputer Applications, 9 . pp. 319-326.
Ravindram, M and Kalvinskas, John J (1986) Coal desulfurization in a fluidized bed reactor. In: Environmental Progress, 5 (4). pp. 264-272.
Ravindranath, N and Moudgal, NR (1986) Antifertility effect of tamoxifen as tested in the female bonnet monkey (Macaca radiata). In: Journal of Biosciences, 10 (1). 167 -170.
Ravindranath, NH and Chanakya, HN (1986) Biomass based energy system for a South Indian village. In: Biomass, 9 (3). 215 -233.
Ravindranathan, P and Patil, KC (1986) A one-step process for the preparation of γ-Fe2O3. In: Journal of Materials Science Letters, 5 (2). pp. 221-222.
Raviprasad, K and Tenwick, M and Davies, HA and Chattopadhyay, K (1986) The Nature of Ordered Structures in Melt Spun Iron-Silicon Alloys. In: Scripta Metallurgica, 20 (9). pp. 1265-1270.
Ravishankar, Malavalli K and Pappu, Sastry V (1986) Fiber-optic sensor-based refractometer-cum-liquid level indicator. In: Applied Optics, 25 (4). pp. 480-482.
Ray, AK and Dwarakadasa, ES and Raman, KS (1986) Effect of porosity and slag inclusions on the fatigue fracture behaviour of 15CDV6 butt welds. In: Journal of Materials Science Letters, 5 (8). pp. 765-768.
Ray, Arabinda and Mohammad, SK Noor and Kulkarni, GV (1986) Reinvestigation of the antifungal activities of some aromatic N-oxides. In: Proceedings of the Indian Academy of Sciences - Chemical Sciences, 96 (1-2). pp. 67-71.
Raychaudhuri, AK (1986) Low temperature Properties of glasses-unsolved problems. In: Proceedings of the Indian Academy of Sciences - Chemical Sciences, 96 (6). pp. 559-564.
Reddy, Bramham A and Gopinathan, KP (1986) Existence of single-strand interruptions in the genomic DNA of mycobacteriophage I3. In: FEMS Microbiology Letters, 37 (2). pp. 163-167.
Reddy, NM and Reddy, KPJ and Prasad, Krishna MR (1986) Theoretical Gain Optimization in $CO_2-N_2-H_2$ Gasdynamic Lasers with Two-Dimensional Wedge Nozzles. In: AIAA Journal, 24 (12). pp. 2045-2046.
Reddy, Bramham A and Gopinathan, Karumathil P (1986) Presence of random single-strand gaps in mycobacteriophage 13 DNA. In: Gene, 44 (2-3). pp. 227-234.
Reddy, KPJ (1986) Stability criteria for single pulse solutions of cw passive mode-locked lasers. In: Optics Communications, 56 (6). 433 -434.
Rukmani, K and Ramakrishna, J (1986) Chlorine-35 Nuclear Quadrupole Resonance Studies in 2,3- and 3,5-Dichloroanisoles. In: Journal of the chemical society-faraday transactions II, 82 (3). pp. 291-298.
Sachdev, PL and Philip, V (1986) Invariance group properties and exact solutions of equations describing time-dependent free surface flows under gravity. In: Quarterly of Applied Mathematics, 43 (4). 465 -482.
Sachdev, PL and Tikekar, VG and Nair, KRC (1986) Evolution and decay of spherical and cylindrical N waves. In: Journal of Fluid Mechanics, 172 . pp. 347-371.
Sachdev, PL and Nair, KRC and Tikekar, VG (1986) Generalized Burgers equations and Euler-Painlevé transcendents. I. In: Journal of Mathematical Physics, 27 (6). pp. 1506-1522.
Sadhu, C and Dutta, S and Gopinathan, KP (1986) Presence of mycobacteriophage 13-1ike DNA sequences in the genome of its host Mycobacterium smegmatis. In: Archives of Microbiology, 146 (2). pp. 166-169.
Sahal, Dinkar and Balaram, P (1986) Peptide Models of Electrostatic Interactions in Proteins: $NMR$ Studies on Two $\beta$-Turn Tetrapeptides Containing $Asp-His$ and $Asp-Lys$ Salt Bridges. In: Biochemistry, 25 (20). pp. 6004-6013.
Sampath, DS and Balaram, P (1986) Rapid procedure for the resolution of racemic gossypol. In: Journal of the Chemical Society, Chemical Communications (9). pp. 649-650.
Sampath, DS and Balaram, Padmanabhan (1986) Resolution of racemic gossypol and interaction of individual enantiomers with serum albumins and model peptides. In: Biochimica et Biophysica Acta, 882 (2). pp. 183-186.
Samuel, Manoharan T and Madhava, Madyastha K (1986) A novel conversion of narcotine into a macrolide. In: Indian Journal of Chemistry, Section B: Organic Chemistry Including Medicinal Chemistry, 25 (3). p. 227.
Sangunni, KS and Ravi, R and Bhat, HL and Narayanan, PS (1986) Switching Process in Ferroelectric Triglycine Selenate. In: Japanese Journal of Applied Physics, 25 (3). 380 -382.
Sankar, G and Vasudevan, S and Rao, CNR (1986) Analysis of EXAFS data of multiphasic heterogeneous catalysts. In: Chemical Physics Letters, 127 (6). 620 -626.
Sankar, G and Vasudevan, S and Rao, CNR (1986) An EXAFS investigation of $Cu-ZnO$ methanol synthesis catalysts. In: Journal of Chemical Physics, 85 (4). pp. 2291-2299.
Sankar, G and Vasudevan, S and Rao, CNR (1986) Extended X-ray Absorption Fine Structure Studies of Bimetallic $Cu-Ni/\gamma-Al_2O_3$ Catalysts. In: Journal of Physical Chemistry, 90 (21). pp. 5325-5328.
Sankar, Gopinathan and Rao, Ramachandra CN (1986) Nature of Ni and Cu Species in Reduced Bimetallic $Ni-Cu/Al_20_3$ Catalysts. In: Angewandte Chemie International Edition in English, 25 (8). pp. 753-754.
Sarma, Prasad MS and Rao, Krishna GS (1986) Studies in terpenoids. Part LXVIII. Synthesis of sesquiterpenic dimethylisobutyl- and di- and trimethylisopropylindans. In: Indian Journal of Chemistry, Section B: Organic Chemistry Including Medicinal Chemistry, 25B (9). pp. 951-952.
Sarma, Prasad MS and Rao, Krishna GS (1986) Studies in terpenoids. Part LXVII. Sesquiterpenic indans of bicyclonidorellane skeleton: synthesis of 1,6-diisopropylindan and ethyl 2-(6-isopropyl-1-indanyl)propionate. In: Indian Journal of ChemistrySection B: Organic Chemistry Including Medicinal Chemistry, 25B (9). pp. 953-954.
Sarma, BS and Ramakrishna, J (1986) Proton magnetic relaxation in (TMA)2HgBr4 and (TMA)2HgI4. In: Pramana, 26 (3). pp. 263-268.
Sarma, Kandula VN and Sridharan, K and Rao, A Achutha and Sarma, CSS (1986) Computer model for vedavati ground water basin. Part 1. Well field model. In: Sadhana : Academy Proceedings in Engineering Sciences, 9 . pp. 31-42.
Sarode, PR (1986) EXAFS in Niobium Dichalcogenides Intercalated With First-Row Transition Metals. In: Physica Status Solidi A: Applied Research, 98 (2). pp. 391-397.
Sasisekharan, V (1986) A new method for generation of quasi-periodic structures with n-fold axes: Application to five and seven folds. In: Pramana, 26 (3). L283-L293.
Sastri, P and Lahiri, AK (1986) "Central Atoms" models for ternary silicate and alumino-silicate melts. In: Metallurgical and Materials Transactions B, Process Metallurgy and Materials Processing Science, 17 (1). pp. 105-110.
Sastry, DH and Murthy, Gunturi S (1986) Impression creep behavior of metals at high temperatures. In: Transactions of the Indian Institute of Metals, 39 (4). pp. 369-379.
Sastry, Krishna MV and Surolia, A (1986) Intrinsic Fluorescence Studies on Saccharide Binding to Artocarpus Integrifolia Lectin. In: Bioscience Reports, 6 (10). pp. 853-860.
Sastry, MV and Banerjee, Probal and Patanjali, Sankhavaram R and Swamy, Joginadha M and Swarnalatha, GV and Surolia, Avadhesha (1986) Analysis of Saccharide Binding to Artocarpus integrifolia Lectin Reveals Specific Recognition of T-antigen ($\beta$-D-Gal(1$\rightarrow$3)GalNAc). In: Journal of Biological Chemistry, 261 (25). pp. 11726-11733.
Sathiakumar, S and Vithayathil, Joseph and Biswas, SK (1986) Microprocessor-Based Field-Oriented Control of A CSI-Fed Induction Motor Drive. In: IEEE Transactions on Industrial Electronics, IE-33 (1). pp. 39-43.
Sathish, S and Chaterjee, S and Awasthi, ON and Gopal, ESR (1986) Electron-electron scattering and ultrasonic attenuation in potassium. In: Journal of Low Temperature Physics, 63 (5-6). pp. 423-429.
Sathyanarayana, S and Ramesham, R (1986) Kinetics of Corrosion of passive metals - part I: Concepts and Theory. In: Indian Journal of Technology, 24 (7). pp. 447-455.
Sathyaprakash, BS and Goswami, P and Sinha, KP (1986) Singularity-free cosmology: A simple model. In: Physical Review D – Particles and Fields, 33 (8). pp. 2196-2200.
Satyabhama, S and Seelan, R Sathiagnana and Padmanaban, G (1986) Expression of Cytochrome P-450 and Albumin Genes in Rat Liver: Effect of Xenobiotics. In: Biochemistry, 25 (16). pp. 4508-4512.
Schleyer, Paul von Rague and Spitznagel, Gunther H and Chandrasekhar, Jayaraman (1986) The ethyl, 1- and 2-propyl, and other simple alkyl carbanions do not exist. In: Tetrahedron Letters, 27 (37). pp. 4411-4414.
Sekhar, JA and Rajasekharan, T and Rao, Rama P and Parthasarathy, G and Ramkumar, S and Gopal, ESR and Lakshmi, CS and Mallya, RM (1986) Electron and x-ray diffraction studies on Al86Fe14, Al82Fe18 and Al75Fe25 quasicrystals. In: Pramana, 27 (1-2). pp. 267-273.
Sekharudu, Y Chandra and Biswas, Margaret and Rao, VSR (1986) Complex carbohydrates: 2. The modes of binding of complex carbohydrates to concanavalin A - a computer modelling approach. In: International Journal of Biological Macromolecules, 8 (1). pp. 9-19.
Sen, P and Rao, CNR (1986) An EELS study of water, methanol, formaldehyde and formic acid adsorbed on clean and oxygen-covered zinc(0001) surfaces. In: Surface Science, 172 (2). pp. 269-280.
Sen, P and Rao, CNR and Thomas, JM (1986) Structure of Alkali Metal Ionic Clusters. In: Journal of Molecular Structure, 146 . pp. 171-174.
Seralathan, M and Rangarajan, SK (1986) Fluctuation phenomena in electrochemistry part I. The formalism. In: Journal of Electroanalytical Chemistry, 208 (1). pp. 13-28.
Seralathan, M and Rangarajan, SK (1986) Fluctuation phenomena in electrochemistry part II. Modelling the noise sources. In: Journal of Electroanalytical Chemistry, 208 (1). pp. 29-56.
Seshan, S and Vijayalakshmi, D (1986) Heat pipes-concepts, materials and applications. In: Energy Conversion and Management, 26 (1). 1 -9.
Shukla, AK and Ramesh, KV and Kannan, AM (1986) Fuel cells: Problems and prospects. In: Journal of Chemical Sciences, 97 (3-4). pp. 513-527.
Simon, R and Sudarshan, ECG and Mukunda, N (1986) Gaussian-Maxwell beams. In: Journal of the Optical Society of America A: Optics and Image Science Vision, 3 (4). pp. 536-540.
Singh, Sharat and Usha, Govindarajan and Tung, Chen Ho and Turro, Nicholas J and Ramamurthy, Vaidhyanathan (1986) Modification of chemical reactivity by cyclodextrins. Observation of moderate effects on Norrish type I and type II photobehavior. In: Journal of Organic Chemistry, 51 (6). pp. 941-944.
Sinha, KP and Sudarshan, ECG and Vigier, JP (1986) Superfluid vacuum carrying real Einstein-de Broglie waves. In: Physics Letters A, 114 (6). 298 -300.
Sita, Lakshmi G and Chattopadhyay, S and Tejavathi, DH (1986) Plant regeneration from shoot callus of rosewood (Dalbergia latifolia Roxb). In: Plant Cell Reports, 5 (4). pp. 266-268.
Soman, KV and Ramakrishnan, C (1986) Identification and analysis of extended strands and beta-sheets in globular proteins. In: International Journal of Biological Macromolecules, 8 (2). pp. 89-96.
Somasekharan, KN and Kalpagam, V (1986) Use of a compensation parameter in the Thermal Decomposition of Copolymers. In: Thermochimica Acta, 107 . pp. 379-382.
Somasundaram, T and Ganguly, P and Rao, CNR (1986) Photoacoustic investigation of phase transitions in solids. In: Journal of Physics C: Solid State Physics, 19 (13). 2137 -2151.
Somasundaram, T and Rao, Sanjay SR and Maheshwari, R (1986) Pigments in Thermophilic fungi. In: Current Science, 55 (19). pp. 957-960.
Sridharan, A and Rao, Sudhakar M and Gajarajan, VS (1986) Influence of adsorbed sulfate on the engineering properties of kaolinite and bentonite. In: India Clay Research, 5 (2). pp. 74-81.
Srinivasan, J and Basu, Biswajit (1986) A numerical study of thermocapillary flow in a rectangular cavity during laser melting. In: International Journal of Heat and Mass Transfer, 29 (4). pp. 563-572.
Srinivasan, R and Usha, S (1986) Auxiliary coils for generating magnetic field gradients for a Faraday magnetometer. In: Journal of Physics E - Scientific Instruments, 19 (11). pp. 930-932.
Subbanna, GN and Kutty, TRN and Iyer, Anantha GV (1986) Structural intergrowth of brucite in anthophyllite. In: American Mineralogist, 71 . pp. 1198-1200.
Subbanna, GN and Rao, CNR (1986) Metal-Ceramic Composites: A Study Of Small Metal Particles (Divided Metals). In: Materials Research Bulletin, 21 (12). pp. 1465-1471.
Subrahmanya, RS (1986) Kinetic currents in polarography. Part I. Linear diffusion to the DME for pseudo first order catalyzed processes. In: Journal of the Electrochemical Society of India, 35 (3). pp. 151-155.
Subrahmanyam, HN and Subramanyam, SV (1986) Accurate measurement of thermal expansion of solids between 77 K and 350 K by 3-terminal capacitance method. In: Pramana, 27 (5). pp. 647-660.
Subramanyam, SV and Naik, Hemamalini (1986) Nonlinear conduction and electrical switching in one-dimensional conductors. In: Proceedings of the Indian Academy of Sciences - Chemical Sciences, 96 (6). 499 -508.
Sudha, LV and Sathyanarayana, DN and Manogaran, S (1986) $^{13}C$ NMR spectra of 1,3-dipyridyl- and pyridylphenylthioureas. Chemical shift assignments and conformational implications. In: Spectrochimica Acta, Part A: Molecular and Biomolecular Spectroscopy, 42 (12). pp. 1373-1378.
Sudha, Lalgudi V and Sathyanarayana, Dixit N (1986) Proton and Carbon-13 Nuclear Magnetic Resonance Studies of Conformations of 1,3-Dipyridylthioureas. In: Journal of the Chemical Society, Perkin Transactions 2: Physical Organic Chemistry (1972-1999), 11 . pp. 1647-1650.
Sundararaman, Narayanaswamy and Ravindram, M and Bhatt, MV (1986) Kinetic Modeling of Vapor-Phase Oxidation of 1-Phenylethanol over Thorium Molybdate. In: Industrial & Engineering Chemistry Product Research and Development, 25 (4). pp. 512-517.
Suresh, BS and Padma, DK (1986) Reactivities of Tetrahalosilanes and Silane with Sulphur Trioxide. In: Polyhedron, 5 (10). pp. 1579-1580.
Suresh, CG and Ramaswamy, Jayanthi and Vijayan, M (1986) X-ray Studies on Crystalline Complexes Involving Amino Acids and Peptides. XIII. Effect of Chirality on Molecular Aggregation: The Crystal Structures of $L$-Arginine $D$-Aspartate and $L$-Arginine $D$-Glutamate Trihydrate. In: Acta Crystallographica, B42 (5). pp. 473-478.
Surma Devi, CD and Takhar, HS and Nath, G (1986) Unsteady, three-dimensional, boundary-layer flow due to a stretching surface. In: International Journal of Heat and Mass Transfer, 29 (12). pp. 1996-1999.
Suryanarayana, VVS and Rao, BU and Padayatty, JD (1986) Expression in E. coli of the cloned cDNA for the major antigen of foot and mouth disease virus Asia 1 63/72. In: Journal of Genetics, 65 (1-2). pp. 19-30.
Suryaprakash, N and Khetrapal, CL (1986) The use of $^{13}C$ satellites in the proton NMR spectra of Oriented systems for the Determination of Molecular Structure. In: Magnetic Resonance in Chemistry, 24 (3). pp. 247-250.
Swamy, KC Kumara and Krishnamurthy, SS (1986) Studies of phosphazenes. 28. Reactions of pentachloro- and pentafluoro(triphenylphosphazenyl)cyclotriphosphazenes with sodium methoxide. Mechanistic aspects and their implications for nucleophilic displacement at a tetrahedral phosphorus(V) center. In: Inorganic Chemistry, 25 (7). 920 -928.
Swamy, MJ and Sastry, M V Krishna and Khan, MI and Surolia, A (1986) Thermodynamic and kinetic studies on saccharide binding to soya-bean agglutinin. In: Biochemical Journal, 234 (3). 515 -522.
Syamala, MS and Devanathan, S and Ramamurthy, V (1986) Modification of the Photochemical Behaviour of Organic Molecules by Cyclodextrin: Geometric Isomerization of Stilbenes and Alkyl Cinnamates. In: Journal of Photochemistry, 34 (2). pp. 219-229.
Syamala, MS and Ramamurthy, V (1986) Consequences of Hydrophobic Association in Photoreactions: Photodimerization of Stilbenes in Water. In: Journal of Organic Chemistry, 51 (19). pp. 3712-3715.
Syamala, MS and Reddy, Dasaratha G and Rao, Nageswara B and Ramamurthy, V (1986) Chemistry in Cavities. In: Current Science, 55 (18). pp. 875-886.
Takhar, HS and Devi, CDSurma and Nath, G (1986) MHD Flow with Heat and Mass Transfer Due to a Point Sink. In: Indian Journal of Pure and Applied Mathematics, 17 (10). 1242 -1247.
Thathachar, Mandayam AL and Sastry, PS (1986) Relaxation Labeling with Learning Automata. In: IEEE Transactions on Pattern Analysis and Machine Intelligence, 8 (2). pp. 256-268.
Thirumaleshwar, M and Subramanyam, SV (1986) Exergy analysis of a Gifford-McMahon cycle cryorefrigerator. In: Cryogenics, 26 (4). 248 -251.
Thirumaleshwar, M and Subramanyam, SV (1986) Gifford-McMahon cycle- a theoretical analysis. In: Cryogenics, 26 (3). pp. 177-188.
Thirumaleshwar, M and Subramanyam, SV (1986) Heat balance analysis of single stage Gifford-McMahon cycle cryorefrigerator. In: Cryogenics, 26 (3). pp. 189-195.
Thirumaleshwar, M and Subramanyama, SV (1986) Two stage Gifford-McMahon cycle cryorefrigerator operating at 20 K. In: Cryogenics, 26 (10). 547 -555.
Thomas, Thresia and Nandi, US and Poddar, SK (1986) Thermodynamic characterization of oligo dG. poly rC hybrid helixes. In: Indian Journal of Biochemistry & Biophysics, 23 (4). pp. 192-196.
Thukaram, D and Iyengar, Ramakrishna BS and Parthasarathy, K (1986) An algorithm for optimum control of static VAR compensators to meet phase-wise unbalanced reactive power demands. In: Electric Power Systems Research, 11 (2). pp. 129-137.
Uberoi, C (1986) On the Kelvin-Helmholtz instability of structured plasma layers in the magnetosphere. In: Planetary and Space Science, 34 (12). pp. 1223-1227.
Uberoi, C and Narayanan, Satya A (1986) Effect of Variation of Magnetic Field Direction on Hydromagnetic Surface Waves. In: Plasma Physics and Controlled Fusion, 28 (11). pp. 1635-1643.
Ubgade, R and Sarode, PR (1986) Study of strontium compounds and minerals by X-ray absorption spectroscopy. I. K-edge shifts. In: Physica Status Solidi A, 99 (1). pp. 295-301.
Unni, Emmanual and Rao, MRS (1986) Androgen Binding Protein Levels And Fsh Binding To Testicular Membranes In Vitamin A Deficient Rats And During Subsequent Replenishment With Vitamin A. In: Journal of Steroid Biochemistry, 25 (4). pp. 579-583.
Usha, MG and Rao, Subba M and Kutty, Narayanan TR (1986) Preparation and thermal stability of ammonium alkaline earth trioxalatocobaltate(III) hydrates:$NH_4M^{2+}[Co(C_2O_4)_3].xH_2O$. In: Journal of Thermal Analysis and Calorimetry, 31 (1). pp. 7-14.
Usha, R and Murthy, MRN (1986) Protein structural homology: A metric approach. In: International Journal of Peptide and Protein Research, 28 (4). pp. 364-369.
Usha, G and Rao, BNageswer and Chandrasekhar, Jayaraman and Ramamurthy, V (1986) The Origin of Regioselectivity in a-Cleavage Reactions of Cyclopropenethiones: Potential Role of Pseudo-Jahn-Teller Effect in Substituted Cyclopropenyl Systems. In: Journal of Organic Chemistry, 51 (19). pp. 3630-3635.
Vaidehi, N and Akila, R and Shukla, AK and Jacob, KT (1986) Enhanced Ionic Conduction in Dispersed Solid Electrolyte Systems $CaF_2-Al_2O_3$ and $CaF_2-CeO_2$. In: Materials Research Bulletin, 21 (8). pp. 909-916.
Vani, VC and Guha, S and Gopal, ESR (1986) Coexistence curve of acetonitrile and cyclohexane liquid system. In: Journal of Chemical Physics, 84 (7). 3999 -4007.
Varalakshmi, K and Savithri, HS and Rao, NA (1986) Identification of amino acid residues essential for enzyme activity of sheep liver 5,10-methylenetetrahydrofolate reductase. In: Biochemical Journal, 236 (1). 295 -298.
Vasantha, R and Nath, G (1986) Second-order boundary layers for steady, incompressible, three-dimensional stagnation point flows. In: International Journal of Heat and Mass Transfer, 29 (12). pp. 1993-1996.
Vasantha, R and Nath, G (1986) Unsteady compressible second-order boundary layers at the stagnation point of two-dimensional and axisymmetric bodies. In: Heat and Mass Transfer, 20 (4). pp. 273-281.
Vedula, S and Ramasesha, CS and Rao, A Achuta and Prasad, B Shyam (1986) Computer-model for vedavati groundwater basin .3. Irrigation potential. In: Sadhana : Academy Proceedings in Engineering Sciences, 9 . pp. 57-68.
Vellareddy, Anantharam and Patanjali, Sankhavaram R and Swam, Joginadha M and Sanadiq, Ashok R and Goldstein, Irwin J and Surolia, Avadhesha (1986) Isolation, Macromolecular Properties, and Combining Site of a Chitooligosaccharide-specific Lectin from the Exudate of Ridge Gourd (Luffa acutangula). In: Journal of Biological Chemistry, 261 (31). pp. 14621-14627.
Venkatesh, A and Murthy, Ramana PV and Rao, KP (1986) Finite element analysis of bimodulus composite stiffened thin shells of revolution. In: Computers & Structures, 22 (1). pp. 13-24.
Venkateswara, R and Rao, Sankara S and Vaidyanathan, CS (1986) Phytochemical constituents of cultured cells of Eucalyptus tereticornis SM. In: Plant Cell Reports, 5 (3). pp. 231-233.
Venkateswaran, S and Mallya, RM and Seshadri, MR (1986) Effect of trace elements on the fluidity of eutectic aluminum-silicon alloy using the vacuum suction technique. In: Transactions of the American Foundrymen's Society, 94 . pp. 701-708.
Venu, K and Sastry, VSS and Ramakrishna, J (1986) Proton magnetic resonance study of molecular dynamics in (NH4)2CdI4. In: Chemical Physics, 107 (1). 123 -127.
Verneker, Pai VR and Shaha, B (1986) Dual Role of Metallic Lithium in the Initiation of Vinyl Polymerization. In: Journal of Polymer Science Part C: Polymer Letters, 24 (1). pp. 1-5.
Verneker, VR Pai and Shahat, B (1986) On Coloration of Polyacrylonitrile: A Nuclear Magnetic Resonance Study. In: Macromolecules, 19 (7). 1851 -1856.
Vidyasagar, M and Levy, BC and Viswanadham, N (1986) A Note on the Genericity of Simultaneous Stabilizability and Pole Assignability. In: Circuits, Systems, and Signal Processing, 5 (3). 371 -387.
Vidyasagar, M and Viswanadham, N (1986) Construction of inverses with prescribed zero minors and applications to decentralized stabilization. In: Linear Algebra and its Applications, 83 . pp. 103-115.
Vijayamohanan, K and Shukla, AK and Sathyanarayana, S (1986) Statistical Optimization of Iron Electrodes for Alkaline Storage Batteries. In: Indian Journal of Technology, 24 (7). 430 -434.
Vijayaraju, K and Dwarakadasa, ES (1986) Computer-aided composition-treatment-structure-property correlation studies in steels. In: Bulletin of Materials Science, 8 (2). pp. 193-198.
Vijayaraju, K and Dwarakadasa, ES and Panchapagesan, TS (1986) Role of vacancies in the ductile fracture of commercially pure aluminum. In: Journal of Materials Science Letters, 5 (10). pp. 1000-1002.
Vyas, K and Manohar, H (1986) Solid state acyl migration in salicylamides. An x-ray study of the O- and N-propionyl derivatives. In: Molecular Crystals and Liquid Crystals, 137 (1). pp. 37-43.
Yaparpalvi, R and Das, PK and Mukherjee, AK and Kumar, R (1986) Drop Formation under Pulsed Conditions. In: Chemical Engineering Science, 41 (10). pp. 2547-2553.
Yashonath, S and Rao, CNR (1986) Structural Changes Accompanying the Formation of Isopentane Glass. In: Journal of Physical Chemistry, 90 (12). 2581 -2584.
Yashonath, S and Rao, CNR (1986) An investigation of solid adamantane by a modified isothermal-isobaric ensemble Monte Carlo simulation. In: Journal of Physical Chemistry, 90 (12). 2552 -2554.
Editorials/Short Communications
Jagadeeswaran, P and Cherayil, Joseph D (1986) A general model for the conformational switch in 5S RNA during protein synthesis. In: Journal of Theoretical Biology, 83 (2). pp. 369-375.
Kumar, N and Jayannavar, AM (1986) Resistance fluctuation at the mobility edge. In: Journal of Physics C: Solid State Physics, 19 (4). L85-L89.
Mugeraya, Sridhar and Prabhakar, BR (1986) Measurement of resistivity and dielectric constant of beach-sand minerals. In: Journal of Electrostatics, 18 (1). 109 -112.
Rao, Narasimha K and Vijayalakshmi, D and Baskaran, N (1986) Mechanism of interaction of reversible and irreversible inhibitors with human-liver serine hydroxymethyltransferase. In: Journal of Protein Chemistry, 5 (4). 291 -292.
Ravichandran, KS and Dwarakadasa, ES (1986) Some Considerations on the Occurrence of Intergranular Fracture during Fatigue Crack Growth in Steels. In: Materials Science and Engineering A, 83 (1). pp. L11-L16.
Surolia, A and Ramprasad, MP (1986) Immunotoxins to combat AIDS. In: Nature, 322 (6075). pp. 119-120.
Vasantha, R and Pop, I and Nath, G (1986) Non-darcy natural convection over a slender vertical frustum of a cone in a saturated porous medium. In: International Journal of Heat and Mass Transfer, 29 (1). pp. 153-156.
Goodenough, John Bannister and Shukla, Ashok Kumar and Silvapaliteiro, Carlos Antonio da and Jamieson, Keith Roderick and Hamnett, Andrew and Manoharan, Ramasamy (1986) Electrode for reducing oxygen. Patent Number(s) WO 8601642 A1. Patent Assignee(s) National Research Development Corporation . | CommonCrawl |
Application of logarithms to real world problems
The value of a car decreases so that after a period of n years, the value of the car is $50000e^{-0.1n}$ dollars.
i) Find the value of a newly bought car
ii) Find the value of the car after 10 years
iii) The car is scrapped when its value drops to 10000 dollars. Determine when the car will be scrapped.
Let v represent the value of a car after n years
Note that v and n are variables
i) When a car is newly bought, n = 0
$ \begin{aligned}
v &= 50000e^{-0.1n} \\
v &= 50000e^{-0.1*0} \\
v &= 50000 \quad \textbf{note that e to the power of zero is 1}\\
\end{aligned}$
The value of a newly bought car is 50000 dollars
ii) Value of a car after 10 years, n = 10
$ \begin{aligned}
v &= 50000e^{-0.1*10} \\
v &= 18400 \quad \text{ (to 3 s.f.)}\\
\end{aligned}$
The value of the car after 10 years is 18400 dollars
iii) The car is scrapped when its value drops to 10000 dollars, so set v = 10000
Technique used: Taking log on both sides
$ \begin{aligned}
10000 &= 50000e^{-0.1n} \\
\frac{10000}{50000} &= e^{-0.1n} \\
\ln {0.2} &= \ln {e^{-0.1n}} \\
-1.609 &= -0.1n \\
n &= 16.1 \\
\end{aligned}$
The car has to be scrapped after 16.1 years
The value of a house, V dollars, after t years is determined by $V = V_0 e^{kt}$, where $V_0$ is the original value of the house and k is a constant.
i) Determine k if the value of the house doubles after 5 years
ii) After how many years would the price of the house be 10 times its original value?
Note that V and t are variables and k is a constant. In other words, no matter what the values of V and t are, the value of k does not change
After 5 years, the value of the house is $\ 2V_0 $
$ \begin{aligned}
2V_0 &= V_0 e^{5k} \\
\frac{2V_0}{V_0} &= e^{5k} \\
2 &= e^{5k} \\
\ln {2} &= \ln {e^{5k}} \\
0.6931 &= 5k \\
k &= 0.139 \quad \text{(to 3 s.f.)} \\
\end{aligned}$
When the price of the house is 10 times its original value, $V = 10V_0$
$ \begin{aligned}
10V_0 &= V_0 e^{0.139t} \\
\frac{10V_0}{V_0} &= e^{0.139t} \\
10 &= e^{0.139t} \\
\ln {10} &= \ln {e^{0.139t}} \\
2.3026 &= 0.139t \\
t &= 16.6 \quad \text{(to 3 s.f.)} \\
\end{aligned}$
After 16.6 years, the price of the house will be 10 times its original value
As you can see, the "Taking log on both sides" technique is commonly used.
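Both answers can be checked quickly with a short script. The sketch below is an editorial addition (not part of the original worked solutions) that reproduces the two examples with Python's math.log:

```python
import math

# Example 1: car value v = 50000 * e^(-0.1 n); find n when v drops to 10000
n_scrap = math.log(10000 / 50000) / -0.1
print(f"car is scrapped after {n_scrap:.1f} years")              # about 16.1

# Example 2: house value V = V0 * e^(k t); the value doubles after 5 years
k = math.log(2) / 5
print(f"k = {k:.3f}")                                            # about 0.139

# ... and the value reaches 10 times the original when t = ln(10) / k
t_tenfold = math.log(10) / k
print(f"price is 10x the original after {t_tenfold:.1f} years")  # about 16.6
```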
Logarithms Part III will show you how to solve equations involving logarithms. | CommonCrawl |
Using estimated probability of pre-diagnosis behavior as a predictor of cancer survival time: an example in esophageal cancer
Paul P. Fahey1,2,
Andrew Page2,
Glenn Stone3 &
Thomas Astell-Burt4
BMC Medical Research Methodology volume 20, Article number: 74 (2020)
Information on the associations between pre-diagnosis health behavior and post-diagnosis survival time in esophageal cancer could assist in planning health services but can be difficult to obtain using established study designs. We postulated that, with a large data set, using estimated probability for a behavior as a predictor of survival times could provide useful insight as to the impact of actual behavior.
Data from a national health survey and logistic regression were used to calculate the probability of selected health behaviors from participant's demographic characteristics for each esophageal cancer case within a large cancer registry data base. The associations between survival time and the probability of the health behaviors were investigated using Cox regression.
Observed associations include: a 0.1 increase in the probability of smoking 1 year prior to diagnosis was detrimental to survival (Hazard Ratio (HR) 1.21, 95% CI 1.19,1.23); a 0.1 increase in the probability of hazardous alcohol consumption 10 years prior to diagnosis was associated with decreased survival in squamous cell cancer (HR 1.29, 95% CI 1.07, 1.56) but not adenocarcinoma (HR 1.08, 95% CI 0.94,1.25); a 0.1 increase in the probability of physical activity outside the workplace is protective (HR 0.83, 95% CI 0.81,0.84).
We conclude that probability for health behavior estimated from demographic characteristics can provide an initial assessment of the association between pre-diagnosis health behavior and post-diagnosis health outcomes, allowing some sharing of information across otherwise unrelated data collections.
With an incidence of 9.3/100,000 males and 3.5/100,000 females per year, esophageal cancer led to more than half a million deaths worldwide in 2018 [1]. The majority of these deaths arise from modifiable lifestyle factors. In the US in 2014 it was estimated that 71% of male and 59% of female esophageal cancer deaths arose from modifiable lifestyle factors and that cigarette smoking, alcohol consumption and excess body weight could account for up to 50, 17 and 27% of deaths respectively [2].
While there is considerable documentation of associations between health behavior and onset of esophageal cancer [3], the impact of health behavior on survival times is less well understood [4]. A more thorough understanding of predictors of survival time is needed to assist in anticipating health service needs and for health services planning.
Health behavior prior to a cancer diagnosis is often different from health behavior post-diagnosis. Behavior prior to diagnosis can be influenced by public health activity but post-diagnosis behavior is strongly influenced by the diagnosis itself [5] and by treatment [6, 7]. As esophageal cancer has relatively short survival times (in the US, just 19% of cases survive 5-years [8]), pre-diagnosis behavior could have a strong carry over effect on survival time.
Unfortunately, investigating the effect of pre-diagnosis behavior on post-diagnosis survival can be difficult and expensive. As the disease is relatively rare, a prospective cohort study would be inefficient (on the figures above, surveillance of 100,000 men for 10 years would be expected to yield just 93 new esophageal cancer cases). Retrospective studies which enroll newly diagnosed cancer patients and ask them to recall their prior health behavior still involve considerable expense and are fraught with recall and survivor biases. In one example, an Australian study enrolling newly diagnosed esophageal cancer patients reported that patients with late-stage disease were difficult to enroll and under-represented [9].
Secondary analyses of already existing data can provide alternate, cost-effective opportunities. It is now common for governments to sponsor both regular health behavior surveys and mandatory cancer registries. For those cancer cases who contributed to a survey prior to diagnosis, their health behavior and cancer outcomes can be linked to produce a retrospective cohort. Data linkage avoids recall and survivor biases and is cost efficient (as the required data are already collected, compiled and cleaned).
But data linkage may not be feasible either. Confidentiality is one issue. But more fundamentally, as esophageal cancer is relatively rare, the number of cancer cases who happened to have previously participated in the health survey is likely to be very small. If data linkage cannot be applied, is there any other way in which these rich (and expensive) data sets can be used to help provide insights into the association between pre-diagnosis behavior and post-diagnosis survival times?
Often the only measures in common between cancer registries and national health surveys are the demographic characteristics of participants. It is known that demographically similar people are more likely to display similar health behavior than people from different demographic groups [10]. That is, different demographic groups have a different likelihood for particular behaviors. Probability of behavior calculated from demographic variables, may be a weak indicator of actual behavior, but with large data sets even weak signals are detectable.
This study investigated whether or not useful information on the association between pre-diagnosis health behaviors and post-diagnosis survival times could be obtained by analyzing cancer cases estimated probability of engaging in these behaviors. The analyses used US data and focused mainly on the three modifiable lifestyle factors identified above: cigarette smoking, alcohol consumption and excess body weight.
The data sets
Unit record data on esophageal cancer cases and their outcomes was extracted from the Surveillance, Epidemiology, and End Results Program (SEER) cancer registry [11]. The SEER system is administered by the National Cancer Institute. SEER currently compiles data from cancer registries covering about 28% of the US population across 13 States. Most cancers, including esophageal cancers, are recorded. De-identified unit record data made available for research include demographic measures, medical details of the cancer, treatment and outcomes (including survival time). 95.1% of esophageal cases had positive histology with just 0.4% clinical diagnosis only; the remainder having unknown (2.4%) or other confirmation methods.
Data on health behavior was extracted from the Behavioral Risk Factor Surveillance System (BRFSS) health survey [12]. The BRFSS is an annual national survey of health. It commenced in 1984 and now collects data from more than 400,000 telephone interviews each year covering adult residents of all US States and three Territories. The de-identified unit record information made available for research included demographic and health behavior measures, and State population sampling weights.
Both collections provided access to cleaned, de-identified unit record data at no cost to the researcher. Although both data collections are large, with less than 0.2% of American adults participating in BRFSS and around 4000 esophageal cancer cases being recorded in the SEER data set each year, we could only expect about eight new esophageal cancer cases each year to have participated in the previous BRFSS survey.
This analysis focusses on the 15-year period from 2001 to 2015. Data prior to 2001 are excluded due to changes in the definitions of some health behaviors variables and because earlier data may be less relevant to current behavior and outcomes. 2015 was the most recent year of SEER cancer registry data.
As esophageal cancer is rare in young ages, all cancer cases who were less than 35 years of age are excluded as being atypical. Two hundred one of 57,025 (0.3%) cases are excluded. For the BRFSS health survey, all data records from respondents 25 or more years of age who lived in one of the 13 US States represented in the SEER cancer registries are included. Including the younger respondents allows information on health behavior up to 10 years prior to cancer diagnosis to be retained.
Outcome variable
The outcome of interest is post-diagnosis survival time in months as recorded in the SEER cancer registry data set. That is, all cases with survival less than 30.4 days after diagnosis (including cancers detected post-mortem) have a survival time of 0 months, those who died between 30.4 and 60.8 days have a survival time of 1 month, etc. The maximum possible survival time is 179 months. For those who are still alive and those who are lost to follow-up, survival time is censored at the date of last follow-up.
Health behavior variables
The research focused mainly on measures relating to cigarette smoking, alcohol consumption and excess body weight. The choice of variables was restricted to measures available through the BRFSS health survey. The following variables, all recording self-reported behavior, were included:
Current smoker (yes/no) which includes those who smoke daily or less than daily;
Alcohol - heavy drinking (yes or no), which is defined as more than two standard drinks per day for men and more than one standard drink per day for women in the month prior to survey;
Alcohol - binge drinking (yes or no), which is defined as males reporting having five or more standard drinks or females reporting 4 or more standard drinks on one occasion in the month prior to survey;
Current smoking and alcohol consumption (yes/no), which is defined as both current smoker and an average consumption of ≥1 standard drink of alcohol per day in the past month.
Obese (yes/no) which is BMI ≥ 30 kg/m2
Undertook physical activity or exercise in the past 30 days other than regular job (yes or no)
Demographic variables
As the cancer registry data did not include information on pre-diagnosis health behavior we estimated the probability of each pre-diagnosis health behavior for each cancer case using the available demographic variables.
Of the variables in common between the SEER cancer registry and the BRFSS health surveys we hypothesized that year, age, sex, race, marital status and State of residence could be helpful for predicting health behavior. For example, race is known to be associated with smoking [13] and alcohol dependence [14] in the US. Also, living as married ameliorates social isolation and social isolation is associated with adverse health behaviors such as smoking, higher BMI, and lower desire for exercise [15].
As age was recorded in 5-year age groups in the SEER cancer registry data, we applied the same categories to the BRFSS health survey data. Race was categorized as White; Black; Asian or Pacific Islander; and American Indian or Alaskan native. Participants in the BRFSS health survey who self-reported as mixed race (n = 44,670, 3.1% of total) were omitted as there was no corresponding code in the SEER cancer registry data set. Marital status was categorized as married or living as married; divorced or separated; widowed; and single.
Other factors considered
Post-diagnosis survival time is sensitive to a range of factors, some of which could potentially confound associations with pre-diagnosis health behavior and survival time. For example, the association between health behaviors and incidence of esophageal cancer is known to differ by histological type [3, 16] and these differences appear to carry over into survival time [17, 18]. Therefore, we have conducted sub-group analyses for squamous cell carcinoma (ESCC) and adenocarcinoma (EAC). Also age is associated with survival time [19] and health behavior can change with age. Age, recorded in 5-year age groups but treated as a continuous variable, is included in the final models as a potential confounder.
Somewhat more difficult was how to address cancer stage. Cancer stage at diagnosis is an important predictor of survival time [19] and could perhaps be associated with health behavior, although this association may be an intermediary step between health behavior and survival time rather than a true confounder. For completeness we opted to adjust for cancer stage in our models. Disease stage at diagnosis (clinical assessment) was coded by SEER according to the AJCC Cancer Staging Manual 6th Edition [20].
Recording of cancer stage at diagnosis was incomplete in the SEER cancer registry data; being unavailable from 2001 to 2003 and having 18% missing data across the other years. We have excluded cancer stage prior to 2004 and categorized it into 5 categories (stage I, stage II, stage III, stage IV, not specified) from 2004 onwards.
Other potential confounders of the association between behavior and survival were considered to be of lesser impact or potentially on the disease pathway. For example, while the relationship between smoking history and post-diagnosis survival may differ by gender, the effect may be small. In contrast, the choice between curative or palliative treatment is a strong predictor of survival time but may partially lie on the association pathway. (Smoking, for example, may lead to a higher probability of significant co-morbidities and these in turn influence the decision of curative treatment and, hence, survival time.) Adjustment for variables on the association pathway may remove some of the true association between health behavior and survival time.
Eligible data records
Fifty-six thousand eight hundred twenty-four SEER esophageal cancer cases and 1,450,775 BRFSS health survey respondents met the eligibility criteria. Additional file 1 summarizes the characteristics of the two samples. Among the cancer cases, the median time till death was 7 months, and the median follow-up time of censored observations (18.6%) was 30 months. 52.9% of cases were EAC and 33.7% ESCC. 16.1% of the BRFSS respondents were current smokers and 4.8% were judged to be heavy drinkers of alcohol. The BRFSS respondents included higher proportions of younger people and females than the SEER cases.
The characteristics of eligible cancer registry cases and health survey respondents are summarized using counts and percentages, with the exception of survival time which is summarized using medians, quartiles and maximums.
The main analysis involves three discrete steps. Firstly, the probability of engaging in each health behavior was estimated from the BRFSS health survey data using logistic models, with a separate model for each behavior. Each modelled the probability of having the behavior of interest based on year of survey, age, sex, race, marital status and State of residence. We also allowed for differences in the probability of health behaviors between sexes and between marital statuses at different ages by including age by sex, age by marital status and marital status by sex interaction terms in each logistic model.
For example, if we let i represent an eligible individual from the BRFSS data set and \( \hat{p_i(smoker)} \) represent the estimated probability that person i is a smoker, then the logistic model has the form
$$ logit\left(\hat{p_i(smoker)}\right)={\boldsymbol{x}}_{\boldsymbol{i}}\hat{\boldsymbol{\beta}} $$
$$ {\boldsymbol{x}}_{\boldsymbol{i}}\hat{\boldsymbol{\beta}}=\hat{\beta_0}+\hat{\beta_1}\left({year}_i\right)+\hat{\beta_2}\left({age}_i\right)+\hat{\beta_3}\left({sex}_i\right)+\hat{\beta_{4-6}}\left({race}_i\right)+\hat{\beta_7}\left( marital\ {status}_i\right)+\hat{\beta_{8-19}}\left( State\ of\ {residence}_i\right)+\hat{\beta_{20}}\left({age}_i\right)\left({sex}_i\right)+\hat{\beta_{21}}\left({age}_i\right)\left( marital\ {status}_i\right)+\hat{\beta_{22}}\left({sex}_i\right)\left( marital\ {status}_i\right) $$
and the \( \hat{\beta} \) 's quantify the relationships between the demographic characteristics of the respondents and their likelihood of smoking.
To correct for the complexities in the BRFSS health survey sampling and non-response we weighted the logistic models by the sampling weights provided. In 2011, the BRFSS introduced a new method of calculating sampling weights which improved the weighting of some variables including race and marital status. However, as both systems weight to the State totals, we do not differentiate between the different type of weights in this analysis. We excluded data records with extreme sampling weights: those which fell in either the top or bottom 0.5% of the distribution. To assist the models to converge we use Firth's bias reduced penalized-likelihood when fitting the models; using the logistf package (version 1.23) in R software (version 3.5.2). The fitted models are summarized in Additional file 5.
Year and age category were fitted as numeric variables while sex, race, marital status and State of residence are categorical. Preliminary investigations (not reported) confirmed that a linear model was reasonable for both year and age category. Year is coded as 0 for 2001 through to 14 for 2015 for analysis.
We confirmed that the chosen risk profiling variables were indeed predictors of each health behavior by visual inspection of odds ratios from logistic regression models. To help gauge the predictive ability of each demographic variable we present areas under the curve (AUC) of the receiver operating characteristic (ROC) curve for each predictor alone and for the full logistic model using the pROC package (version 1.13.0) in R software. The higher above 0.5 the AUC, the greater the ability of the model to predict the health behavior.
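To make this first step concrete, a minimal Python sketch is shown below. It is an illustration only: the paper fitted Firth's bias-reduced logistic regression with the R logistf package, whereas ordinary weighted logistic regression from statsmodels is substituted here, and all column names (smoker, age_group, sample_weight, and so on) are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

# Hypothetical extract of eligible BRFSS respondents, 2001-2015
brfss = pd.read_csv("brfss_eligible_2001_2015.csv")

# Main effects plus the age-by-sex, age-by-marital-status and
# sex-by-marital-status interactions described in the text.
formula = ("smoker ~ year + age_group + C(sex) + C(race) + C(marital) + C(state)"
           " + age_group:C(sex) + age_group:C(marital) + C(sex):C(marital)")

# Sampling weights are passed as frequency weights, which only approximates
# proper survey weighting (and omits Firth's bias correction).
fit = smf.glm(formula, data=brfss, family=sm.families.Binomial(),
              freq_weights=brfss["sample_weight"]).fit()

# AUC of the ROC curve gauges how well demographics predict the behavior
auc = roc_auc_score(brfss["smoker"], fit.predict(brfss))
print(f"AUC for the full smoking model: {auc:.2f}")
```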
In the second step of the analysis, for each esophageal cancer case in the SEER cancer registry, we estimated their probability of participating in each health behavior by substituting their demographic characteristics into the logistic predictive model for that behavior.
For example, if we let j represent an eligible cancer case from the SEER data set and xj the set of observed values of the demographic variables for individual j and \( \hat{\boldsymbol{\beta}} \) represent the regression coefficients for the model predicting smoking (eq. 1 above), then we estimated the probability of cancer case j being a smoker as
$$ \hat{p_j(smoker)}=\frac{e^{{\boldsymbol{x}}_{\boldsymbol{j}}\hat{\boldsymbol{\beta}}}}{1+{e}^{{\boldsymbol{x}}_{\boldsymbol{j}}\hat{\boldsymbol{\beta}}}} $$
As we were specifically interested in health behavior prior to diagnosis, we trialed three pre-diagnosis time points: 1, 5 and 10 years prior to diagnosis. This entailed substituting diagnosis year minus 1, 5 or 10 as the year variable of the logistic model and the 5-year age group minus 0, 1 or 2. To avoid extrapolating earlier than the observed data, the 5-year lag analysis was restricted to esophageal cancer cases from 2006 to 2015 and the 10-year lag model was restricted to cases from 2011 to 2015.
In the third step of the analysis, the relationship between the estimated probability of each behavior and survival was investigated with Cox regression models, using the survival package (version 2.43–3) in R software. Separate models were fitted for each behavior. Results are presented as hazard ratios (HRs) with associated 95% confidence intervals (CIs) and p-values. Models were fitted with and without correction for age and cancer stage at diagnosis.
For example, the Cox model of survival time of cancer case j relative to their estimated probability of smoking, adjusting for age and disease stage, could be written
$$ S\left(t,x,\beta \right)={\left[{S}_0(t)\right]}^{\mathit{\exp}\left({\beta}_1^{\ast}\left(\hat{p_j(smoker)}\right)+{\beta}_2^{\ast}\left({age}_j\right)+{\beta}_3^{\ast}\left( cancer\ {stage}_j\right)\right)} $$
where \( \hat{p_j(smoker)} \), a number between 0 and 1, is the estimated probability that the SEER cancer case is a smoker from Eq. (2). The * superscript is just to highlight that these β 's are different to the β 's listed in Eq. (1). Under this model \( {e}^{\beta_1^{\ast }} \) is the hazard ratio for the estimated probability of smoking, adjusted for age and disease stage.
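Continuing the earlier sketch (and reusing its fitted model object fit), steps two and three might look as follows in Python. Again this is only an illustration: the paper used the R survival package, lifelines is substituted here, and the SEER column names (survival_months, died, stage, age_group, year) are hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical extract of eligible SEER esophageal cancer cases
seer = pd.read_csv("seer_esophageal_2004_2015.csv")

# Step 2: estimate each case's probability of smoking one year before diagnosis
# by pushing its (lagged) demographics through the BRFSS model fitted above.
lagged = seer.copy()
lagged["year"] = lagged["year"] - 1   # the 5- and 10-year lags also shift age_group
seer["p_smoker"] = fit.predict(lagged)

# One unit of this covariate corresponds to a 0.1 increase in probability,
# matching the hazard ratios reported in the paper's tables.
seer["p_smoker_x10"] = seer["p_smoker"] * 10

# Step 3: Cox regression of survival time, adjusted for age and cancer stage.
design = pd.concat(
    [seer[["survival_months", "died", "p_smoker_x10", "age_group"]],
     pd.get_dummies(seer["stage"], prefix="stage", drop_first=True, dtype=float)],
    axis=1)

cph = CoxPHFitter()
cph.fit(design, duration_col="survival_months", event_col="died")
cph.print_summary()   # exp(coef) for p_smoker_x10 is the HR per 0.1 probability
```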
Subgroup analyses were performed for ESCC and EAC histological types. Missing values were excluded from analysis.
Each of the risk profile variables was related to each of the health behaviors [see Additional file 2]. For example, the prevalence of smoking decreased over the study period (odds ratio (OR) = 0.98, 95% confidence interval (CI) 0.98–0.98 for each later year); the prevalence of obesity increased over time (OR = 1.03, 95% CI 1.03–1.03 for each additional year); each 5-year increase in age is associated with decreasing prevalence of smoking (OR = 0.90, 95% CI 0.90–0.90) and decreased risk of binge drinking (OR = 0.82, 95% CI 0.82–0.82); females have lower prevalence of smoking (OR = 0.74, 95% CI 0.74–0.74); when compared to those who are married, people who are single have higher prevalence of daily smoking (OR = 2.14, 95% CI 2.14–2.14), risk of binge drinking (OR = 1.90, 95% CI 0.90–0.90) and risk of concurrently smoking and regular drinking (OR = 2.50, 95% CI 2.50–2.50); people classifying as American Indian or Alaskan Native have higher prevalence of daily smoking (OR = 1.69, 95% CI 1.68–1.69) and people classified as black have higher risk of obesity (OR = 1.75, 95% CI 1.75–1.75) than those who are classified as white; residents of Kentucky are more likely to smoke (OR = 2.50, 95% CI 2.49–2.50) and residents of Utah are less likely to be heavy drinkers (OR = 0.52, 95% CI 0.52–0.52) than Californians.
Of the fitted logistic models, the model predicting binge drinking (AUC 0.74) appeared most accurate and the model for predicting obesity (AUC 0.59) appeared least accurate.
Table 1 shows the associations between post-diagnosis survival time and probability of each pre-diagnosis health behavior. Each line presents results from separate Cox regression models; for each health behavior. The columns present results from three separate models: the unadjusted model with the probability of behavior 1 year prior to diagnosis as the only predictor; the one-year lag model adjusted for age and cancer stage at diagnosis; and the adjusted model with a 10-year lag. The hazard ratios reported show the impact of a 0.1 increase in the probability of participating in that behavior. Tables 2 and 3 provide the same results for the ESCC and EAC histological types separately. Both adjusted variables (age and cancer stage at diagnosis) are significant predictors of survival [see Additional file 3]. Result for the 5-year lag model [see Additional file 4] are similar to corresponding one-year lag models shown.
Table 1 Association Between Survival Time and Probability of Pre-Diagnosis Health Behavior; All Esophageal Cancers
Table 2 Association Between Survival Time and Probability of Pre-Diagnosis Health Behavior; Esophageal Squamous Cell Carcinomas
Table 3 Association Between Survival Time and Probability of Pre-Diagnosis Health Behavior; Esophageal Adenocarcinomas
Smoking 1 year prior to diagnosis appears to be unrelated to survival until adjustment for age and disease stage at diagnosis. In the adjusted model, each 0.1 increase in the probability of pre-diagnosis smoking is associated with a 20% (HR 1.20, 95% CI 1.18–1.22) increase in post-diagnosis hazard with no discernible difference in results for ESCC and EAC subgroups.
Results for alcohol consumption are mixed. When using behavior 1 year prior to diagnosis as the predictor, a 0.1 increase in the probability of heavy drinking appears to be protective of survival even after adjustment for age and cancer stage at diagnosis (HR 0.82, 95% CI 0.76–0.88). However, when looking at behavior 10 years prior to diagnosis, the adjusted model finds heavy drinking to be detrimental to post-diagnosis survival in ESCC (HR 1.30, 95% CI 1.08–1.57) and with no discernable association in EAC (HR 1.10, 95% CI 0.95–1.26). The pattern of results for binge drinking is quite similar.
A 0.1 increase in the probability of concurrently smoking and drinking ≥1 standard drink per day in the year prior to diagnosis is associated with double the risk of death (HR = 1.93, 95% CI 1.79–2.07), after adjustment for age and cancer stage with no difference between ESCC and EAC.
After adjustment, a 0.1 increase in probability of obese 1 year prior to diagnosis is associated with an apparently trivial increase in post-diagnosis hazard (HR 1.04, 95% CI 1.03–1.06). A slightly larger hazard (HR 1.10, 95% CI 1.07–1.14) was recorded for a 0.1 increase in the probability of obese 10 years prior to diagnosis. A 0.1 increase in the probability of exercise outside employment 1 year prior to diagnosis is associated with improved survival (HR 0.82, 95% CI 0.81–0.84) with little difference between ESCC and EAC.
The results above appear to support the proposition that demographic-derived estimates of the probability of health behaviors can assist in identifying associations between pre-diagnosis health behavior and post-diagnosis survival time in esophageal cancer. The hazard ratios quoted in this paper show the increased hazard of death associated with each additional 0.1 probability of the health behavior of interest. That is, we are reporting the association between the estimated likelihood of engaging in a particular behavior and survival time. This is quite different from the association between the actual health behavior and survival time and more difficult to interpret. Nevertheless, there is consistency between the results of the present study and previously published results: especially in the presence and direction of associations.
We have found that a 0.1 increase in the probability of smoking 1 year prior to diagnosis, adjusted for age and cancer stage at diagnosis, had an estimated HR of 1.20 (95% CI 1.18–1.22) in esophageal cancer survival. This association is consistent with findings from previous meta analyses such as HR 1.41 (95% CI 1.22,1.64) [21] for smoking status at time of diagnosis in mainly ESCC patients and HR 1.19 (95% CI 1.04,1.36) for ever smoking [4] in ESCC (although no evidence of association in EAC). Some more recently published studies found similar statistically significant HRs including HR = 1.28 [22] and HR = 1.34 [23] both from China, and HR = 1.22 from a study across two sites in US and Canada [24]. In contrast, recent results from Japan HR = 0.97 [25] failed to find evidence of association between pre-diagnosis smoking and post-diagnosis survival time. A study from South Africa reported an unadjusted HR = 0.92 [26] but the present study has shown the importance of adjustment for confounders such as age and cancer stage at diagnosis.
The current analyses found that increased probability for 'at risk' alcohol consumption in the year prior to diagnosis was generally protective of survival but that a 0.1 increase in 'at risk' alcohol behavior 10 years prior to diagnosis was detrimental to survival in ESCC (heavy drinking HR 1.30, 95% CI 1.08–1.57; binge drinking HR 1.09, 95% CI 1.02–1.17). The 10-year results are consistent with a previous meta-analysis [4] which found that ever drinking alcohol produced a significant increase in hazard (HR 1.36, 95% CI 1.15, 1.61) in ESCC but a non-significant HR of 1.08 (95% CI 0.85, 1.37) in EAC, although ever drinking and 'at risk' drinking are quite different exposures. More recent results from China HR = 1.58 [22], HR = 1.45 [23] and Japan HR = 2.37 (95% CI 1.24,4.53) [25] also support the detrimental impact of pre-diagnosis alcohol consumption on post-diagnosis survival.
The unexpectedly protective result for alcohol consumption one-year prior to diagnosis could indicate insufficient adjustment for confounding (such as comorbidities or health symptoms) or weaknesses in the measurement tool (such as biases in the self-reporting of alcohol consumption in standard drinks).
Previous authors have found that pre-diagnosis smoking and alcohol consumption combined produce a disproportionately high risk to post-diagnosis survival (for example, HR 3.84, 95% CI 2.02,7.32 for ESCC [17]). We have also found that a 0.1 increase in the probability of concurrent daily smoking and consuming one or more alcoholic drinks per day 1 year prior to diagnosis, adjusted for age and cancer stage at diagnosis, had a relatively high estimated HR of 1.93 (95% CI 1.79,2.07).
We observed that a 0.1 increase in the probability of obese 1 year prior to diagnosis was associated with slightly higher risk of death adjusted HR = 1.04 (95% CI 1.03,1.06) mainly associated with ESCC (HR 1.07 95% CI 1.04,1.10). The association seems small and the literature on obesity is sparse with mixed findings. One review found pre-diagnosis obesity could be associated with higher risks of death in cancer (specifically breast, prostate and colorectal cancers) [27] but a later study reported that pre-diagnostic obesity increased hazard for all cancers except cancers of the upper digestive tract (obese compared to normal weight HR 0.87, 95% CI 0.62,1.22) [28]. More recently a North American study [24] found recalled obesity in early adulthood was associated with lower survival times than normal weight (HR 1.77, 95% CI 1.25, 2.51). The measure of obesity available in this study may not be optimal.
We found that a 0.1 increase in probability of pre-diagnosis physical activity outside of the workplace was associated with improved survival (adjusted HR = 0.82, 95% CI 0.81,0.84). This is consistent with a recent review [29] which found the relative risk of death between the highest versus lowest category of physical activity to be 0.71 (95% CI 0.57,0.89) for esophageal cancer.
Our analyses using estimated probability for health behaviors have produced results which have some face validity. A strength of this example is that the data sets used are large, public domain and well understood. Any interested researcher can reproduce, refine and/or extend these analyses using the same data sets.
Both the data sets and the analysis technique used have some limitations and weaknesses. In relation to the data sets, there are response biases within the BRFSS [30] which the sampling weights may not have fully addressed. Further, the measures of behavior available are limited and are dictated by the existing data base which was designed for other purposes and is not optimized for our research question.
For the model, estimating the probability of a behavior is less accurate than a direct measure of behavior and conveys less information about that behavior: so will have less power for detecting associations. There may be residual confounding from unmeasured variables (such as education, socio-economic status or comorbidities). Finally, omitting interactions with year may have contributed to the apparent lack of difference in outcomes between behavior one, five and 10 years prior to diagnosis.
The rarer the disease, the less feasible it is to conduct either prospective cohort studies or record linkage (retrospective cohort) studies. Retrospective data collection (including case-control studies) are fraught with recall and survivor biases. Exploiting existing data provides cost-effective opportunities for investigations but may require different methodologies.
Analyses of the associations between estimated probability for pre-diagnosis health behavior (based on demographic characteristics) and survival time in esophageal cancer produced results with some face validity. Expressing associations in units of changes in the probability of the health behavior was cumbersome. However, the required data are already available, allowing relatively quick and inexpensive investigations of possible associations between pre-diagnosis behavior and post-diagnosis outcomes for relatively rare diseases. And of course, most diseases are relatively rare.
The SEER Research Data used in this study are made available to the public at no cost, subject to data-use agreement (https://seer.cancer.gov/data/). The BRFSS data sets used in this study are freely available from https://www.cdc.gov/brfss/index.html.
AUC:
Area under the curve
BRFSS:
Behavioral Risk Factor Surveillance System
EAC:
Esophageal adenocarcinoma
ESCC:
Esophageal squamous cell carcinoma
HR:
Hazard ratio
ROC:
Receiver operating characteristic curve
SEER:
Surveillance, Epidemiology, and End Results Program
Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2018;68(6):394–424.
Islami F, Goding Sauer A, Miller KD, Siegel RL, Fedewa SA, Jacobs EJ, et al. Proportion and number of cancer cases and deaths attributable to potentially modifiable risk factors in the United States. CA Cancer J Clin. 2018;68(1):31–54.
Castro C, Peleteiro B, Lunet N. Modifiable factors and esophageal cancer: a systematic review of published meta-analyses. J Gasteroenterol. 2018;53(1):37–51.
Fahey PP, Mallitt K-A, Astell-Burt T, Stone G, Whiteman DC. Impact of pre-diagnosis behavior on risk of death from esophageal cancer: a systematic review and meta-analysis. Cancer Causes Control. 2015;26(10):1365–73.
Toohey K, Pumpa K, Cooke J, Semple S. Do activity patterns and body weight change after a cancer diagnosis? A retrospective cohort study. Int J Health Sci Res. 2016;6(10):110–7.
Demark-Wahnefried W, Aziz NM, Rowland JH, Pinto BM. Riding the crest of the teachable moment: promoting long-term health after the diagnosis of cancer. J Clin Oncol. 2005;23(24):5814.
Rock CL, Doyle C, Demark-Wahnefried W, Meyerhardt J, Courneya KS, Schwartz AL, et al. Nutrition and physical activity guidelines for cancer survivors. CA Cancer J Clin. 2012;62(4):242–74.
Siegel RL, Miller KD, Jemal A. Cancer statistics, 2019. CA Cancer J Clin. 2019;69(1):7–34.
Smithers BM, Fahey PP, Corish T, Gotley DC, Falk GL, Smith GS, et al. Symptoms, investigations and management of patients with cancer of the oesophagus and gastro-oesophageal junction in Australia. Med J Aust. 2010;193(10):572–7.
Morris LJ, D'Este C, Sargent-Cox K, Anstey KJ. Concurrent lifestyle risk factors: clusters and determinants in an Australian sample. Prev Med. 2016;84:1–5.
Surveillance, Epidemiology, and End Results (SEER) Program. SEER*Stat Database: Mortality - All COD, Aggregated With State, Total U.S. (1969-2016). National Cancer Institute, DCCPS, Surveillance Research Program, Bethesda.
Centers for Disease Control and Prevention (CDA). Behavioral Risk Factor Surveillance System Survey Data. Atlanta: U.S. Department of Health and Human Servies, Centers for Disease Control and Prevention; 2001–2014.
Jamal A. Current cigarette smoking among adults—United States, 2005–2015. MMWR Morb Mortal Wkly Rep. 2016;65:1205.
Gilman SE, Breslau J, Conron KJ, Koenen KC, Subramanian S, Zaslavsky A. Education and race-ethnicity differences in the lifetime risk of alcohol dependence. J Epidemiol Community Health. 2008;62(3):224–30.
Lauder W, Mummery K, Jones M, Caperchione C. A comparison of health behaviours in lonely and non-lonely populations. Psychol Health Med. 2006;11(2):233–45.
Steevens J, Schouten LJ, Goldbohm RA, van den Brandt PA. Alcohol consumption, cigarette smoking and risk of subtypes of oesophageal and gastric cancer: a prospective cohort study. Gut. 2010;59(01):39–48.
Thrift AP, Nagle CM, Fahey PP, Russell A, Smithers BM, Watson DI, et al. The influence of prediagnostic demographic and lifestyle factors on esophageal squamous cell carcinoma survival. Int J Cancer. 2012;131(5):E759–E68.
Thrift AP, Nagle CM, Fahey PP, Smithers BM, Watson DI, Whiteman DC. Predictors of survival among patients diagnosed with adenocarcinoma of the esophagus and gastroesophageal junction. Cancer Causes Control. 2012;23(4):555–64.
Njei B, McCarty TR, Birk JW. Trends in esophageal cancer survival in United States adults from 1973 to 2009: a SEER database analysis. J Gastroenterol Hepatol. 2016;31(6):1141–6.
Greene FL, Page DL, Fleming ID, Fritz AG, Balch CM, Haller DG, et al. AJCC Cancer Staging Manual. 6th ed. Berlin: Springer-Verlag; 2003.
Kuang J-j, Z-m J, Y-x C, W-p Y, Yang Q, Wang H-z, et al. Smoking exposure and survival of patients with esophagus cancer: a systematic review and meta-analysis. Gastroenterol Res Pract. 2016;2016:1.
Ma Q, Liu W, Jia R, Long H, Zhang L, Lin P, et al. Alcohol and survival in ESCC: Prediagnosis alcohol consumption and postoperative survival in lymph node-negative esophageal carcinoma patients. Oncotarget. 2016;7(25):38857.
Sun P, Zhang F, Chen C, Ren C, Bi X-W, Yang H, et al. Prognostic impact of body mass index stratified by smoking status in patients with esophageal squamous cell carcinoma. Onco Targets Ther. 2016;9:6389.
Spreafico A, Coate L, Zhai R, Xu W, Chen Z-F, Chen Z, et al. Early adulthood body mass index, cumulative smoking, and esophageal adenocarcinoma survival. Cancer Epidemiol. 2017;47:28–34.
Okada E, Ukawa S, Nakamura K, Hirata M, Nagai A, Matsuda K, et al. Demographic and lifestyle factors and survival among patients with esophageal and gastric cancer: The Biobank Japan Project. J Epidemiol. 2017;27(Supplement_III):S29–35.
Dandara C, Robertson B, Dzobo K, Moodley L, Parker MI. Patient and tumour characteristics as prognostic markers for oesophageal cancer: a retrospective analysis of a cohort of patients at Groote Schuur hospital. Eur J Cardiothorac Surg. 2015;49(2):629–34.
Parekh N, Chandran U, Bandera EV. Obesity in cancer survival. Annu Rev Nutr. 2012;32:311–42.
Reichle K, Peter RS, Concin H, Nagel G. Associations of pre-diagnostic body mass index with overall and cancer-specific mortality in a large Austrian cohort. Cancer Causes Control. 2015;26(11):1643–52.
Lynch BM, Leitzmann MF. An evaluation of the evidence relating to physical inactivity, sedentary behavior, and cancer incidence and mortality. Curr Epidemiol Rep. 2017;4(3):221–31.
Schneider KL, Clark MA, Rakowski W, Lapane KL. Evaluating the impact of non-response bias in the behavioral risk factor Surveillance system (BRFSS). J Epidemiol Community Health. 2012;66(4):290–5.
School of Health Sciences, Western Sydney University, Locked Bag 1797, Penrith, NSW, 2751, Australia
Paul P. Fahey
Translational Health Research Institute, Western Sydney University, Locked Bag 1797, Penrith, NSW, 2751, Australia
Paul P. Fahey & Andrew Page
School of Computer, Data and Mathematical Sciences, Western Sydney University, Locked Bag 1797, Penrith, NSW, 2751, Australia
Glenn Stone
Population Wellbeing and Environment Research Lab (PowerLab), School of Health and Society, Faculty of Social Sciences, University of Wollongong, Wollongong, NSW, 2522, Australia
Thomas Astell-Burt
PF conducted all analyses and writing. AP, GS and TA-B provided regular and substantial input in the conception, methods of analysis and interpretation of results, and reviewed and improved a number of drafts of this paper. All authors have read and approved the final manuscript.
Correspondence to Paul P. Fahey.
The project was approved by the Western Sydney University Human Research Ethics Committee (H12305). Consent to participate is not applicable.
The authors declare they have no competing interests.
Table S1. Disease Characteristics and Outcomes of Eligible SEER Cancer Registry Esophageal Cancer Cases and BRFSS Health Survey Respondents 2001–2015.
Table S2. Associations Between the Selected Demographic Variables and Health Behaviors in the BRFSS Health Survey Data Set.
Table S3. Relationship Between Adjusted Variables and Survival Time.
Table S4. Association Between Survival Time and 5-Year Pre-Diagnosis Health Behavior.
Table S5. Logistic Regression Models Predicting Health Behaviors from Demographic Variables.
Fahey, P.P., Page, A., Stone, G. et al. Using estimated probability of pre-diagnosis behavior as a predictor of cancer survival time: an example in esophageal cancer. BMC Med Res Methodol 20, 74 (2020). https://doi.org/10.1186/s12874-020-00957-5 | CommonCrawl |
For a diatomic molecule, what is the specific heat per mole at constant pressure/volume?
At high temperatures, the specific heat at constant volume $\text{C}_{v}$ has three degrees of freedom from translation, two from rotation, and two from vibration.
That means $\text{C}_{v}=\frac{7}{2}\text{R}$ by the Equipartition Theorem.
However, I recall the Mayer formula, which states $\text{C}_{p}=\text{C}_{v}+\text{R}$.
The ratio of specific heats for a diatomic molecule is usually $\gamma=\text{C}_{p}/\text{C}_{v}=7/5$.
What then is the specific heat at constant pressure? Isn't this ratio normally $7/5$ for diatomic molecules?
thermodynamics degrees-of-freedom
ShanZhengYang
"At high temperatures, the specific heat at constant volume $C_v$ has three degrees of freedom from rotation, two from translation, and two from vibration." I can't understand this line. $C_v$ is a physical quantity not a dynamical system. So how can it have a degrees of freedom?? You can say the degrees of freedom of an atom or molecule is something but it is wrong if you say the degrees of freedom of some physical quantity(like temperature, specific heat etc.) is something. Degrees of freedom is the number of independent coordinates necessary for specifying the position and configuration in space of a dynamical system.
Now to answer your question, we know that the energy per mole of the system is $\frac{1}{2} fRT$, where $f$ = degrees of freedom of the gas.
$\therefore$ molar heat capacity, $C_v=(\frac{dE}{dT})_v=\frac{d}{dT}(\frac{1}{2}fRT)_v=\frac{1}{2}fR$
Now, $C_p=C_v+R=\frac{1}{2}fR+R=R(1+ \frac{f}{2})$
$\therefore$ $\gamma=1+ \frac{2}{f}$
Now for a diatomic gas:
A diatomic gas has three translational (along the x, y, z axes) and two rotational (about the y and z axes) degrees of freedom, i.e. the total number of degrees of freedom is $5$.
Hence $C_v=\frac{1}{2}fR=\frac{5}{2}R$ and $C_p=R(1+ \frac{f}{2})=R(1+ \frac{5}{2})=\frac{7}{2}R$
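As a quick numerical check (values assumed here, not given in the answer above), taking $f = 5$ and $R \approx 8.314\ \mathrm{J\,mol^{-1}\,K^{-1}}$ gives
$$C_v = \frac{5}{2}R \approx 20.8\ \mathrm{J\,mol^{-1}\,K^{-1}}, \qquad C_p = \frac{7}{2}R \approx 29.1\ \mathrm{J\,mol^{-1}\,K^{-1}}, \qquad \gamma = \frac{C_p}{C_v} = \frac{7}{5} = 1.4,$$
which matches the ratio quoted in the question for diatomic molecules around room temperature.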
Rajesh Sardar
A diatomic molecule will have 7 degrees of freedom at high temperatures. However, the ratio of specific heats that you cited is for diatomic molecules around room temperatures, which have 5 degrees of freedom.
Physika
In particular, when the thermal energy $k_B T$ is smaller than the spacing between the quantum energy levels, the contribution of the vibrational and rotational degrees of freedom will fall. At room temperature, the vibrational contribution of a diatomic molecule is usually frozen out entirely, and so $C_v$ is lower than the high-temperature value.
Furthermore, since a rotation about the bond between the two atoms in a diatomic molecule is not really a rotation, there are actually only 6 independent coordinates for a diatomic molecule: 3 translational, 2 rotational, and 1 vibrational. The vibrational coordinate stores both kinetic and potential energy, so it contributes $R$ (not $R/2$) to $C_v$ at high temperatures. When it freezes out at lower temperatures, only the 5 translational and rotational degrees of freedom remain, and you get $C_v = 5/2R$ and $C_p = C_v + R = 7/2R$.
eyqs
Molecules are quite different from the monatomic gases like helium and argon. With monatomic gases, thermal energy comprises only translational motions. Translational motions are ordinary, whole-body movements in 3D space whereby particles move about and exchange energy in collisions, like rubber balls in a vigorously shaken container. These simple movements in the three dimensions of space mean individual atoms have three translational degrees of freedom. A degree of freedom is any form of energy in which heat transferred into an object can be stored. This can be in translational kinetic energy, rotational kinetic energy, or other forms such as potential energy in vibrational modes. Only three translational degrees of freedom (corresponding to the three independent directions in space) are available for any individual atom, whether it is free, as a monatomic molecule, or bound into a polyatomic molecule.
As to rotation about an atom's axis (again, whether the atom is bound or free), its energy of rotation is proportional to the moment of inertia for the atom, which is extremely small compared to moments of inertia of collections of atoms. This is because almost all of the mass of a single atom is concentrated in its nucleus, which has a radius too small to give a significant moment of inertia. In contrast, the spacing of quantum energy levels for a rotating object is inversely proportional to its moment of inertia, and so this spacing becomes very large for objects with very small moments of inertia. For these reasons, the contribution from rotation of atoms on their axes is essentially zero in monatomic gases, because the energy spacing of the associated quantum levels is too large for significant thermal energy to be stored in rotation of systems with such small moments of inertia. For similar reasons, axial rotation around bonds joining atoms in diatomic gases (or along the linear axis in a linear molecule of any length) can also be neglected as a possible "degree of freedom" as well, since such rotation is similar to rotation of monatomic atoms, and so occurs about an axis with a moment of inertia too small to be able to store significant heat energy
Cobra King
High Power Laser Science and Engineering (1)
Infection Control & Hospital Epidemiology (1)
Influence of laser polarization on collective electron dynamics in ultraintense laser–foil interactions
HEDP and HPL 2016
Bruno Gonzalez-Izquierdo, Ross J. Gray, Martin King, Robbie Wilson, Rachel J. Dance, Haydn Powell, David A. MacLellan, John McCreadie, Nicholas M. H. Butler, Steve Hawkes, James S. Green, Chris D. Murphy, Luca C. Stockhausen, David C. Carroll, Nicola Booth, Graeme G. Scott, Marco Borghesi, David Neely, Paul McKenna
Journal: High Power Laser Science and Engineering / Volume 4 / 2016
Published online by Cambridge University Press: 27 September 2016, e33
The collective response of electrons in an ultrathin foil target irradiated by an ultraintense ( ${\sim}6\times 10^{20}~\text{W}~\text{cm}^{-2}$ ) laser pulse is investigated experimentally and via 3D particle-in-cell simulations. It is shown that if the target is sufficiently thin that the laser induces significant radiation pressure, but not thin enough to become relativistically transparent to the laser light, the resulting relativistic electron beam is elliptical, with the major axis of the ellipse directed along the laser polarization axis. When the target thickness is decreased such that it becomes relativistically transparent early in the interaction with the laser pulse, diffraction of the transmitted laser light occurs through a so called 'relativistic plasma aperture', inducing structure in the spatial-intensity profile of the beam of energetic electrons. It is shown that the electron beam profile can be modified by variation of the target thickness and degree of ellipticity in the laser polarization.
By Brittany L. Anderson-Montoya, Heather R. Bailey, Carryl L. Baldwin, Daphne Bavelier, Jameson D. Beach, Jeffrey S. Bedwell, Kevin B. Bennett, Richard A. Block, Deborah A. Boehm-Davis, Corey J. Bohil, David B. Boles, Avinoam Borowsky, Jessica Bramlett, Allison A. Brennan, J. Christopher Brill, Matthew S. Cain, Meredith Carroll, Roberto Champney, Kait Clark, Nancy J. Cooke, Lori M. Curtindale, Clare Davies, Patricia R. DeLucia, Andrew E. Deptula, Michael B. Dillard, Colin D. Drury, Christopher Edman, James T. Enns, Sara Irina Fabrikant, Victor S. Finomore, Arthur D. Fisk, John M. Flach, Matthew E. Funke, Andre Garcia, Adam Gazzaley, Douglas J. Gillan, Rebecca A. Grier, Simen Hagen, Kelly Hale, Diane F. Halpern, Peter A. Hancock, Deborah L. Harm, Mary Hegarty, Laurie M. Heller, Nicole D. Helton, William S. Helton, Robert R. Hoffman, Jerred Holt, Xiaogang Hu, Richard J. Jagacinski, Keith S. Jones, Astrid M. L. Kappers, Simon Kemp, Robert C. Kennedy, Robert S. Kennedy, Alan Kingstone, Ioana Koglbauer, Norman E. Lane, Robert D. Latzman, Cynthia Laurie-Rose, Patricia Lee, Richard Lowe, Valerie Lugo, Poornima Madhavan, Leonard S. Mark, Gerald Matthews, Jyoti Mishra, Stephen R. Mitroff, Tracy L. Mitzner, Alexander M. Morison, Taylor Murphy, Takamichi Nakamoto, John G. Neuhoff, Karl M. Newell, Tal Oron-Gilad, Raja Parasuraman, Tiffany A. Pempek, Robert W. Proctor, Katie A. Ragsdale, Anil K. Raj, Millard F. Reschke, Evan F. Risko, Matthew Rizzo, Wendy A. Rogers, Jesse Q. Sargent, Mark W. Scerbo, Natasha B. Schwartz, F. Jacob Seagull, Cory-Ann Smarr, L. James Smart, Kay Stanney, James Staszewski, Clayton L. Stephenson, Mary E. Stuart, Breanna E. Studenka, Joel Suss, Leedjia Svec, James L. Szalma, James Tanaka, James Thompson, Wouter M. Bergmann Tiest, Lauren A. Vassiliades, Michael A. Vidulich, Paul Ward, Joel S. Warm, David A. Washburn, Christopher D. Wickens, Scott J. Wood, David D. Woods, Motonori Yamaguchi, Lin Ye, Jeffrey M. Zacks
Edited by Robert R. Hoffman, Peter A. Hancock, University of Central Florida, Mark W. Scerbo, Old Dominion University, Virginia, Raja Parasuraman, George Mason University, Virginia, James L. Szalma, University of Central Florida
Book: The Cambridge Handbook of Applied Perception Research
Published online: 05 July 2015, pp xi-xiv
Predictors of Hospitals with Endemic Community-Associated Methicillin-Resistant Staphylococcus aureus
Courtney R. Murphy, Lyndsey O. Hudson, Brian G. Spratt, Kristen Elkins, Leah Terpstra, Adrijana Gombosev, Christopher Nguyen, Paul Hannah, Richard Alexander, Mark C. Enright, Susan S. Huang
Journal: Infection Control & Hospital Epidemiology / Volume 34 / Issue 6 / June 2013
Objective.
We sought to identify hospital characteristics associated with community-associated methicillin-resistant Staphylococcus aureus (CA-MRSA) carriage among inpatients.
Prospective cohort study.
Orange County, California.
Thirty hospitals in a single county.
Methods.
We collected clinical MRSA isolates from inpatients in 30 of 31 hospitals in Orange County, California, from October 2008 through April 2010. We characterized isolates by spa typing to identify CA-MRSA strains. Using California's mandatory hospitalization data set, we identified hospital-level predictors of CA-MRSA isolation.
CA-MRSA strains represented 1,033 (46%) of 2,246 of MRSA isolates. By hospital, the median percentage of CA-MRSA isolates was 46% (range, 14%–81%). In multivariate models, CA-MRSA isolation was associated with smaller hospitals (odds ratio [OR], 0.97, or 3% decreased odds of CA-MRSA isolation per 1,000 annual admissions; P<.001), hospitals with more Medicaid-insured patients (OR, 1.2; P = .002), and hospitals with more patients with low comorbidity scores (OR, 1.3; P< .001). Results were similar when restricted to isolates from patients with hospital-onset infection.
Among 30 hospitals, CA-MRSA comprised nearly half of MRSA isolates. There was substantial variability in CA-MRSA penetration across hospitals, with more CA-MRSA in smaller hospitals with healthier but socially disadvantaged patient populations. Additional research is needed to determine whether infection control strategies can be successful in targeting CA-MRSA influx.
Parental depression and the challenge of preventing mental illness in children
Paul G. Ramchandani, Susannah E. Murphy
Journal: The British Journal of Psychiatry / Volume 202 / Issue 2 / February 2013
Parental depression is a risk factor for psychiatric problems in children and adolescents. Exciting scientific developments have elucidated potential early mechanisms of intergenerational risk transmission and new models of intervention may help to prevent some childhood problems. However, caution is needed in interpreting such associations as causal and in targeting interventions appropriately.
By Adele Abrahamsen, H. Clark Barrett, William Bechtel, Nick Chater, Andy Clark, Keith Frankish, Aaron B. Hoffman, Ray Jackendoff, Laura A. Libby, William G. Lycan, Gregory L. Murphy, Mike Oaksford, Casey O'Callaghan, Elisabeth Pacherie, Jesse Prinz, William M. Ramsey, Charan Ranganath, Sara J. Shettleworth, Dominic Standage, Neil Stewart, Paul Thagard, Thomas Trappenberg, Barbara Von Eckardt, Ling Wong
Edited by Keith Frankish, The Open University, Milton Keynes, William Ramsey, University of Nevada, Las Vegas
Book: The Cambridge Handbook of Cognitive Science
Print publication: 19 July 2012, pp -
By Antony R. Absalom, Lorenz Breuer, Christoph S. Burkhart , Rowan M. Burnstein, Ian Calder, Jonathan P. Coles, Amanda Cox, Marek Czosnyka, Armagan Dagal, Judith Dinsmore, Derek Duane, Kristin Engelhard, Ari Ercole, Rik Fox, Sabrina G. Galloway, Arnab Ghosh, Arun K. Gupta, Nicholas Hirsch, Robin Howard, Peter Hutchinson, Nicole C. Keong, Martin Köhrmann, Arthur M. Lam, Andrea Lavinio, Brian P. Lemkuil, Luca Longhi , Craig D. McClain , Robert Macfarlane, Basil F. Matta , Stephan A. Mayer, David K. Menon, Andrew W. Michell , Dick Moberg, Paul G. Murphy , Clara Poon, Amit Prakash , Frank Rasulo, Fred Rincon, Stefan Schwab, Martin Smith, Sulpicio G. Soriano, Luzius A. Steiner, Nino Stocchetti , Stephan P. Strebel , Jane Sturgess , Magnus Teig, Tonny Veenith , Christian Werner, Christian Zweifel
Edited by Basil F. Matta, David K. Menon, Martin Smith
Book: Core Topics in Neuroanaesthesia and Neurointensive Care
Print publication: 13 October 2011, pp vii-x
30 - Death and organ donation in neurocritical care
from Section 4 - Neurointensive care
By Paul G. Murphy
Is obsessive–compulsive disorder an anxiety disorder, and what, if any, are spectrum conditions? A family study perspective
O. J. Bienvenu, J. F. Samuels, L. A. Wuyek, K.-Y. Liang, Y. Wang, M. A. Grados, B. A. Cullen, M. A. Riddle, B. D. Greenberg, S. A. Rasmussen, A. J. Fyer, A. Pinto, S. L. Rauch, D. L. Pauls, J. T. McCracken, J. Piacentini, D. L. Murphy, J. A. Knowles, G. Nestadt
Journal: Psychological Medicine / Volume 42 / Issue 1 / January 2012
Published online by Cambridge University Press: 13 May 2011, pp. 1-13
Experts have proposed removing obsessive–compulsive disorder (OCD) from the anxiety disorders section and grouping it with putatively related conditions in DSM-5. The current study uses co-morbidity and familiality data to inform these issues.
Case family data from the OCD Collaborative Genetics Study (382 OCD-affected probands and 974 of their first-degree relatives) were compared with control family data from the Johns Hopkins OCD Family Study (73 non-OCD-affected probands and 233 of their first-degree relatives).
Anxiety disorders (especially agoraphobia and generalized anxiety disorder), cluster C personality disorders (especially obsessive–compulsive and avoidant), tic disorders, somatoform disorders (hypochondriasis and body dysmorphic disorder), grooming disorders (especially trichotillomania and pathological skin picking) and mood disorders (especially unipolar depressive disorders) were more common in case than control probands; however, the prevalences of eating disorders (anorexia and bulimia nervosa), other impulse-control disorders (pathological gambling, pyromania, kleptomania) and substance dependence (alcohol or drug) did not differ between the groups. The same general pattern was evident in relatives of case versus control probands. Results in relatives did not differ markedly when adjusted for demographic variables and proband diagnosis of the same disorder, though the strength of associations was lower when adjusted for OCD in relatives. Nevertheless, several anxiety, depressive and putative OCD-related conditions remained significantly more common in case than control relatives when adjusting for all of these variables simultaneously.
On the basis of co-morbidity and familiality, OCD appears related both to anxiety disorders and to some conditions currently classified in other sections of DSM-IV.
Ceramic Waste Form for Residues from Molten Salt Oxidation of Mixed Wastes
Richard A. Van Konynenburg, Robert W. Hopper, Joseph A. Rard, Frederick J. Ryerson, Douglas L. Phinney, Ian D. Hutcheon, Paul G. Curtis
A ceramic waste form based on Synroc-D is under development for the incorporation of the mineral residues from molten salt oxidation treatment of mixed low-level wastes. Samples containing as many as 32 chemical elements have been fabricated, characterized, and leach-tested. Universal Treatment Standards have been satisfied for all regulated elements except two (lead and vanadium). Efforts are underway to further improve chemical durability.
Chapter 1 - Anaesthesia for patients with pituitary disease
from Section 1 - Perioperative care of patients with endocrine disease
Edited by George M. Hall, St George's Hospital, London, Jennifer M. Hunter, University of Liverpool, Mark S. Cooper, University of Birmingham
Book: Core Topics in Endocrinology in Anaesthesia and Critical Care
Print publication: 01 April 2010, pp 1-13
By Steven Ball, Simon V. Baudouin, Jane K. Beattie, Ann E. Black, Mark S. Cooper, Peter A. Farling, A. B. Johan Groeneveld, George M. Hall, Jennifer M. Hunter, Saheed Khan, Angus McEwan, Philip R. Michael, Brian Mullan, Paul G. Murphy, Grainne Nicholson, Pauline M. O' Neil, Christopher J. R. Parker, Barbara Philips, Charles S. Reilly, Heidi J. Robertshaw, Neville Robinson, Mark E. Seubert, Martin Smith, David J. Vaughan, Nigel R. Webster, Saffron Whitehead
Print publication: 01 April 2010, pp vii-viii
Obsessive–compulsive disorder: subclassification based on co-morbidity
G. Nestadt, C. Z. Di, M. A. Riddle, M. A. Grados, B. D. Greenberg, A. J. Fyer, J. T. McCracken, S. L. Rauch, D. L. Murphy, S. A. Rasmussen, B. Cullen, A. Pinto, J. A. Knowles, J. Piacentini, D. L. Pauls, O. J. Bienvenu, Y. Wang, K. Y. Liang, J. F. Samuels, K. Bandeen Roche
Journal: Psychological Medicine / Volume 39 / Issue 9 / September 2009
Published online by Cambridge University Press: 02 December 2008, pp. 1491-1501
Obsessive–compulsive disorder (OCD) is probably an etiologically heterogeneous condition. Many patients manifest other psychiatric syndromes. This study investigated the relationship between OCD and co-morbid conditions to identify subtypes.
Seven hundred and six individuals with OCD were assessed in the OCD Collaborative Genetics Study (OCGS). Multi-level latent class analysis was conducted based on the presence of eight co-morbid psychiatric conditions [generalized anxiety disorder (GAD), major depression, panic disorder (PD), separation anxiety disorder (SAD), tics, mania, somatization disorders (Som) and grooming disorders (GrD)]. The relationship of the derived classes to specific clinical characteristics was investigated.
Two and three classes of OCD syndromes emerge from the analyses. The two-class solution describes lesser and greater co-morbidity classes and the more descriptive three-class solution is characterized by: (1) an OCD simplex class, in which major depressive disorder (MDD) is the most frequent additional disorder; (2) an OCD co-morbid tic-related class, in which tics are prominent and affective syndromes are considerably rarer; and (3) an OCD co-morbid affective-related class in which PD and affective syndromes are highly represented. The OCD co-morbid tic-related class is predominantly male and characterized by high conscientiousness. The OCD co-morbid affective-related class is predominantly female, has a young age at onset, obsessive–compulsive personality disorder (OCPD) features, high scores on the 'taboo' factor of OCD symptoms, and low conscientiousness.
OCD can be classified into three classes based on co-morbidity. Membership within a class is differentially associated with other clinical characteristics. These classes, if replicated, should have important implications for research and clinical endeavors.
The Effects of Twins, Parity and Age at First Birth on Cancer Risk in Swedish Women
Rachel E. Neale, Steven Darlington, Michael F. G. Murphy, Paul B. S. Silcocks, David M. Purdie, Mats Talbäck
Journal: Twin Research and Human Genetics / Volume 8 / Issue 2 / 01 April 2005
Published online by Cambridge University Press: 21 February 2012, pp. 156-162
Print publication: 01 April 2005
The effect of reproductive history on the risk of cervical, colorectal and thyroid cancers and melanoma has been explored but the results to date are inconsistent. We aimed to examine in a record-linkage cohort study the risk of developing these cancers, as well as breast, ovarian and endometrial cancers, among mothers who had given birth to twins compared with those who had only singleton pregnancies. Women who delivered a baby in Sweden between 1961 and 1996 and who were 15 years or younger in 1961 were selected from the Swedish civil birth register and linked with the Swedish cancer registry. We used Poisson regression to assess associations between reproductive factors and cancer. Twinning was associated with reduced risks of breast, colorectal, ovarian and uterine cancers, although no relative risks were statistically significant. The delivery of twins did not increase the risk of any cancers studied. Increasing numbers of maternities were associated with significantly reduced risks of all tumors except thyroid cancer. We found positive associations between a later age at first birth and breast cancer and melanoma, while there were inverse associations with cervix, ovarian, uterine and colorectal cancers. These findings lend weight to the hypothesis that hormonal factors influence the etiology of colorectal cancer in women, but argue against any strong effect of hormones on the development of melanoma or tumors of the thyroid.
Effects of Bacterial Organic Selenium, Selenium Yeast and Sodium Selenite on Antioxidant Enzymes Activity, Serum Biochemical Parameters, and Selenium Concentration in Lohman Brown-Classic Hens
A. I. Muhammad, A. M. Dalia, T. C. Loh, H. Akit, A. A. Samsudin
published 30 Nov, 2021
Read the published version in Veterinary Research Communications →
posted 29 Sep, 2021
This study compares the effects of sodium selenite, selenium yeast, and enriched bacterial organic selenium protein on antioxidant enzyme activity, serum biochemical profiles, and egg yolk, serum, and tissue selenium concentration in laying hens. In a 112-d experiment, 144 Lohman Brown Classic hens, 23 wks old, were divided into four equal groups, each with six replicates. They were assigned to 4 treatments: 1) a basal diet (Con); 2) Con plus 0.3 mg/kg feed sodium selenite (SS); 3) Con plus 0.3 mg/kg feed Se-yeast (SY); 4) Con plus 0.3 mg/kg feed bacterial enriched organic Se protein (ADS18) from Stenotrophomonas maltophilia bacteria. On d 116, hens were euthanized (slaughtered) to obtain blood (serum), liver, and breast tissue to measure antioxidant enzyme activity, biochemical profiles, and selenium concentration. The results show that antioxidant enzyme activity increased in hens fed bacterial organic Se (ADS18), with a significant (P < 0.05) increase in serum GSH-Px, SOD, and CAT activity compared to the other treatment groups. ADS18 and SY supplementation also increased (P < 0.05) hepatic TAC, GSH-Px, and CAT activity, unlike the SS and Con groups. Similarly, dietary Se treatment significantly (P < 0.05) reduced total cholesterol and serum triglyceride concentrations compared to the Con group. At 16 and 18 weeks, selenium concentration in the egg yolks of hens supplemented with dietary Se was higher (P < 0.05) than in Con, with similar patterns in breast tissue and serum. Supplementation with bacterial organic Se (ADS18) improved antioxidant enzyme activity, decreased total serum cholesterol and serum lipids, and increased Se deposition in egg yolk, tissue, and serum. Hence, organic Se may be considered a viable source of Se in laying hens.
Veterinary Epidemiology
Antioxidant enzymes
Bacterial organic selenium
Serum biochemical parameters
Selenium concentrations
In the natural sciences, the term "antioxidant" is increasingly common as it gains attention because of its health advantages (Huang et al. 2005). Synthetic or natural substances applied to products to retard or avert their degradation by the action of oxygen in the air are a more biologically applicable concept of antioxidants (Sugiharto 2019; Cimrin et al. 2020). Dietary antioxidants are substances in food that, as described by the Institute of Medicine (Meyers 2000), significantly scavenge and reduce or inhibit the unfavorable effects of reactive species (oxidants), like oxygen or nitrogen species (ROS or RNS), prevent certain diseases, and promote normal physiological functions in living beings (Salehi et al. 2018; Aziz et al. 2019). Dietary antioxidants primarily consist of free-radical and reactive oxygen species scavengers, metal chelators, enzyme inhibitors, and antioxidant enzyme cofactors (Huang et al. 2005). In a biological system, oxidation is promoted mainly by a host of redox enzymes. Nonenzymatic lipid oxidation, however, can still occur and ultimately result in cell oxidative stress (Kurutas 2016). Biological antioxidants, therefore, include enzymatic antioxidants (like glutathione peroxidase, catalase, and superoxide dismutase) and nonenzymatic antioxidants such as vitamin E (Aksoz et al. 2020; Gouta et al. 2021) and vitamin C (Chiaiese et al. 2019; Giuffrè 2019; Saracila et al. 2020), oxidative enzyme inhibitors (cyclooxygenase inhibitors such as aspirin and ibuprofen), antioxidant enzyme cofactors (Se, coenzyme Q10), metal chelators (EDTA), and scavengers of reactive oxygen/nitrogen species (ROS/RNS) (Huang et al. 2005; Kurutas 2016). Biological antioxidants, according to Halliwell (1990), are substances that "protect, prevent, or minimize the level of oxidative damage of biomolecules when present at minute concentrations compared to the biomolecules they protect." To produce high-quality livestock products, it is therefore important to use dietary antioxidants, as they are capable of reducing lipid peroxidation in the serum lipid profile, increasing antioxidant status and its concentration in animal products, and providing benefits to both animal and human health (Surai and Dvorska 2002).
Dietary selenium is important in animal nutrition especially the organic form which is highly available in animal tissues compared to inorganic sources for different physiological functions. Selenium required for various physiological functions in animals is a cofactor of selenoproteins (e.g., glutathione peroxidase) that reduces peroxides to alcohol and water (Dalia et al., 2017). Selenium is commonly supplemented in animal nutrition in an inorganic salt or organic form. Because of its biochemical and physiological functions in animals, organic Se is strongly retained in animal tissues relative to inorganic Se sources (Surai and Dvorska 2002; Canoǧullari et al. 2010). Different sources of Se in animal tissues have different metabolic effects, according to studies published in the literature (Yuan et al. 2012; Boiago et al. 2014). Dietary supplementation of an antioxidant such as selenium can have a positive effect on the blood (biochemical or haematological) profile of animals. Blood is one of the most accurate markers of an animal's health status, and it can be affected by several factors including nutrition, disease, animal status, environmental, and climatic (Shi et al. 2018).
Blood biochemical index, in particular, may provide details about the animal's nutritional conditions (Mu et al. 2019) and health status (Reda et al. 2020), with aspartate aminotransferase (AST), alkaline phosphatase (ALP), alanine aminotransferase (ALT) and bilirubin, creatinine, uric acid, gamma-glutamyl transferase (GGT) as markers of liver and kidney oxidative injury (Abdel-Daim et al. 2020). Lower concentrations of these biochemical indices are related to enhanced antioxidant status and, as a result, the safer condition of kidneys and liver. The administration of biologically or chemically synthesized nano-Se to growing rabbits and laying hens decreased serum urea, triglycerides (TG), glutamyl transferase (GGT), albumin (ALB), and glutamate pyruvate transaminase (GPT), and enhanced the antioxidant markers (Sheiha et al. 2020; Zhou et al. 2021). Similarly, plasma creatinine, the activity of AST enzyme, and plasma total cholesterol, and plasma LDL concentrations all decreased in organic Se supplemented rabbits (Ayyat et al. 2018; Abdel-Azeem et al. 2019). Furthermore, when breeder quails were fed 0.5 to 2.5 g/kg DL-methionine, lipid profile markers, and lactate dehydrogenase (LDH) activity were reduced (Reda et al. 2020). Broiler birds fed 0.1 to 0.5 mg/kg of feed nano-Se reduces blood ALB concentrations (Ahmadi et al. 2018), influenced oxidation resistance (Chen et al. 2013). Se supplementation raises total protein, albumin, total cholesterol, and TG in laying hens during the hot season while decreasing liver enzymes (ALT) and thyroid hormone (thyroxin) (Abd El-Hack et al. 2017).
The functions of Se are implemented exclusively by Se-containing proteins (Mangiapane et al. 2014; Wrobel et al. 2016). Selenocysteine is the major form through which, after incorporation into selenoproteins, Se exerts its biological function in the body (Mahima et al. 2014). As a result, selenoprotein concentrations and selenoprotein mRNA yield are influenced by Se supply. In chicken, more than 25 different selenoproteins have been identified, all of which play important roles in catalytic enzyme activity. The synthesis of selenoproteins is influenced by nutritional levels of Se supplementation in the diet (Zhang et al. 2013; Dalia et al. 2017). A large number of studies have found a connection between dietary Se supplementation and selenoprotein expression in animal tissues. Furthermore, laying hens fed a Se-supplemented diet had a significant elevation in antioxidant capacity (Han et al. 2017). Many studies have shown that supplementing the diet of laying hens with Se improves their antioxidant capacity (glutathione peroxidase, superoxide dismutase, and catalase) (Han et al. 2017; Sun et al. 2020). Supplementation with Se improves immune and antioxidant status (Sun et al. 2020), increases the content of selenium in eggs (Liu et al. 2020b), and prevents clinical problems due to deficiency of Se (Nabi et al. 2020). Mineral utilization is primarily determined by bioaccumulation and retention (Li et al. 2018). The quantity and form of ingested Se determine how it is absorbed and stored in the body (Payne and Southern 2005).
Dietary Se supplementation increased Se deposition and concentration in eggs (Pan et al. 2011; Meng et al. 2019; Liu et al. 2020b). Selenium-fortified eggs can therefore be produced by supplementing the diet of hens with selenium. Compared to the widely used inorganic sodium selenite, organic Se has been reported to increases Se deposition in eggs and improve the quality of eggs (Liu et al. 2020a; Nabi et al. 2020). The efficacy of the organic form of selenium was due to its greater utilization and absorption compared to other selenite sources (Utterback et al. 2005; Delezie et al. 2014; Han et al. 2017). Liu et al. (2020c) reported that the addition of 0.5 mg/kg SY increases egg yolk Se compare to 0.3 and 0.5 mg/kg SS and 0.3 mg/kg SY. Furthermore, hens fed nano-Se and Se-yeast had a significant Se deposition in their egg, liver, and kidney (Meng et al. 2020).
To produce organic Se, different strains of microorganisms can be used in the microbial reduction pathway. Stenotrophomonas maltophilia (ADS18) has been linked to organic Se-containing proteins that can be used as Se sources in poultry (Dalia et al., 2017; Dalia et al. 2018). In laying hens, dietary Se (yeast or bacteria) improved antioxidant capacity, increased serum biochemical markers, and boosted Se deposition efficiency (meat, eggs, and blood) (Mohapatra et al., 2014; Han et al., 2017; Nasiri et al., 2019; Wang et al., 2019; Lu et al., 2020; Timur and Utlu 2020; Muhammad et al., 2021). Although Se may help the antioxidant system, there is little scientific evidence on the effect of this new organic Se source on layers. No published research on the effect of bacterial organic Se from the ADS18 source (Dalia et al. 2017), on antioxidant enzyme activity, blood biochemical parameters, and selenium concentration in layers has been recorded. In this study, the antioxidant enzyme activity, blood biochemical parameters, and selenium concentration in laying hens were examined utilizing bacterial organic Se as an alternative organic Se source with other selenium sources.
This study was reviewed and approved by the Institutional Animal Care and Use Committee of University Putra Malaysia (UPM/IACUC/AUP-R063/2018). All procedures were performed under the guidelines and regulations for the administration affairs concerning experimental animals as stipulated.
Animals Experimental Design And Diets
A total of 144 23-wk-old Lohman Brown Classic hens (1702 ± 183 g) were divided into four equal groups of 36 hens each, reared in a ventilated henhouse in two-tier stainless-steel cages with one hen per cage at Ladang 15, Universiti Putra Malaysia. Each cage measured 30 cm in width, 50 cm in depth, and 40 cm in height. A basal diet for laying hens was prepared according to NRC (National Research Council) (NRC 1994) guidelines (Table 1), except for Se, which was supplemented at 0.3 mg/kg feed according to Surai (2006). The four diets were designated as control (basal diet), and basal diet + 0.3 mg/kg feed sodium selenite (SS), Se-yeast (SY), or bacterial organic Se (ADS18), respectively. The production and extraction of the bacterial Se content are described by Dalia et al. (2017). The experimental diets were formulated with FeedLIVE software and adhered to the nutrient requirements of the Lohman management guide (2018), with feed limited to 120 grams per hen per day. The hens were fed once a day (07:00–08:00) and had ad libitum access to water and treatment diets at an ambient temperature of approximately 30 ± 5°C during the experimental phase. A sixteen-hour light and eight-hour dark lighting schedule was followed, with the light period beginning at 17:00 local time, as per the Lohman Brown-Classic (2018) guide. The feeding trial lasted for 16 weeks, with a four-week adaptation period.
Ingredient composition and analyzed nutrient concentration of the basal diet (on a dry matter basis).
Soybean Meal 48%
Wheat Pollard
DL-Methionine
Dicalcium Phosphate (18%)
Mineral Mixa
Vitamin Mixb
Antioxidantc
Toxin Binderd
Analysed composition
Metabolizable energy Kcal/Kg
Crude protein (%)
Crude fat (%)
Fibre (%)
Total phosphorus (%)
Available phosphorus for poultry (%)
a Mineral premix supplied (per kg of diet): Cu2+ 15 mg, Zn2+ 120 mg, Fe2+ 120 mg, Mn2+ 150 mg, iodine 1.5 mg, and cobalt 0.4 mg.
b Vitamin premix supplied (per kg of diet): Vitamin A (retinyl acetate) 10.32 mg, vitamin E (DL-tocopherol acetate) 90 mg, cholecalciferol 0.250 mg, vitamin K 6 mg, cobalamin 0.07 mg, thiamine 7 mg, riboflavin 22 mg, niacin 120 mg, folic acid 3 mg, biotin 0.04 mg, pantothenic acid 35 mg and pyridoxine 12 mg.
c Antioxidant contains butylated hydroxyanisole (BHA).
d Toxin binder contains natural hydrated sodium calcium aluminum silicates to reduce the exposure of feed to mycotoxins.
e The Se content was measured using ICP-MS.
f Feed live International Software (Nonthaburi, Thailand) was used to formulate the diets.
Slaughtering, Blood, And Tissue Collection
To collect blood and tissue samples, twenty-four hens were randomly selected from the four treatments (one from each replicate) and slaughtered according to Halal procedures, as defined in the Malaysian Standard (Malaysia 2009). Blood samples (10 ml) were taken from each hen's jugular vein and collected in BD Vacutainer® Plus Plastic Serum Tubes (Becton Dickinson, New Jersey, USA) during slaughtering. Blood samples were centrifuged at 3,000 x g at 4°C for 10 min, and the resultant supernatant (serum) was separated and stored at − 80°C for biochemical serum and antioxidant capacity analysis (Humam et al. 2021). For the antioxidant activity assay, a portion of liver tissue was sliced and snapped frozen in liquid nitrogen before being stored at − 80°C. A portion of the breast muscle sample was snapped frozen in liquid nitrogen and stored at − 80°C for further assays.
Determination Of Serum And Tissue Antioxidant Enzymes Activity
The serum and liver were tested for total antioxidant capacity (T-AOC), glutathione peroxidase activity (GSH-Px), total superoxide dismutase activity (T-SOD), and catalase activity (CAT). Liver tissue was homogenized in phosphate-buffered saline (PBS) on ice, then centrifuged at 3,000 x g for 10 min at 4°C to obtain the supernatant for enzyme assays (Dalia et al. 2017; Humam et al. 2021).
Total antioxidant capacity (T-AOC) was measured from serum and liver using the QuantiChrom™ Antioxidant Assay Kit (DTAC-100, BioAssay Systems, Hayward, USA), following the manufacturer's instructions. The assay measures the total antioxidant capacity of the sample's antioxidant in which Cu2+ is reduced by antioxidant to Cu+, a colored complex with a dye is formed with a resulted Cu+, and the intensity of the color was proportional with the total antioxidant capacity present in the sample. The detection range of the kit was from 1.5 to 1000 µM Trolox equivalents. Briefly, 5 µL of the standard with 245 µL distilled water (1 mM Trolox) was prepared, and 20 µL each of standards and samples was transferred into 96-well plate following the serial concentration and separate wells for samples, respectively. Working reagent for sample and standard was mixed, for each assay well, containing 100 µL Reagent A and 8 µL of Reagent B. 100 µL working reagent was added to all assay wells, gently tap to mix and incubate at room temperature for 10 min. Finally, the absorbance of the TAC was read at 570 nm using a microplate reader (Multiskan Go, Thermo Scientific, Waltham, Massachusetts, USA) (Dalia et al., 2017; Humam et al. 2021). The standard curve was used to calculate the TAC activity in the serum and liver samples.
Glutathione peroxidase (GPx) activity was measured in serum and liver samples using the EnzyChrom™ Glutathione Peroxidase Assay Kit (EGPX-100, BioAssay Systems, Hayward, USA) following the manufacturer's instructions. The assay directly measures NADPH consumption in the enzyme-coupled reactions. The measured reduction in optical density at 340 nm is directly proportional to the enzyme activity present in the sample. The detection range of the kit was 40 to 800 U/L GPx. Briefly, 10 µL of each sample plus 90 µL of working reagent (80 µL assay buffer, 5 µL glutathione, 3 µL NADPH (35 mM), and 2 µL GR enzyme) were transferred into the wells of a 96-well microplate and the plate was gently tapped to mix. Immediately, 100 µL of the substrate solution was added to each sample and control well. The optical density of the samples and standards was measured immediately at time zero (OD0) and at 4 min (OD4) using a microplate reader (Multiskan Go, Thermo Scientific, Waltham, Massachusetts, USA) at 340 nm (Dalia et al. 2017; Humam et al. 2021). The standard curve was plotted using NADPH standards and used to calculate the GPx activity in the serum and liver samples.
$$\text{GSH-Px activity (U/L)} = \frac{\Delta OD_{S} - \Delta OD_{B}}{\text{Slope}\ (\text{mM}^{-1}) \times 4\ (\text{min})} \times 1000 \times n$$
where;
ΔODS = (OD0 – OD4) for the samples.
ΔODB = (OD0 – OD4) for the background control.
The factor 1000 converts mmoles to µmoles.
n is the sample dilution factor.
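As a minimal illustration of how this equation is applied, the sketch below computes GSH-Px activity from a pair of absorbance readings; the OD values, slope, and dilution factor are invented for the example and are not measurements from the study.

def gsh_px_activity(od0_sample, od4_sample, od0_blank, od4_blank, slope_mM, n=1):
    # Delta OD over the 4-min read for the sample and for the background control
    delta_od_s = od0_sample - od4_sample
    delta_od_b = od0_blank - od4_blank
    # Kit equation: ((dOD_S - dOD_B) / (slope * 4 min)) * 1000 * dilution factor, in U/L
    return (delta_od_s - delta_od_b) / (slope_mM * 4.0) * 1000.0 * n

# Hypothetical readings: sample A340 falls from 1.20 to 0.95; blank from 1.20 to 1.18
print(gsh_px_activity(1.20, 0.95, 1.20, 1.18, slope_mM=0.9, n=1))  # ~63.9 U/L, within the kit's 40-800 U/L range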
Superoxide dismutase (SOD) activity was performed for serum and liver using EnzyChrom™ Superoxide Dismutase Assay Kit (ESOD-100, BioAssay Systems, Hayward, USA) following the manufacturer's instructions. The assay relies on the addition of xanthine oxidase (XO) to the samples as a source of superoxide (O2−). The O2− forms a colored product as it interacts with a specific (WST-1) dye. The sample's SOD activity, which acts as a superoxide scavenger, scavenges the O2−, thus lowering the color intensity. The kit has a detection range of 0.05 to 3 U/mL SOD. A microplate reader (Multiskan Go, Thermo Scientific, Waltham, Massachusetts, USA) set to 440 nm was used to calculate the color intensity indicating SOD activity in a sample. The concentration of SOD in the samples was calculated using the standard curve.
Catalase (CAT) activity was measured from serum and liver using the EnzyChrom™ Catalase Assay Kit (ECAT-100, BioAssay Systems, Hayward, USA), following the manufacturer's instructions. The assay measures the degradation of H2O2 using a redox dye, and the detection range of the kit was 0.2 to 5 U/L CAT. As described, 10 µL of the sample, positive control, and assay buffer as blank plus 90 µL of substrate buffer (50 µM) were loaded into 96 micro-plate wells, gently shaken, and incubated at room temperature for 30 min. While waiting for incubation time, the standard curve was prepared by mixing 40 µL of the 4.8 mM H2O2 reagent with 440 µL of distilled water in the serial concentration, then 10 µL of the standard solution with 90 µL of assay buffer were placed into standard wells. At the end of incubation time, 100 µL of detection reagent was mixed in each well and incubated for 10 min at room temperature. A microplate reader (Multiskan Go, Thermo Scientific, Waltham, Massachusetts, USA) was used to read the optical density of CAT at 570 nm (Dalia et al., 2017; Humam et al. 2021). The CAT activity in the serum and liver samples was calculated using the standard curve.
Blood Biochemical Assay
The biochemical parameters measured were activities of enzymes (AST, ALP, ALT, and GGT), and concentration of metabolites creatinine, cholesterol, triglycerides, LDL, high-density lipoprotein (HDL), very low-density lipoprotein (VLDL) and lactate dehydrogenase (LDH), total protein, total bilirubin, ALB, globulin, and urea. All constituents were measured using an Auto-blood biochemical analyzer (Automatic Analyser 902, Hitachi, Germany), except serum globulin and albumin/globulin ratio (A/G), which were extrapolated as follows: G = total protein – albumin, A/G = albumin/globulin (Abdel-Azeem et al. 2019). All samples were run in replicates.
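Since globulin and the A/G ratio are derived rather than measured, they reduce to two lines of arithmetic; the sketch below uses made-up values for total protein and albumin purely for illustration.

def derived_protein_fractions(total_protein, albumin):
    # Globulin is the protein not accounted for by albumin; A/G is their ratio
    globulin = total_protein - albumin
    return globulin, albumin / globulin

# Hypothetical inputs in g/L
globulin, ag_ratio = derived_protein_fractions(total_protein=51.1, albumin=16.6)
print(globulin, round(ag_ratio, 3))  # 34.5 g/L and ~0.481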
Determination of egg yolk, breast tissue, and serum selenium concentration
Eggs were collected, broken, and fractionated into albumen and yolk. They were capped in a container, frozen (> 24 h) at -80°C, lyophilized at -50°C for 72 h (Labconco FreeZone plus 6, USA), and stored at -20°C until further analysis (Lipiec et al. 2010; Tufarelli et al. 2016). The lyophilized samples were ground to powder using a porcelain mortar and pestle and stored at 4°C. All chemicals/reagents were of analytical grade, purchased from Sigma-Aldrich (Saint-Quentin Fallavier, France), and were used throughout the analysis unless stated differently, with the dilutions prepared daily. Total selenium was determined in egg yolk as described by Lipiec et al. (2010) and in tissues as described by Jagtap and Maher (2016), with modifications. Briefly, approximately 0.5 mL serum and 0.5 g of lyophilized egg yolk and breast tissue each were weighed into 10 ml Teflon digestion vessels (A. I. Scientific, Australia), and 5 ml of concentrated HNO3 (Sigma-Aldrich, USA) and 3 mL H2O2 (Emsure® ISO, Merck) were added. Digestion was carried out in a DigiPrep block (SCP Science, Courtaboeuf, France), heating the samples for 4 h at 100°C, starting at a lower temperature (65°C) for approximately 30 min and rising gradually. The vessel was allowed to cool for 60 min at room temperature (25°C) after digestion, and the digest was then diluted with distilled or deionized water in a polyethylene vial to a final volume of 10 ml. Total Se concentration in the diluted digest was determined with a Perkin Elmer DRC-e ICP-MS, with calibrations performed every 20 samples. The quantification (external calibration) was carried out by preparing standards at 0.0, 0.2, 0.4, 0.6, 0.8 and 1.0 mg/L. The ICP-MS collision cell was pressurized with hydrogen. All the samples were injected via a Micromist nebulizer fitted with a Scott double-pass spray chamber for determination (Lipiec et al. 2010).
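The external calibration described here amounts to fitting a straight line to the instrument response of the Se standards and inverting it for the digested samples. The sketch below illustrates that step with invented ICP-MS intensities; the conversion factor assumes a 0.5 g sample made up to 10 mL, as in the text, and none of this reproduces the vendor software's actual routine.

import numpy as np

std_conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])                # Se standards, mg/L
std_signal = np.array([150, 15200, 30400, 45100, 60300, 75200])    # hypothetical counts per second

slope, intercept = np.polyfit(std_conc, std_signal, 1)             # linear calibration curve

def sample_se_ug_per_g(signal, sample_mass_g=0.5, final_volume_ml=10.0):
    digest_mg_per_l = (signal - intercept) / slope                  # back-calculate digest concentration
    # mg/L in the digest -> total µg in the 10 mL digest -> µg per g of original sample
    return digest_mg_per_l * (final_volume_ml / 1000.0) * 1000.0 / sample_mass_g

print(round(sample_se_ug_per_g(8000), 3))                           # illustrative sample reading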
All data analyses were performed using the Statistical Analysis System (SAS) 9.4 Version (SAS Institute, Cary, North Carolina, USA) in a completely randomized design. The data were analyzed by the General Linear Model (GLM) procedure of SAS and Duncan Multiple Range Test was used to separate means. The significant differences between the treatments each with six replicates were established at a P-value < 0.05 level. In all figures and tables, the results were presented as mean ± SEM.
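For readers without SAS, the per-parameter analysis (a completely randomized design with four treatments of six replicates each) can be sketched as a one-way ANOVA in Python; Tukey's HSD is used below as a stand-in for Duncan's multiple range test, which the common Python packages do not provide, and the response values are invented.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical serum GSH-Px activity (U/L), six replicates per treatment
con   = np.array([58, 61, 55, 60, 57, 59])
ss    = np.array([63, 66, 61, 64, 62, 65])
sy    = np.array([72, 75, 70, 73, 71, 74])
ads18 = np.array([78, 80, 76, 79, 77, 81])

f_stat, p_value = stats.f_oneway(con, ss, sy, ads18)     # overall treatment effect
print(f"F = {f_stat:.2f}, P = {p_value:.4g}")

values = np.concatenate([con, ss, sy, ads18])
groups = ["Con"] * 6 + ["SS"] * 6 + ["SY"] * 6 + ["ADS18"] * 6
print(pairwise_tukeyhsd(values, groups, alpha=0.05))     # pairwise separation of means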
Antioxidant enzyme activity in serum and liver
The effects of the different dietary Se supplements on the antioxidant enzyme activities TAC, GSH-Px, SOD, and CAT in laying hen serum and liver are summarized in Fig. 1. The Se-yeast group had significantly (P < 0.05) higher serum TAC activity than did the ADS18, SS, and basal diet groups. Furthermore, supplementation with the bacterial organic Se ADS18 resulted in a significant (P < 0.05) increase in serum SOD and CAT activity when compared to the other groups, while GSH-Px activity was similar to that of the Se-yeast group. GSH-Px activity in SS and non-supplemented hens was not significantly (P > 0.05) different. The Se-yeast, SS, and control groups did not differ (P > 0.05) significantly in terms of CAT activity. When compared to the basal diet, bacterial organic Se (ADS18), Se-yeast and SS supplementation resulted in a significant (P < 0.05) increase in hepatic TAC, GSH-Px, and CAT activity. Furthermore, hepatic GSH-Px activity was decreased in the Con group; it differed significantly (P < 0.05) from the ADS18 and Se-yeast groups, but was similar to the SS group. ADS18 and Se-yeast-fed hens had higher (P < 0.05) liver CAT activity than the sodium selenite or control treatments. Dietary Se had no effect (P > 0.05) on hepatic SOD activity in any of the treatment groups. Despite the lack of a regular trend, Se supplementation in any form (inorganic or organic) was associated with a significant (P < 0.05) increase in liver antioxidant indicators when compared to the basal diet.
Table 2 summarizes the effect of dietary Se treatments on serum biochemical parameters in 39-week-old laying hens. There were no significant (P > 0.05) differences in plasma proteins (total protein, serum albumin, globulin, albumin globulin ratio), kidney markers (gamma-glutamyl transpeptidase, total bilirubin, creatinine, urea), or uric acid between treatment groups. Hens fed ADS18, Se-yeast, or SS diets showed a decrease (P < 0.05) in serum AST and ALP activities. There was no significant difference (P > 0.05) in serum ALT activities, as the values in all dietary treatment groups were less than four units per liter (U/L).
The ADS18, Se-yeast, and SS-supplemented groups had significantly (P < 0.05) lower serum total cholesterol concentrations than did the control group. Serum triglycerides and VLDL were significantly (P < 0.05) lower in the ADS18 and Se-yeast treatment groups than in the SS and control groups, while the SS and SY groups differed significantly (P < 0.05) in HDL. However, there were no significant (P > 0.05) differences in LDL and LDH concentrations among the dietary treatment groups. There was also no significant difference (P > 0.05) in serum uric acid concentrations between the dietary treatment groups.
Serum biochemical profiles of laying hens fed different selenium sources
Experimental diets
Total Protein (g/L)
51.10 ± 2.10
Albumin (g/L)
Globulin (g/L)
Albumin Globulin Ratio
0.480 ± 0.025
Kidneys functions
GGT (U/L)
16.333 ± 3.92
Total bilirubin (umol/L)
1.767 ± 0.19
1.72 ± 0.12
Creatinine (umol/L)
Urea (mmol/L)
Liver functions
AST (U/L)
270.33 ± 16.12a
254.67 ± 13.12ab
225.00 ± 13.34b
219.00 ± 7.74b
ALP (U/L)
ALT (U/L)
< 4
Plasma lipids
Cholesterol (mmol/L)
3.517 ± 0.201a
3.167 ± 0.145ab
2.833 ± 0.233bc
2.617 ± 0.075c
Triglyceride (mmol/L)
14.685 ± 1.175a
7.863 ± 1.325b
LDL (mmol/L)
HDL (mmol/L)
VLDL (mmol/L)
2.94 ± 0.23a
1.57 ± 0.27b
LDH (U/L)
448.17 ± 33.73
Antioxidative status
Uric Acid (umol/L)
153.0 ± 22.89
The enzyme unit (µmol/min) is a measure of enzyme catalytic activity. GGT gamma-glutamyl transpeptidase, AST aspartate aminotransferase, ALP alkaline phosphatase, ALT alanine aminotransferase, LDL low-density lipoprotein cholesterol, HDL high-density lipoprotein cholesterol, VLDL very-low-density lipoprotein cholesterol, LDH lactate dehydrogenase. Data represent mean ± SD of six replicates of six hens. a−c Means vary significantly within a row with different superscripts (P < 0.05). NA is not statistically analyzed or not available.
Egg Yolk, Breast Tissue, And Serum Selenium Concentration
As a study baseline, the egg yolk Se concentration in each treatment was measured three days after commencing the treatment diets and found to be comparable (P > 0.05) in all groups. However, selenium concentrations in egg yolks of hens supplemented with dietary Se were higher (P < 0.05) than in hens fed a basal diet at the end (16 wks) of the experimental period (Fig. 2a) and 14 days post-storage (4°C ± 2) after the experiment (18 wks) (Fig. 2b). For both fresh and stored egg yolk, organic Se supplemented hens showed greater yolk Se contents than the inorganic SS and basal diet groups. Apart from the SS group, which had significantly higher (P < 0.05) yolk Se concentrations than hens fed the control diet at the end of the experiment (16 wks) and in eggs stored for 14 days (18 wks), no significant (P > 0.05) differences were found between the ADS18 and Se-yeast egg yolk Se concentrations at 16 wks (fresh) and 18 wks (stored). Between 16 and 18 weeks, the egg yolk Se concentrations in all treatment groups were statistically identical, indicating that storage had no effect on egg yolk Se concentration, while there was a significant (P < 0.05) difference over the experimental period (Fig. 2b).
Se concentrations in breast tissue and serum increased (P < 0.05) in hens given dietary Se compared to the non-supplemented group (Table 3). Organic Se (ADS18 or Se-yeast) treatment groups had no significant (P > 0.05) difference in breast meat Se concentrations, but they were significantly higher (P < 0.05) than those for inorganic and non-supplemented hens. Similarly, hens provided bacterial organic supplementation had the highest (P < 0.05) serum Se concentration compared to non-supplemented hens. The serum Se concentration of the hens was ADS18 > Se-yeast > SS and control or basal diet, in that order.
Breast tissue and serum selenium concentration of laying hens fed organic and inorganic Se sources
Experimental diets*
Breast muscle (µg/g)
< .0001
Serum (µg/ml)
0.044 ± 0.000d
*Con = control, SS = Sodium selenite; SY = Selenium yeast; ADS18 = Bacterial enriched organic Se. a − c Values within the same column with different superscript letters differ (P < 0.05) significantly.
Animal wellbeing may be achieved by enhanced antioxidant capacity (Li et al. 2018). CAT, SOD, and GSH-Px are the antioxidant enzymes, and lactoferrin, carotene, vitamin C, and glutathione (GSH) are non-enzymatic constituents of physiological antioxidant systems (Eşrefoǧlu 2009). Glutathione peroxidase (GSH-Px), the main selenium-dependent enzyme, catalyzes the reduction of H2O2 and ROS to water (Behne and Kyriakopoulos 2001); superoxide dismutase (SOD) catalyzes the conversion of the superoxide anion to H2O2 and molecular O2 (Okado-Matsumoto and Fridovich 2001); and catalase (CAT) catalyzes hydrogen peroxide decomposition to yield water and oxygen, thus protecting cells from oxidative damage (Nandi et al. 2019). Hence, dietary Se supplementation enhances antioxidant capacity in animals (Surai and Dvorska 2002).
In general, the efficacy of organic Se over bioavailability and tissue retention is superior to that of inorganic Se. Minerals' utilization is dependent on their bioaccumulation and retention (Li et al. 2018). Similarly, compared to inorganic, dietary organic, and Nano-Se supplementation could improve the concentration of breast muscles, liver, and serum Se (Mohapatra et al. 2014; Mohamed et al. 2020), possibly resulting in greater activity of GSH-Px. Additive supplementation (e.g. Se) improves the activity of antioxidant enzymes in chickens through antioxidant capacity (Mohapatra et al. 2014; Markovic et al. 2018).
In the present study, organic Se bacteria (ADS18) and yeast (Se-yeast) have demonstrated stronger antioxidant activity in laying hens serum and liver compared to inorganic (sodium selenite) Se and non-supplemented hens, in line with previous findings. The serum TAC value was significantly higher in the nano selenium or Se-yeast groups than in the control group, and Se-yeast supplementation also enhanced serum CAT and SOD activity in Brown Hy-line hens (Meng et al. 2020). Moreover, compared to positive control groups of local Chinese yellow male chickens infected with Eimeria tenella, (Mengistu et al. 2020) reported higher serum SOD and GSH-Px1 activities with Se-enriched probiotics. Xia et al. (2020) observed a linear and quadratic increase in liver GSH-Px1 and SOD activity in breeder ducks with increased dietary Se levels. In T-2 toxin (T-2) or HT-2 toxin (HT-2)-induced cytotoxicity and oxidative stress broiler hepatocytes, Yang et al. (2019) observed a significant increase in hepatic GSH-Px, SOD, and CAT activity that was activated by toxins with 1 µM DL-Selenomethionine. Also, relative to those laying hens fed with the basal diet, Meng et al. (2019) reported an improvement in serum GSH-Px, T-AOC, and CAT activities in the nano-Se or sodium selenite group. Dalia et al. (2017) found the highest serum GSH-Px activity and CAT liver with bacterial organic Se supplementation of ADS18, respectively. Li et al. (2017) reported increased serum and breast GSH-Px activity with Se-yeast, Met-Se, and Nano-Se dietary supplementation compared with the SS group. Selenium is an indispensable constituent of the GSH-Px enzyme, actively involved in oxidative damage defense (Rotruck et al., 1973; Fernández-Lázaro et al., 2020).
In this study, the enzyme's activity was significantly enhanced by Se supplementation (organic) in serum and liver. The response of external stimuli and free radicals' metabolism capacity in organisms can be assayed by T-AOC (Huma et al., 2019). The body's total antioxidant capacity can be measured through TAC values (Zhang et al. 2011), with low or higher T-AOC suggesting oxidative stress or susceptibility to oxidative damage, respectively (Meng et al. 2020). Consequently, dietary supplementation with organic Se of ADS18 bacteria or Se-yeast could promote the antioxidant capacity of laying hens, thereby ensuring that egg-laying efficiency is preserved. The potential reason was that ADS18 or Se-yeast contains organic Se, which is much less harmful and more bioavailable and effectively preserved in the tissues of the body.
Furthermore, hens supplemented with Se (regardless of Se form) improves all the measured antioxidant indexes except SOD which was not affected by dietary Se treatments in the liver. In summary, with the addition of bacterial organic Se, the serum and liver TAC, GSH-Px, CAT, and SOD activities produced by the cells to prevent the occurrence of oxidative damage (Xu et al. 2016; Yang et al. 2019), were further enhanced, indicating that the bacterial organic Se of ADS18 could partially reduce oxidative damage by regulating the activities of enzymes (antioxidases). Although catalase and superoxide dismutase are not Se-dependent enzymes for their functions, the presence of Se in animal rations can influence their activities via thyroid hormone metabolism (Meng et al. 2020; Mohamed et al. 2020).
Serum biochemical parameters may serve as markers of nutritional condition in growing animals (Mu et al. 2019). Albumin, which is synthesized in the liver, maintains plasma osmotic pressure, provides energy, repairs worn-out tissue, carries and transports nutrients to sustain body tissue protein, and contributes to the active balance of cells (Surai 2002). Liu et al. (2020a) found no major variations in albumin, total protein, or blood urea nitrogen after the addition of 0.3 and 0.5 mg/kg of sodium selenite and selenium yeast, respectively. Similarly, Hossein Zadeh et al. (2018) did not note any effect of either organic or inorganic forms of Se on blood constituents.
Supplementation with the different Se sources did not affect blood albumin, total protein, globulin, or the albumin globulin ratio in this study. Likewise, the kidney function markers gamma-glutamyl transpeptidase, total bilirubin, creatinine, and urea were not affected, in line with previous reports (Kumar et al. 2008; Alimohamady et al. 2013; Sethy et al. 2015). Se had a greater impact on serum biochemical parameters related to lipid metabolism and a lesser impact on liver functions. The decreased total cholesterol, triglycerides, and VLDL observed as a result of supplementation with Se-yeast or ADS18 suggest that the organic form of Se could play a greater role in modulating fat deposition than the inorganic source of Se (Jeyanthi 2010; Sheoran 2017). In addition, whole-body fatty acid composition could be modulated by Se supplementation via organic forms in yeast or bacteria. Dhingra and Bansal (2006) and Yang et al. (2010) reported that dietary Se supplementation increases the activity of the LDL receptor but reduces the expression of 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase in rats, and invariably decreases serum LDL and cholesterol.
The results of this research were inconsistent with the study by Abdel-Azeem et al. (2019) and Amer et al. (2018), which showed the hypolipidemic effect of organic selenium (Se-yeast) in wean male rabbits by significantly reducing serum total cholesterol and LDL-cholesterol. In an in vitro study with Wistar rats, Urbankova et al. (2021) reported that Se deficiency tends to result in increased total cholesterol, LDL, and a significant decrease in HDL concentrations. Antioxidants' hypocholesterolemic activity may be due to oxysterols' inhibition of sterol biosynthesis (Revilla et al. 2009; Hozzein et al. 2020). Consequently, the antioxidant effect is principally attributed to selenoenzymes, glutathione peroxidase (GPX's), and thioredoxin reductase. In studies with growing pullets, Jegede et al. (2012) showed that, compared to CuSO4, supplementary trace (Cu-P) minerals reduced plasma cholesterol, LDL, and triglycerides. Kim et al. (1992) showed that the mechanism of Cu is to control cholesterol biosynthesis by reducing hepatic glutathione concentration and changes the hepatic GSH: GSSG (oxidized glutathione) ratio, thereby, increased the activity of 3-hydroxyl-3-methylglutaryl Co-(HMG-CoA) reductase. Glutathione plays an important role in regulating cholesterol biosynthesis via HMG-CoA reductase stimulation (Konjufca et al. 1997), which is the primary enzyme of cholesterol biosynthesis, and in turn, decreases plasma cholesterol concentration. The above-mentioned pathway can explain the reduction in plasma cholesterol by supplementing the organic form of Se.
In the present study, organic Se supplementation (Se-yeast or ADS18) produced a significant decrease in triglyceride concentration relative to inorganic Se and non-supplemented hens. These findings are consistent with the results of Jegede et al. (2012), who reported a significant decrease in triglyceride concentration in growing pullets supplemented with Cu-P compared to CuSO4. Moreover, dietary Se appeared to have a major effect on aspartate aminotransferase (AST) and alkaline phosphatase (ALP), but no effect on alanine aminotransferase (ALT), in the present study. Sizova et al. (2021) observed a substantial increase in ALT activity in broilers fed organic zinc on days 35 and 42 compared to control, though AST did not change significantly. Broiler chickens fed 0.3 ppm organic Se (Perić et al. 2009), or 0.5 and 1.0 mg Se per kg (Biswas et al. 2011), had significantly lower ALT and AST enzyme activity.
In contrast, none of the blood constituents ALT, AST, total protein, albumin, urea, or creatinine were affected by either inorganic (0.5 and 0.15 mg Se) or organic (0.35 mg Se) supplementation (Okunlola et al. 2015). Elevated blood enzymes (ALT, AST, ALP, LDH) reflect oxidative damage to the liver and kidneys, which can be reduced by an improved redox status that protects against oxidative damage (Zhang et al. 2018).
In this study, the concentration of Se in egg yolk, breast muscle, and serum increased after dietary Se supplementation. Avian eggs are ideal vectors for studying the absorption and retention of microminerals, including Se, at varying dosages and forms (Pan et al. 2007; Delezie et al. 2014). Dietary supplementation with Se increased egg yolk Se in the current study, which is consistent with previous reports (Liu et al. 2020b; Zhang et al. 2020). Lu et al. (2020) found higher Se concentrations in eggs and breast tissue of laying hens fed 0.1 to 0.4 mg/kg of Se from Se-enriched yeast than in those of hens fed SS or a basal diet. Also, Liu et al. (2020a) found that 0.5 mg/kg of Se-yeast resulted in higher Se deposition in egg yolk than did sodium selenite in laying hens. According to Zhang et al. (2020), adding Se-yeast to the diets of laying hens helps to increase Se deposition in eggs. Likewise, hens fed hydroxy-selenomethionine and Se-yeast had higher yolk Se concentrations than hens fed the SS and basal diets (Moslehi et al. 2019).
Dietary supplementation with vitamin E, Se, and their blend significantly increased the concentration of Se in breast tissue and certain organs of laying hens (Çelebi 2019). Similar results were found for egg and breast meat Se concentrations in laying hens (Lu et al. 2019; Lv et al. 2019) and for serum, liver, and muscle Se in growing lambs fed different concentrations (0.2 to 1.4 mg/kg DM) and sources (Se-Met or Se-yeast and SS) of Se (Paiva et al. 2019). Hens fed organic Se showed higher egg Se content than those fed inorganic Se, reflecting the greater effectiveness of the organic form (Skřivan et al. 2006; Chantiratikul et al. 2008).
The difference in egg yolk Se deposition between inorganic and organic Se sources may be due to their dissimilar metabolic pathways, as SS cannot be completely metabolized to SeMet in poultry, which could explain the current findings (Sunde et al. 2016). Sodium selenite has a lower absorption rate and higher excretion than organic Se (Mahan and Parrett 1996). Organic Se is actively absorbed, and its deposition depends on the metabolization and integration of mainly organic selenomethionine, in place of methionine, into egg proteins and tissues (Čobanová et al. 2011; Surai and Kochish 2019). Selenoproteins from the liver are incorporated as part of egg yolk synthesis, whereas the uterine tubes incorporate selenomethionine as part of egg white synthesis (Mahan and Kim 1996; Lv et al. 2019). The higher Se deposition in eggs of hens fed Se-yeast might be connected to upregulation of the methionine (Met) metabolism gene glycine N-methyltransferase (GNMT) in the liver (Meng et al. 2019). Egg Se concentration also increases with Se level and time (Lu et al. 2019).
As hypothesized, dietary Se supplementation increased the concentration of Se in breast tissue and serum of laying hens, and organic Se (ADS18 or Se-yeast) was more efficient than inorganic Se. Paiva et al. (2019) reported that serum Se concentration was time- and dose-dependent regardless of Se form. Application of 0.30 mg/kg selenomethionine in broiler breeder diets resulted in higher serum and tissue Se deposition than other sources of Se (Li et al. 2018). Also, Han et al. (2017) reported a higher Se concentration from different Se sources in the serum and organs of layers fed 0.3 mg/kg Se. Moreover, layer chicks fed 0.3 mg/kg of either nano-Se or sodium selenite showed a significant increase in Se concentrations in tissues, organs, and serum (Mohapatra et al. 2014). Because SeMet is the form supplied by organic Se sources and is closely linked to their bioavailability and assimilation (Briens et al. 2013; Mohapatra et al. 2014), these observations support the present findings.
The significant differences found between the treatment groups in serum and breast tissue may be due to dissimilar metabolic pathways: Se from both inorganic and organic sources can be incorporated into selenoproteins as selenocysteine, while SeMet is incorporated nonspecifically in place of methionine (Surai and Kochish 2019). In the Se metabolism pathway, inorganic Se compounds are mainly used to synthesize selenoproteins and do not replenish Se deposits in tissues (Moslehi et al. 2019). The addition of organic Se (Se-yeast or SeMet) to the diet is associated with a significant increase in tissue Se levels in laying hens (Invernizzi et al. 2013; Jing et al. 2015). Therefore, the quantity of Se taken up and deposited in the eggs, tissues, and blood of laying hens is determined by the chemical form of Se in organic sources (Jing et al. 2015). However, more research is required to explore the complete metabolic pathway of the organic (Se-yeast or ADS18) sources.
In conclusion, dietary Se supplementation, particularly in organic forms (ADS18 or Se-yeast), enhances serum and hepatic antioxidant enzyme activity in hens while lowering serum total cholesterol and triglyceride concentrations. Moreover, hens given dietary Se treatments exhibited greater selenium concentrations in their egg yolks, serum, and breast tissue. Compared with inorganic Se (sodium selenite), 0.3 mg/kg of Se-enriched bacterial protein from ADS18 improves antioxidant enzyme activity, serum biochemical parameters, and Se concentrations in laying hens, making it a valuable alternative source of Se.
A.I.M. designed and conducted the animal experiments and all laboratory analyses, analyzed and interpreted the data, and drafted the manuscript. A.A.S. designed, supervised, and administered the overall research project. A.M.D., T.C.L., and H.A. participated in the preparation of the manuscript. All authors read and approved the final manuscript.
A.I. was a recipient of a scholarship from Tertiary Education Trust Funds (TETFund) and Federal University Dutse, Jigawa State Nigeria.
Code availability
All experiential steps were implemented according to the Local Experimental Animal Care Panel and permitted by the Institutional Animal Care and Use Committee of University Putra Malaysia (UPM/IACUC/AUP-R063/2018).
This study was financed by the Fundamental Research Grant Scheme (FRGS 5524272) granted by the Malaysian Ministry of Higher Education.
Consent to participate
Consent to publish
All authors give consent for publication.
Abd El-Hack ME, Mahrose K, Askar AA, et al (2017) Single and combined impacts of vitamin a and selenium in diet on productive performance, egg quality, and some blood parameters of laying hens during hot season. Biol Trace Elem Res 177:169–179. https://doi.org/10.1007/s12011-016-0862-5
Abdel-Azeem NM, Abdel-Rahman SM, Amin HF, et al (2019) Effect of dietary organic selenium supplementation on growth performance, carcass characteristics and antioxidative status of growing rabbits. J World's Poult Res 9:16–25. https://doi.org/10.36380/scil.2019.wvj3
Abdel-Daim MM, Dawood MAO, Aleya L, Alkahtani S (2020) Effects of fucoidan on the hematic indicators and antioxidative responses of Nile tilapia (Oreochromis niloticus) fed diets contaminated with aflatoxin B1. Environ Sci Pollut Res 27:12579–12586. https://doi.org/10.1007/s11356-020-07854-w
Ahmadi M, Ahmadian A, Seidavi AR (2018) Effect of different levels of nano-selenium on performance, blood parameters, immunity and carcass characteristics of broiler chickens. Poult Sci J 6:99–108. https://doi.org/10.22069/psj.2018.13815.1276
Aksoz E, Korkut O, Aksit D, Gokbulut C (2020) Vitamin E (α-, β + γ- and δ-tocopherol) levels in plant oils. Flavour Fragr J 35:504–510. https://doi.org/10.1002/ffj.3585
Alimohamady R, Aliarabi H, Bahari A, Dezfoulian AH (2013) Influence of different amounts and sources of selenium supplementation on performance, some blood parameters, and nutrient digestibility in lambs. Biol Trace Elem Res 154:45–54. https://doi.org/10.1007/s12011-013-9698-4
Amer SA, Omar AE, Abd El-Hack ME (2018) Effects of selenium- and chromium-enriched diets on growth performance, lipid profile, and mineral concentration in different tissues of growing rabbits. Biol Trace Elem Res 187:92–99. https://doi.org/10.1007/s12011-018-1356-4
Ayyat MS, Al-Sagheer AA, Abd El-Latif KM, Khalil BA (2018) Organic selenium, probiotics, and prebiotics effects on growth, blood biochemistry, and carcass traits of growing rabbits during summer and winter seasons. Biol Trace Elem Res 186:162–173. https://doi.org/10.1007/s12011-018-1293-2
Aziz MA, Diab AS, Mohammed AA (2019) Antioxidant categories and mode of action. In: Shalaby E (ed) Antioxidants. IntechOpen, London, United Kingdom, pp 3–22
Behne, D. and Kyriakopoulos A (2001) Mammalian selenium-containing proteins. Annu RevNutr 21:453–73
Biswas, A., Ahmed, M., Bharti, V. K., & Singh SB (2011) Effect of antioxidants on physio-biochemical and hematological parameters in broiler chicken at high altitude. Asian-Australasian J Anim Sci 24:246–249. https://doi.org/10.5713/ajas.2011.10060
Boiago MM, Borba H, Leonel FR, et al (2014) Sources and levels of selenium on breast meat quality of broilers. Ciência Rural 44:1692–1698. https://doi.org/10.1590/0103-8478cr20131256
Briens M, Mercier Y, Rouffineau F, et al (2013) Comparative study of a new organic selenium source vs. seleno-yeast and mineral selenium sources on muscle selenium enrichment and selenium digestibility in broiler chickens. Br J Nutr 110:617–624. https://doi.org/10.1017/S0007114512005545
Canoǧullari S, Ayaşan T, Baylan M, Çopur G (2010) The effect of organic and inorganic selenium supplementation on egg production parameters and egg selenium content of laying Japanese quail. Kafkas Univ Vet Fak Derg 16:743–749. https://doi.org/10.9775/kvfd.2009.1560
Çelebi Ş (2019) Effect of dietary vitamin e, selenium and their combination on concentration of selenium, MDA, and antioxidant enzyme activities in some tissues of laying hens. Pak J Zool 51:1155–1161. https://doi.org/10.17582/journal.pjz/2019.51.3.1155.1161
Chantiratikul A, Chinrasri O, Chantiratikul P (2008) Effect of sodium selenite and zinc-L-selenomethionine on performance and selenium concentrations in eggs of laying hens. Asian Australas J Anim Sci 21:1048–1052. https://doi.org/10.5713/ajas.2008.70576
Chen G, Wu J, Li C (2013) The effect of different selenium levels on production performance and biochemical parameters of broilers. Ital J Anim Sci 12:486–491. https://doi.org/10.4081/ijas.2013.e79
Chiaiese P, Corrado G, Minutolo M, et al (2019) Transcriptional regulation of ascorbic acid during fruit ripening in pepper (Capsicum annuum) varieties with low and high antioxidants content. Plants 8:1–12. https://doi.org/10.3390/plants8070206
Cimrin T, Tunca RI, Avsaroglu MD, et al (2020) Effects of an antibiotic and two phytogenic substances (cinnamaldehyde and 1,8-cineole) on yolk fatty acid profile and storage period-associated egg lipid peroxidation level. Rev Bras Zootec 49:1–10. https://doi.org/10.37496/rbz4920190270
Čobanová K, Petrovič V, Mellen M, et al (2011) Effects of dietary form of selenium on its distribution in eggs. Biol Trace Elem Res 144:736–746. https://doi.org/10.1007/s12011-011-9125-7
Dalia, A. M., Loh, T. C., Sazili, A. Q., Jahromi, M. F., & Samsudin AA (2017) The effect of dietary bacterial organic selenium on growth performance, antioxidant capacity, and Selenoproteins gene expression in broiler chickens. BMC Vet Res 13:254. https://doi.org/10.1186/s12917-017-1159-4
Dalia AM, Loh TC, Sazili AQ, et al (2018) Effects of vitamin E, inorganic selenium, bacterial organic selenium, and their combinations on immunity response in broiler chickens. BMC Vet Res 14:1–10. https://doi.org/10.1186/s12917-018-1578-x
Dalia AM, Loh TC, Sazili AQ, et al (2017) Characterization and Identification of Organic Selenium-enriched Bacteria Isolated from Rumen Fluid and Hot Spring Water. Microbiol Biotechnol Lett 45:343–353
Delezie E, Rovers M, Van Der Aa A, et al (2014) Comparing responses to different selenium sources and dosages in laying hens. Poult Sci 93:3083–3090. https://doi.org/10.3382/ps.2014-04301
Dhingra S, Bansal MP (2006) Attenuation of LDL receptor gene expression by selenium deficiency during hypercholesterolemia. Mol Cell Biochem 282:75–82. https://doi.org/10.1007/s11010-006-1266-1
Eşrefoǧlu M (2009) Cell injury and death: Oxidative stress and antioxidant defense system: Review. Turkiye Klin J Med Sci 29:1660–1676
Fernández-Lázaro, D., Fernandez-Lazaro, C. I., Mielgo-Ayuso, J., Navascués, L. J., Córdova Martínez, A., & Seco-Calvo J (2020) The role of selenium mineral trace element in exercise: antioxidant defense system, muscle performance, hormone response, and athletic performance. A Systematic Review. Nutrients 12:1790
Giuffrè AM (2019) Bergamot (Citrus bergamia, Risso): The effects of cultivar and harvest date on functional properties of juice and cloudy juice. Antioxidants 8:221. https://doi.org/10.3390/antiox8070221
Gouta H, Laaribi I, Ksia E, et al (2021) Physical properties, biochemical and antioxidant contents of new promising Tunisian almond genotypes: Traits stability, quality aspects and post-harvest attributes. J Food Compos Anal 98:103840. https://doi.org/10.1016/j.jfca.2021.103840
Halliwell B (1990) How to characterize a biological antioxidant. Free Radic Res Commun 9:1–32
Han XJ, Qin P, Li WX, et al (2017) Effect of sodium selenite and selenium yeast on performance, egg quality, antioxidant capacity, and selenium deposition of laying hens. Poult Sci 96:3973–3980. https://doi.org/10.3382/ps/pex216
Hossein Zadeh M, Kermanshahi H, Sanjabi MR, et al (2018) Comparison of different selenium sources on performance, serum attributes and cellular immunity in broiler chickens. Poult Sci J 6:191–203. https://doi.org/10.22069/psj.2018.15232.1341
Hozzein WN, Saleh AM, Habeeb TH, et al (2020) CO2 treatment improves the hypocholesterolemic and antioxidant properties of fenugreek seeds. Food Chem 308:125661. https://doi.org/10.1016/j.foodchem.2019.125661
Huang D, Boxin OU, Prior RL (2005) The chemistry behind antioxidant capacity assays. J Agric Food Chem 53:1841–1856. https://doi.org/10.1021/jf030723c
Huma N, Sajid A, Khalid A, Wardah H, Moazama B, Shakeela P, Sadia M SM (2019) Toxic effect of insecticides mixtures on antioxidant enzymes in different organs of fish, Labeo rohita. Pak J Zool 51:1355–1361. https://doi.org/10.17582/journal.pjz/2019.51.4.1355.1361
Humam AM, Loh TC, Foo HL, et al (2021) Supplementation of postbiotic RI11 improves antioxidant enzyme activity, upregulated gut barrier genes, and reduced cytokine, acute phase protein, and heat shock protein 70 gene expression levels in heat-stressed broilers. Poult Sci. https://doi.org/10.1016/j.psj.2020.12.011
Invernizzi G, Agazzi A, Ferroni M, et al (2013) Effects of inclusion of selenium-enriched yeast in the diet of laying hens on performance, eggshell quality, and selenium tissue deposition. Ital J Anim Sci 12:1–8. https://doi.org/10.4081/ijas.2013.e1
Jagtap R, Maher W (2016) Determination of selenium species in biota with an emphasis on animal tissues by HPLC – ICP-MS. Microchem J 124:422–529. https://doi.org/10.1016/j.microc.2015.07.014
Jegede A V., Oduguwa OO, Oso AO, et al (2012) Growth performance, blood characteristics and plasma lipids of growing pullet fed dietary concentrations of organic and inorganic copper sources. Livest Sci 145:298–302. https://doi.org/10.1016/j.livsci.2012.02.011
Jeyanthi GK and GP (2010) The effect of supplementation of diet with vitamin-e and selenium and their combinations on the performance and lipid profiles of layer chickens. Int J Pharma Bio Sci 1:1–11
Jing CL, Dong XF, Wang ZM, et al (2015) Comparative study of DL-selenomethionine vs sodium selenite and seleno-yeast on antioxidant activity and selenium status in laying hens. Poult Sci 94:965–975. https://doi.org/10.3382/ps/pev045
Kim, S., Chao, P. Y., & Allen KG (1992) Inhibition of elevated hepatic glutathione abolishes copper deficiency cholesterolemia. FASEB J 6:2467–2471.
Konjufca VH, Pesti GM, Bakalli RI (1997) Modulation of Cholesterol Levels in Broiler Meat by Dietary Garlic and Copper. Poult Sci 76:1264–1271. https://doi.org/10.1093/ps/76.9.1264
Kumar N, Garg AK, Mudgal V, et al (2008) Effect of different levels of selenium supplementation on growth rate, nutrient utilization, blood metabolic profile, and immune response in Lambs. Biol Trace Elem Res 126:44–56. https://doi.org/10.1007/s12011-008-8214-8
Kurutas EB (2016) The importance of antioxidants which play the role in cellular response against oxidative / nitrosative stress: current state. Nutr J 15:1–22. https://doi.org/10.1186/s12937-016-0186-5
Labunskyy VM, Hatfield DL, Gladyshev VN (2014) Selenoproteins: Molecular pathways and physiological roles. Physiol Rev 94:739–777. https://doi.org/10.1152/physrev.00039.2013
Li JL, Zhang L, Yang ZY, et al (2017) Effects of different selenium sources on growth performance, antioxidant capacity and meat quality of local chinese subei chickens. Biol Trace Elem Res 181:340–346. https://doi.org/10.1007/s12011-017-1049-4
Li KX, Wang JS, Yuan D, et al (2018) Effects of different selenium sources and levels on antioxidant status in broiler breeders. Asian-Australasian J Anim Sci 31:1939–1945. https://doi.org/10.5713/ajas.18.0226
Lipiec E, Siara G, Bierla K, et al (2010) Determination of selenomethionine, selenocysteine, and inorganic selenium in eggs by HPLC-inductively coupled plasma mass spectrometry. Anal Bioanal Chem 397:731–741. https://doi.org/10.1007/s00216-010-3544-8
Liu H, Yu Q, Fang C, et al (2020a) Effect of selenium source and level on performance, egg quality, egg selenium content, and serum biochemical parameters in laying hens. Foods 9:68. https://doi.org/10.3390/foods9010068
Liu H, Yu Q, Tang X, et al (2020b) Effect of selenium on performance, egg quality, egg selenium content and serum antioxidant capacity in laying hens. Pakistan J Zool 52:635–640
Liu H, Yu Q, Tang X, et al (2020c) Effect of selenium on performance, egg quality, egg selenium content and serum antioxidant capacity in laying hens. Pakistan J Zool 52:635–640. https://doi.org/10.17582/journal.pjz/20190424040448
Lu J, Qu L, Ma M, et al (2020) Efficacy evaluation of selenium-enriched yeast in laying hens: effects on performance, egg quality, organ development, and selenium deposition. Poult Sci 99:6267–6277. https://doi.org/10.1016/j.psj.2020.07.041
Lu J, Qu L, Shen MM, et al (2019) Effects of high-dose selenium-enriched yeast on laying performance, egg quality, clinical blood parameters, organ development, and selenium deposition in laying hens. Poult Sci 98:2522–2530. https://doi.org/10.3382/ps/pey597
Lv L, Li L, Zhang R, et al (2019) Effects of dietary supplementation of selenium enriched yeast on egg selenium content and egg production of North China hens. Pak J Zool 51:49–55. https://doi.org/10.17582/journal.pjz/2019.51.1.49.55
Lohmann Brown-Classic Management Guide, Lohmann Tierzucht Lohmann(2018)
Mahan DC, Kim YY (1996) Effect of inorganic or organic selenium at two dietary levels on reproductive performance and tissue selenium concentrations in first-parity gilts and their progeny. J Anim Sci 74:2711–2718
Mahan DC, Parrett NA (1996) Evaluating the efficacy of selenium-enriched yeast and sodium selenite on tissue selenium retention and serum glutathione peroxidase activity in grower and finisher swine. J Anim Sci 2967–2974
Mahima, Amit Kumar Verma, Amit Kumar, Anu Rahal VK and DR (2012) Inorganic versus organic selenium supplementation: A review. Pakistan J Biol Sci 15:418–425. https://doi.org/10.3923/pjbs.2012.418.425
Malaysia S (2009) Halal Food-Production, Preparation, Handling and Storage-General Guidelines (Second Revision)
Mangiapane E, Pessione A, Pessione E (2014) Selenium and selenoproteins: An overview on different biological systems. Curr Protein Pept Sci 15:598–607. https://doi.org/10.2174/1389203715666140608151134
Markovic R, Ciric J, Starcevic M, et al (2018) Effects of selenium source and level in diet on glutathione peroxidase activity, tissue selenium distribution, and growth performance in poultry. Anim Heal Res Rev 19:166–176. https://doi.org/10.1017/S1466252318000105
Meng T, Liu Y lin, Xie C yan, et al (2019) Effects of different selenium sources on laying performance, egg selenium concentration, and antioxidant capacity in laying hens. Biol Trace Elem Res 189:548-555. https://doi.org/10.1007/s12011-018-1490-z
Meng TT, Lin X, Xie CY, et al (2020) Nanoselenium and selenium yeast have minimal differences on egg production and se deposition in laying hens. Biol Trace Elem Res 1–8. https://doi.org/10.1007/s12011-020-02349-8
Mengistu BM, Bitsue HK, Huang K (2020) The Effects of selenium-enriched probiotics on growth performance, oocysts shedding, intestinal cecal lesion scores, antioxidant capacity, and mrna gene expression in chickens infected with Eimeria tenella. Biol Trace Elem Res 1–14. https://doi.org/10.1007/s12011-020-02118-7
Meyers L (2000) Establishment of dietary reference intakes for dietary antioxidants
Mohamed DA, Sazili AQ, Chwen LT, Samsudin AA (2020) Effect of microbiota-selenoprotein on meat selenium content and meat quality of broiler chickens. Animals 10:1–11. https://doi.org/10.3390/ani10060981
Mohapatra, P., Swain, R. K., Mishra, S. K., Behera, T., Swain, P., Mishra, S. S., Behura, N. C., Sabat, S. C., Sethy, K., Dhama, K. and Jayasankar P (2014) Effects of dietary nano-selenium on tissue selenium deposition, antioxidant status and immune functions in layer chicks. Int J Pharmacol 10:160–167. https://doi.org/10.3923/ijp.2014.160.167
Mohapatra P, Swain RK, Mishra SK, et al (2014) Effects of dietary nano-selenium supplementation on the performance of layer grower birds. Asian J. Anim. Vet. Adv. 9:641–652
Moslehi H, Navidshad B, Sharifi SD, Aghjegheshlagh FM (2019) Effects of selenium and flaxseed on selenium content and antioxidant properties of eggs and immune response in hens. South African J Anim Sci 49:770–780. https://doi.org/10.4314/sajas.v49i4.19
Mu Y, Zhang K, Bai S, et al (2019) Effects of vitamin E supplementation on performance, serum biochemical parameters and fatty acid composition of egg yolk in laying hens fed a diet containing ageing corn. J Anim Physiol Anim Nutr (Berl) 103:135–145. https://doi.org/10.1111/jpn.13017
Muhammad, A. I., Mohamed, D. A. A., Chwen, L. T., Akit, H. and Samsudin AA (2021) Effect of sodium selenite, selenium yeast, and bacterial protein on chicken egg yolk color, antioxidant profiles, and oxidative stability. Foods 10:1–20. https://doi.org/10.3390/foods10040871
Nabi F, Arain MA, Hassan F, et al (2020) Nutraceutical role of selenium nanoparticles in poultry nutrition: a review. World's Poult Sci J 00:1–13. https://doi.org/10.1080/00439339.2020.1789535
Nandi A, Yan LJ, Jana CK, Das N (2019) Role of catalase in oxidative stress- and age-associated degenerative diseases. Oxid Med Cell Longev 2019:. https://doi.org/10.1155/2019/9613090
Nasiri K, Kazemi-Fard M, Rezaei M, Yousefi S (2019) Supplementation of sodium selenite and methionine on concentration of selenium in egg and serum, antioxidant enzymes activity and immune response of iranian native broiler breeders. Poult Sci J 7:119–129. https://doi.org/10.22069/psj.2019.16438.1427
NRC (1994) Nutrient Requirements of Poultry. Ninth Revised Edition, 1994. Washington, DC: The National Academies Press.
Okado-Matsumoto A, Fridovich I (2001) Subcellular distribution of superoxide dismutases (SOD) in rat liver. J Biol Chem 276:38388–38393. https://doi.org/10.1074/jbc.M105395200
Okunlola D., AkandeT.O, Nuga H. (2015) Haematological and serum characteristics of broiler birds fed diets supplemented with varying levels of selenium powder. J Biol Agric Healthc 5:107–111
Paiva FA, Netto AS, Corrêa LB, et al (2019) Organic selenium supplementation increases muscle selenium content in growing lambs compared to inorganic source. Small Rumin Res 175:57–64. https://doi.org/10.1016/j.smallrumres.2019.04.008
Pan C, Huang K, Zhao Y, et al (2007) Effect of selenium source and level in hen's diet on tissue selenium deposition and egg selenium concentrations. J Agric Food Chem 55:1027–1032. https://doi.org/10.1021/jf062010a
Pan C, Zhao Y, Liao SF, et al (2011) Effect of selenium-enriched probiotics on laying performance, egg quality, egg selenium content, and egg glutathione peroxidase activity. J Agric Food Chem 59:11424–11431. https://doi.org/10.1021/jf202014k
Payne RL, Southern LL (2005) Comparison of inorganic and organic selenium sources for broilers. Poult Sci 84:898–902. https://doi.org/10.1093/ps/84.6.898
Perić L, Milošević N, Žikić D, et al (2009) Effect of selenium sources on performance and meat characteristics of broiler chickens. J Appl Poult Res 18:403–409. https://doi.org/10.3382/japr.2008-00017
Reda FM, Swelum AA, Hussein EOS, Elnesr SS (2020) Effects of varying dietary dl-methionine levels on productive and reproductive performance, egg quality, and blood biochemical parameters of quail breeders. Animals 10:1–12
Revilla E, Maria CS, Miramontes E, et al (2009) Nutraceutical composition, antioxidant activity and hypocholesterolemic effect of a water-soluble enzymatic extract from rice bran. Food Res Int 42:387–393. https://doi.org/10.1016/j.foodres.2009.01.010
Rotruck JT, Pope AL, Ganther HE, Swanson AB, Hafeman DG, Hoekstra W (1973) Selenium: Biochemical role as a component of glutathione peroxidase. Science 179:588–590. https://doi.org/10.1126/science.179.4073.588
Salehi B, Martorell M, Arbiser JL, et al (2018) Antioxidants: Positive or negative actors? Biomolecules 8:1–11. https://doi.org/10.3390/biom8040124
Saracila M, Panaite T, Tabuc C, et al (2020) Dietary ascorbic acid and chromium supplementation for broilers reared under thermoneutral conditions vs. high heat stress. Lucr Științifice-Universitatea Științe Agric şi Med Vet Ser Zooteh 73:41–47
Sethy K, Dass RS, Garg AK, et al (2015) Effect of different selenium sources (Selenium yeast and Sodium selenite) on haematology, blood chemistry and thyroid hormones in male goats (Capra hircus). Indian J Anim Res 49:788–792. https://doi.org/10.18805/ijar.7040
Sheiha AM, Abdelnour SA, Abd El-Hack ME, et al (2020) Effects of dietary biological or chemical-synthesized nano-selenium supplementation on growing rabbits exposed to thermal stress. Animals 10:1–16. https://doi.org/10.3390/ani10030430
Sheoran N (2017) Organic Minerals in Poultry. Adv Res 12:1–10. https://doi.org/10.9734/AIR/2017/37878
Shi L, Ren Y, Zhang C, et al (2018) Effects of organic selenium (Se-enriched yeast) supplementation in gestation diet on antioxidant status, hormone profile and haemato-biochemical parameters in Taihang Black Goats. Anim Feed Sci Technol 238:57–65. https://doi.org/10.1016/j.anifeedsci.2018.02.004
Sizova E, Miroshnikov S, Ayasan T (2021) Efficiency and safety of using different sources of zinc in poultry nutrition. IOP Conf Ser Earth Environ Sci 624:012043. https://doi.org/10.1088/1755-1315/624/1/012043
Skřivan M, Šimáně J, Dlouhá G, Doucha J (2006) Effect of dietary sodium selenite, Se-enriched yeast and Se-enriched Chlorella on egg Se concentration, physical parameters of eggs and laying hen production. Czech J Anim Sci 51:163–167. https://doi.org/10.17221/3924-cjas
Sugiharto S (2019) A review of filamentous fungi in broiler production. Ann Agric Sci 64:1–8. https://doi.org/10.1016/j.aoas.2019.05.005
Sun X, Yue S, Qiao Y, et al (2020) Dietary supplementation with Selenium-enriched earthworm powder improves anti-oxidative ability and immunity of laying hens. Poult Sci 99:5344–5349. https://doi.org/10.1016/j.psj.2020.07.030
Sunde RA, Li JL, Taylor RM (2016) Insights for setting of nutrient requirements, gleaned by comparison of selenium status biomarkers in Turkeys and chickens versus rats, mice, and lambs. Adv Nutr 7:1129–1138. https://doi.org/10.3945/an.116.012872
Surai, P. F. and Dvorska JE (2002) Strategies to enhance antioxidant protection and implications for the wellbeing of companion animals. In: Jacques TL and K (ed) Nutritional biotechnology in the feed and food industries. Proceedings of Alltech's Eighteenth Annual Symposium Edited, Nottingham University Press Manor Farm, Church Lane, Thrumpton Nottingham, NG11 0AX, United Kingdom, p 504
Surai PF (2006) Selenium in Nutrition and Health, First publ. Nottingham University Press Manor Farm, Main Street, Thrumpton Nottingham NG11 0AX, United Kingdom, United Kingdom
Surai PF (2002) Selenium. In: Surai PF (ed) Natural Antioxidants in Avian Nutrition and Reproduction, First. Nottingham University Press, Nottingham University, UK Manor Farm, Main Street, Thrumpton Nottingham, NG11 0AX, United Kingdom, pp 233–283
Surai PF, Kochish II (2019) Nutritional modulation of the antioxidant capacities in poultry: The case of selenium. Poult Sci 98:4231–4239. https://doi.org/10.3382/ps/pey406
TİMUR C, UTLU N (2020) Influence of vitamin E and organic selenium supplementation on antioxidant enzymes activities in blood and egg samples of laying hens. J Inst Sci Technol 10:694–701. https://doi.org/10.21597/jist.544969
Tufarelli V, Ceci E, Laudadio V (2016) 2-Hydroxy-4-Methylselenobutanoic acid as new organic selenium dietary supplement to produce selenium-enriched eggs. Biol Trace Elem Res 171:453–458. https://doi.org/10.1007/s12011-015-0548-4
Urbankova L, Skalickova S, Pribilova M, et al (2021) Effects of sub-lethal doses of selenium nanoparticles on the health status of rats. Toxics 9:28. https://doi.org/10.3390/toxics9020028
Utterback PL, Parsons CM, Yoon I, Butler J (2005) Effect of supplementing selenium yeast in diets of laying hens on egg selenium content. Poult Sci 84:1900–1901. https://doi.org/10.1093/ps/84.12.1900
Wang G, Liu LJ, Tao WJ, et al (2019) Effects of replacing inorganic trace minerals with organic trace minerals on the production performance, blood profiles, and antioxidant status of broiler breeders. Poult Sci 98:2888–2895. https://doi.org/10.3382/ps/pez035
Wrobel JK, Power R, Toborek M (2016) Biological activity of selenium: Revisited. IUBMB Life 68:97–105. https://doi.org/10.1002/iub.1466
Xia WG, Chen W, Abouelezz KFM, et al (2020) The effects of dietary Se on productive and reproductive performance, tibial quality, and antioxidant capacity in laying duck breeders. Poult Sci 99:3971–3978. https://doi.org/10.1016/j.psj.2020.04.006
Xu XR, Yu HT, Yang Y, et al (2016) Quercetin phospholipid complex significantly protects against oxidative injury in ARPE-19 cells associated with activation of Nrf2 pathway. Eur J Pharmacol 770:1–8. https://doi.org/10.1016/j.ejphar.2015.11.050
Yang KC, Lee LT, Lee YS, et al (2010) Serum selenium concentration is associated with metabolic factors in the elderly: A cross-sectional study. Nutr Metab 7:20–22. https://doi.org/10.1186/1743-7075-7-38
Yang L, Tu D, Wang N, et al (2019) The protective effects of DL-Selenomethionine against T-2/HT-2 toxins-induced cytotoxicity and oxidative stress in broiler hepatocytes. Toxicol Vitr 54:137–146. https://doi.org/10.1016/j.tiv.2018.09.016
Yuan D, Zhan XA, Wang YX (2012) Effect of selenium sources on the expression of cellular glutathione peroxidase and cytoplasmic thioredoxin reductase in the liver and kidney of broiler breeders and their offspring. Poult Sci 91:936–942. https://doi.org/10.3382/ps.2011-01921
Zhang Q, Chen L, Guo K, et al (2013) Effects of different selenium levels on gene expression of a subset of selenoproteins and antioxidative capacity in mice. Biol Trace Elem Res 154:255–261. https://doi.org/10.1007/s12011-013-9710-z
Zhang R, Liu Y, Xing L, et al (2018) The protective role of selenium against cadmium-induced hepatotoxicity in laying hens: Expression of HSPS and inflammation-related genes and modulation of elements homeostasis. Ecotoxicol Environ Saf 159:205–212. https://doi.org/10.1016/j.ecoenv.2018.05.016
Zhang X, Tian L, Zhai S, et al (2020) Effects of selenium-enriched yeast on performance, egg quality, antioxidant balance, and egg selenium content in laying ducks. Front Vet Sci 7:1–10. https://doi.org/10.3389/fvets.2020.00591
Zhang Y, Zhu S, Wang X, et al (2011) The effect of dietary selenium levels on growth performance, antioxidant capacity and glutathione peroxidase 1 (GSHPx1) mRNA expression in growing meat rabbits. Anim Feed Sci Technol 169:259–264. https://doi.org/10.1016/j.anifeedsci.2011.07.006
Zhou W, Miao S, Zhu M, et al (2021) Effect of sns. Biol Trace Elem Res. https://doi.org/10.1007/s12011-020-02532-x
News: Shrinking the proton again!
Our most recent results from laser spectroscopy of the 2S-4P transition in atomic hydrogen will be published in Science in the October 6th issue. After more than six years of work, we have succeeded in measuring the transition frequency with an uncertainty of 2.3 kHz, corresponding to a relative uncertainty of 4 parts in {$10^{12}$}. This is the second-best frequency measurement in hydrogen after our previous measurement of the 1S-2S transition. From these two measurements, we derive new values for the Rydberg constant and the proton root mean square (RMS) radius, {$R_\infty=10973731.568076(96)\,\mathrm{m}^{-1}$} and {$r_\mathrm{p}=0.8335(95)\,\mathrm{fm}$}, respectively. Our results are in excellent agreement with the results from laser spectroscopy of muonic hydrogen, but are 5% smaller than the hydrogen world data, from which they deviate by 3.3 standard deviations. More...
Beyer, A., Maisenbacher, L., Matveev, A., Pohl, R., Khabarova, K., Grinin, A., Lamour, T., Yost, D. C., Hänsch, T. W., Kolachevsky, N., Udem, Th.
The Rydberg constant and proton size from atomic hydrogen.
Science, 358:79, DOI: 10.1126/science.aah6677.
Supplementary materials.
Nature: "Proton-size puzzle deepens"
Science: "The proton radius revisited"
Science News: "Proton size still perplexes despite a new measurement"
NZZ: "Der Protonenradius ist und bleibt ein Rätsel" (in German)
Spektrum.de: "Wie groß ist das Proton wirklich?" (in German)
Nature Physics: "Proton puzzle: Agreement in disagreement"
Physik Journal: "Radius und Interferenz" (in German)
Shrinking the proton again! (english)
Und wieder schrumpft das Proton! (deutsch)
Corresponding author: Lothar Maisenbacher
Prof. Dr. Thomas Udem
Photo showing the vacuum chamber used to measure the 2S-4P transition frequency in atomic hydrogen. The purple glow in the back stems from the microwave discharge that dissociates hydrogen molecules into hydrogen atoms. The blue light in the front is fluorescence of the vacuum viewport from the ultraviolet laser that excites the atoms to the 2S state. The turquoise blue glow is stray light from the laser system used to measure the frequency of the 2S-4P transition. (Credit: Axel Beyer / MPQ)
Welcome to the hydrogen spectroscopy project
We are part of Prof. T. W. Hänsch's laser spectroscopy division at the Max Planck Institute of Quantum Optics (MPQ) in Garching near Munich, Germany.
Precision spectroscopy of atomic hydrogen - the simplest natural atomic system - has been one of the key tools for tests of fundamental theories ever since the dawn of modern physics. Besides fueling our basic understanding of light, matter and their interaction, it has been motivating advances in nonlinear laser spectroscopy and optical frequency metrology for more than three decades now, including the invention of the laser frequency comb technique highlighted in the citation for the 2005 Nobel Prize in physics. In particular, measurements of the 1S-2S two-photon transition frequency in hydrogen and deuterium served as a cornerstone in tests of bound-state quantum electrodynamics (QED), the extraction of the Rydberg constant and the determination of fundamental constants, such as the proton root mean square (RMS) charge radius and the deuteron structure radius. In addition, our measurements were among the first laboratory experiments to set stringent limits on possible slow time variations of the fine structure constant, the fundamental parameter that scales the electromagnetic interaction.
Until a few years ago, the primary challenge in hydrogen spectroscopy had been the precise measurement of the frequency of laser light. Since the advent of the laser frequency comb technique, the challenge has moved to the understanding and control of systematic line shifts and distortions. Velocity-dependent effects are especially serious for our very light atoms, which cannot easily be laser cooled. We have made major advances towards future improved measurements of the 1S-2S transition frequency. In particular, we are now routinely achieving sub-Hz line widths with diode laser systems at 972 nm and have developed new tools and experimental schemes for enhanced statistics and reduced systematic uncertainties.
One of our current projects is hydrogen 2S-nP spectroscopy, aiming for a new determination of the Rydberg constant and the proton RMS charge radius from precision spectroscopy of atomic hydrogen by probing transition frequencies from the meta-stable 2S state to higher-lying P states. These results can shed new light on the "proton size puzzle", i.e. the discrepancy between the proton RMS charge radius obtained from muonic hydrogen in 2010 on the one hand and from electronic hydrogen and elastic electron-proton scattering on the other. So far, no satisfactory explanation has been found, and suggested explanations span the entire spectrum from experimental errors up to physics beyond the standard model.
We are also working on direct frequency comb spectroscopy of two-photon transitions in hydrogen. Using a frequency comb as the spectroscopy laser combines the advantages of pulsed lasers, i.e. high peak powers and efficient harmonic generation, with the advantages of CW lasers, i.e. a narrow line width and a precisely defined frequency. This makes it possible to excite ultraviolet (UV) transitions such as the 1S-3S transition at 205 nm without sacrificing precision. This experiment also serves as a test bed for spectroscopy with even shorter wavelengths in the extreme UV region.
Proceedings of the American Mathematical Society
Published by the American Mathematical Society, the Proceedings of the American Mathematical Society (PROC) is devoted to research articles of the highest quality in all areas of pure and applied mathematics.
The 2020 MCQ for Proceedings of the American Mathematical Society is 0.85.
Invariants related to the Bergman kernel of a bounded domain in $\textbf {C}^{n}$
by Tadayoshi Kanemaru
Proc. Amer. Math. Soc. 92 (1984), 198-200
In this paper we introduce biholomorphic invariants using the Bergman kernel function of a bounded domain in ${{\mathbf {C}}^n}$.
© Copyright 1984 American Mathematical Society
Journal: Proc. Amer. Math. Soc. 92 (1984), 198-200
MSC: Primary 32H10; Secondary 32H05
DOI: https://doi.org/10.1090/S0002-9939-1984-0754702-9
MathSciNet review: 754702
catch22: CAnonical Time-series CHaracteristics
Selected through highly comparative time-series analysis
Carl H. Lubba, Sarab S. Sethi, Philip Knaute, Simon R. Schultz, Ben D. Fulcher & Nick S. Jones
Data Mining and Knowledge Discovery, volume 33, pages 1821–1852 (2019)
Capturing the dynamical properties of time series concisely as interpretable feature vectors can enable efficient clustering and classification for time-series applications across science and industry. Selecting an appropriate feature-based representation of time series for a given application can be achieved through systematic comparison across a comprehensive time-series feature library, such as those in the hctsa toolbox. However, this approach is computationally expensive and involves evaluating many similar features, limiting the widespread adoption of feature-based representations of time series for real-world applications. In this work, we introduce a method to infer small sets of time-series features that (i) exhibit strong classification performance across a given collection of time-series problems, and (ii) are minimally redundant. Applying our method to a set of 93 time-series classification datasets (containing over 147,000 time series) and using a filtered version of the hctsa feature library (4791 features), we introduce a set of 22 CAnonical Time-series CHaracteristics, catch22, tailored to the dynamics typically encountered in time-series data-mining tasks. This dimensionality reduction, from 4791 to 22, is associated with an approximately 1000-fold reduction in computation time and near linear scaling with time-series length, despite an average reduction in classification accuracy of just 7%. catch22 captures a diverse and interpretable signature of time series in terms of their properties, including linear and non-linear autocorrelation, successive differences, value distributions and outliers, and fluctuation scaling properties. We provide an efficient implementation of catch22, accessible from many programming environments, that facilitates feature-based time-series analysis for scientific, industrial, financial and medical applications using a common language of interpretable time-series properties.
Time series, ordered sets of measurements over time, enable the study of the dynamics of real-world systems and have become a ubiquitous form of data. Quantitative analysis of time series is vital for countless domain applications, including in industry (e.g., to detect production irregularities), finance (e.g., to identify fraudulent transactions), and medicine (e.g., to diagnose pathological heartbeat patterns). As modern time-series datasets have grown dramatically in size, there is a pressing need for efficient methods to solve problems including time-series visualization, clustering, classification, and anomaly detection.
Many applications involving time series, including common approaches to clustering and classification, are based on a defined measure of similarity between pairs of time series. A straightforward similarity measure—for short, aligned time series of equal length—quantifies whether time-series values are close on average (across time) (Berndt and Clifford 1994; Vlachos et al. 2002; Moon et al. 2001; Faloutsos et al. 1994; Ye and Keogh 2009). This approach does not scale well, often quadratic in both number of time series and series length (Bagnall et al. 2017), due to the necessity to compute distances (often using expensive elastic metrics) between all pairs of objects. An alternative approach involves defining time-series similarity in terms of extracted features that are the output of time-series analysis algorithms (Fulcher and Jones 2014; Fulcher 2018). This approach yields an interpretable summary of the dynamical characteristics of each time series that can then be used as the basis of efficient classification and clustering in a feature space using conventional machine-learning methods.
The number of time-series analysis methods that have been devised to convert a complex time-series data stream into an interpretable set of real numbers is vast, with contributions from a diverse range of disciplinary applications. Some examples include standard deviation, the position of peaks in the Fourier power spectrum, temporal entropy, and many thousands of others (Fulcher 2018; Fulcher et al. 2013). From among this wide range of possible features, selecting a set of features that successfully captures the dynamics relevant to the problem at hand has typically been done manually without quantitative comparison across a variety of potential candidates (Timmer et al. 1993; Nanopoulos et al. 2001; Mörchen 2003; Wang et al. 2006; Bagnall et al. 2012). However, subjective feature selection leaves uncertain whether a different feature set may have optimal performance on a task at hand. Addressing this shortcoming, recent methods have been introduced that take a systematic, data-driven approach involving large-scale comparisons across thousands of time-series features (Fulcher et al. 2013; Fulcher and Jones 2017). This 'highly-comparative' approach involves comparison across a comprehensive collection of thousands of diverse time-series features and has recently been operationalized as the hctsa (highly comparative time-series analysis) toolbox (Fulcher et al. 2013; Fulcher and Jones 2017, 2014). hctsa has been used for data-driven selection of features that capture the informative properties in a given dataset in applications ranging from classifying Parkinsonian speech signals (Fulcher et al. 2013) to identifying correlates of mouse-brain dynamics in the structural connectome (Sethi et al. 2017). These applications have demonstrated how automatically constructed feature-based representations of time series can, despite vast dimensionality reduction, yield competitive classifiers that can be applied to new data efficiently (Fulcher and Jones 2014). Perhaps most importantly, the selected features provide interpretable understanding of the differences between classes, and therefore a path towards a deeper understanding of the underlying dynamical mechanisms at play.
Selecting a subset of features from thousands of candidate features is computationally expensive, making the highly-comparative approach unfeasible for some real-world applications, especially those involving large training datasets (Bandara et al. 2017; Shekar et al. 2018; Biason et al. 2017). Furthermore, the hctsa feature library requires a Matlab license to run, limiting its widespread adoption. Many more real-world applications of time-series analysis could be tackled using a feature-based approach if a reduced, efficient subset of features, that capture the diversity of analysis approaches contained in hctsa, was available.
In this study we develop a data-driven pipeline to distill reduced subsets of the most useful and complementary features for classification from thousands of initial candidates, such as those in the hctsa toolbox. Our approach involves scoring the performance of each feature independently according to its classification accuracy across a calibration set of 93 time-series classification problems (Bagnall et al.). We show that the performance of an initial (filtered) pool of 4791 features from hctsa (mean class-balanced accuracy across all tasks: 77.2%) can be well summarized by a smaller set of just 22 features (71.7%). We denote this high-performing subset of time-series features as catch22 (22 CAnonical Time-series CHaracteristics). The catch22 feature set: (1) computes quickly (\(\sim \) 0.5 s/ 10,000 samples, roughly a thousand times faster than the full hctsa feature set in Matlab) and empirically scales almost linearly, \({\mathcal {O}}(N^{1.16})\); (2) provides a low-dimensional summary of time series into a concise set of interpretable characteristics that are useful for classification of diverse real-world time-series; and (3) is implemented in C with wrappers for python, R, and Matlab, facilitating fast time-series clustering and classification. We envisage catch22 expanding the set of problems—including scientific, industrial, financial, and medical applications—that can be tackled using a common feature-based language of canonical time-series properties.
We here describe the datasets we evaluate features on, the time-series features contained in the hctsa toolbox (Fulcher et al. 2013; Fulcher and Jones 2017), and the selection pipeline to generate a reduced feature subset.
To select a reduced set of useful features, we need to define a measure of usefulness. Here we use a diverse calibration set of time-series classification tasks from the UEA/UCR (University of East Anglia and University of California, Riverside) Time-Series Classification Repository (Bagnall et al. 2017). The number of time series per dataset ranges from 28 ('ECGMeditation') to 33,274 ('ElectricalDevices') adding up to a total of 147,198 time series. Individual time series range in length from 24 samples ('ItalyPowerDemand') to 3750 samples ('HeartbeatBIDMC'), and datasets contain between 2 classes (e.g., 'BeetleFly') and 60 classes ('ShapesAll'). For 85 of the 93 datasets, unbalanced classification accuracies were provided for different shape-based classifiers such as dynamic time warping (DTW) (Berndt and Clifford 1994) nearest neighbor, as well as for hybrid approaches such as COTE (Bagnall et al. 2016). All unbalanced accuracies, \(a^\text {ub}\), were computed using the fixed training-test split provided by the UCR repository, as the proportion of class predictions that matched the actual class labels:
$$\begin{aligned} a^\text {ub}(y, {\hat{y}}) = \frac{1}{n_\text {TS}} \sum _{l=1}^{n_\text {TS}} \mathbb {1}{({\hat{y}}_l = y_l)}, \end{aligned}$$
where \(y_l\) is the actual class, \({\hat{y}}_l\) is the predicted class, \(n_\text {TS}\) is the number of time series in the test dataset, and \(\mathbb {1}\) is the indicator function.
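As a concrete illustration of the unbalanced accuracy defined above, the following minimal Python sketch computes it from arrays of true and predicted labels; the function and variable names are illustrative and are not taken from the authors' released code.

import numpy as np

def unbalanced_accuracy(y_true, y_pred):
    # Fraction of test time series whose predicted class matches the true label
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return np.mean(y_pred == y_true)

# Example: three of four predictions are correct, so the accuracy is 0.75
print(unbalanced_accuracy([0, 1, 1, 2], [0, 1, 2, 2]))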
Time-series features
Our aim is to obtain a data-driven subset of the most useful time-series features by comparing across as diverse a set of time-series analysis algorithms as possible. An ideal starting point for such an exercise is the comprehensive library of over 7500 features provided in the hctsa toolbox (Fulcher et al. 2013; Fulcher and Jones 2017). Features included in hctsa are derived from a wide conceptual range of algorithms, including measurements of the basic statistics of time-series values (e.g., location, spread, Gaussianity, outlier properties), linear correlations (e.g., autocorrelation, power spectral features), stationarity (e.g., StatAv, sliding window measures, prediction errors), entropy (e.g., auto-mutual information, Approximate Entropy, Lempel-Ziv complexity), methods from the physical nonlinear time-series analysis literature (e.g., correlation dimension, Lyapunov exponent estimates, surrogate data analysis), linear and nonlinear model parameters, fits, and predictive power [e.g., from autoregressive moving average (ARMA), Gaussian Process, and generalized autoregressive conditional heteroskedasticity (GARCH) models], and others (e.g., wavelet methods, properties of networks derived from time series, etc.) (Fulcher et al. 2013; Fulcher and Jones 2017). Features were calculated in Matlab 2017a (a product of The MathWorks, Natick, MA, USA) using hctsa v0.97. For each dataset, each feature was linearly rescaled to the unit interval.
We performed an initial filtering of all 7658 hctsa features based on their characteristics and general applicability. Because the vast majority of time series in the UEA/UCR repository are z-score normalized, we first removed the 766 features that are sensitive to the mean and variance of the distribution of values in a time series based on keywords assigned through previous work (Fulcher et al. 2013), resulting in a set of 6892 features. We note that on specific tasks with non-normalized data, features of the raw value distribution (such as mean, standard deviation, and higher moments) can lead to significant performance gains and that for some applications, this initial preselection is undesirable (Dau et al. 2018). Given a suitable collection of datasets in which the raw value distributions contain information about class differences, our pipeline can easily skip this preselection. We next excluded the features that frequently outputted special values, which indicate that an algorithm is not suitable for the input data, or that it did not evaluate successfully. Algorithms that produced special-valued outputs on at least one time series in more than 80% of our datasets were excluded: a total of 2101 features (across datasets, minimum: 655, maximum: 3427, mean: 1327), leaving a remaining set of 4791 features. This relatively high number of features lost during preselection reflects our strict exclusion criterion for requiring real-valued outputs across a diverse range of input data, and the restricted applicability of many algorithms (e.g., that require a minimum length of input data, require positive-valued data, or cannot deal with data containing repeated identical values). For example, the datasets with the most special-valued features are 'ElectricDevices' (3427 special-valued features), which contains 96-sample time series with many repeated values (e.g., some time series contain just 10 unique values), and 'ItalyPowerDemand' (2678 special-valued features), which consists of very short (24-sample) time series. The 4791 features that survived the preselection gave real-valued outputs on at least 90% of the time series of all datasets, and 90% of them succeeded on at least 99% of time series.
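As an illustration of the special-value filter described above, the sketch below drops any feature that produces a non-finite output on at least one time series in more than 80% of datasets. Here, feature_matrices is a hypothetical mapping from dataset name to an (n_time_series x n_features) array of hctsa outputs, introduced only for this example; it is not an object provided by the hctsa toolbox.

import numpy as np

def special_value_filter(feature_matrices, max_bad_fraction=0.8):
    # feature_matrices: dict mapping dataset name -> array of shape (n_time_series, n_features)
    names = list(feature_matrices)
    n_features = feature_matrices[names[0]].shape[1]
    bad = np.zeros((len(names), n_features), dtype=bool)
    for d, name in enumerate(names):
        # A feature fails on a dataset if any of its outputs there is NaN or infinite
        bad[d] = ~np.isfinite(feature_matrices[name]).all(axis=0)
    bad_fraction = bad.mean(axis=0)  # fraction of datasets on which each feature fails
    return np.where(bad_fraction <= max_bad_fraction)[0]  # indices of surviving features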
Performance-based selection
In contrast to typical feature-selection algorithms, which search for combinations of features that perform well together (and might therefore include features that have low individual performance), our procedure involves a pre-filtration to identify features that individually possess discriminatory power across a diverse range of real-world data, before subsequently identifying those that exhibit complementary behavior. To this end we used the pipeline depicted in Fig. 1, which evaluates the univariate classification performance of each feature on each task, combines feature-scores across tasks, and then selects a reduced set of the most useful features across a two-step filtering process which involves: (1) performance filtering: select features that perform best across all tasks, and (2) redundancy minimization: reduce redundancy between features. The method is general and is easily extendable to different sets of classification tasks, or to different initial pools of features. All analysis was performed in Python 2.7 using scikit-learn and code to reproduce all of our analyses is accessible on GitHub (https://github.com/chlubba/op_importance).
Fig. 1: Given a set of classification tasks, our pipeline selects a reduced set of high-performing features while minimizing inter-feature redundancy. (a) Statistical prefiltering: We identified features with performance consistent with that of random number generators. To this end, we derived null accuracy distributions for each feature on each task by classifying repeatedly on shuffled class labels. P-values from those null distributions were combined across datasets to identify features with performance consistent with random-number generators. (b) Performance filtering: We selected an intermediate set of top features by ranking and thresholding the combined accuracy over all datasets. (c) Redundancy minimization: Top-performing features were clustered into groups with similar performance across tasks to minimize redundancy between the final set of canonical features. We selected a single representative feature per cluster to yield a canonical feature set
Quantifying feature performance
Our pipeline (Fig. 1) requires a method to score the performance of individual features across classification tasks. We scored each feature by its ability to distinguish the labeled classes in each of our \(M = 93\) classification tasks and then computed a combined performance score for that feature across all tasks. Classification was performed using a decision tree with stratified cross validation with \(N_\text {CV}\) folds. The number of folds, \(N_\text {CV}\), was chosen separately for each task according to:
$$\begin{aligned} N_\text {CV} = \min \left\{ 10, \max \left[ 2, \min _{k=1}^{N_c}\left( \sum _{l=1}^{N_\text {TS}}\mathbb {1}{\left( y_l = k\right) }\right) \right] \right\} , \end{aligned}$$
where \(N_\text {TS}\) is the number of time series, \(N_c\) is the number of classes, and \(y_l\) is the class-label of the lth time series.
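As a concrete illustration of this rule, a minimal Python sketch (the function name is ours) that computes the number of folds from the class labels of one task:

import numpy as np

def n_cv_folds(y):
    # cap at 10 folds, use at least 2, and never exceed the smallest class size
    _, counts = np.unique(np.asarray(y), return_counts=True)
    return int(min(10, max(2, counts.min())))

# Example: classes of size 25, 4 and 40 give 4 stratified folds
# n_cv_folds([0] * 25 + [1] * 4 + [2] * 40)  -> 4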
For feature i (\(i = 1, \ldots , 4791\)) on classification task j (\(j = 1, \ldots , M\)), we computed the mean class-balanced classification accuracy across folds \(a_{i,j}\) as a performance score.
$$\begin{aligned} a_{i,j}(y, {\hat{y}}, w) = \frac{1}{\sum _{l=1}^{N_\text {TS}}{w_l}} \sum _{l=1}^{N_\text {TS}} \mathbb {1}{({\hat{y}}_l = y_l)} w_l, \end{aligned}$$
where the weights for each time series \(w_l\) compensate for imbalances in the number of samples per class, \(w_l = 1/\sum _{m=1}^{N_\text {TS}}{\mathbb {1}{(y_m = y_l)}}\). To combine scores across tasks, we computed a normalized accuracy of the jth task by dividing raw feature accuracies, \(a_{i,j}\), by the mean accuracy across all features on that task, \({\bar{a}}_j\), as follows:
$$\begin{aligned} a^\mathrm {n}_{i,j} = \frac{a_{i,j}}{{\bar{a}}_j}. \end{aligned}$$
This allowed us to quantify the performance of each feature on a given task relative to the performances of other features in our library.
Finally, the combined feature-accuracy-score across all tasks, \(a^\mathrm {n,c}_i\), was calculated as the mean over normalized accuracies, \(a^\mathrm {n}_{i,j}\), on our \(M = 93\) tasks:
$$\begin{aligned} a^\mathrm {n,c}_{i} = \frac{1}{M}\sum _{j=1}^M a^\mathrm {n}_{i,j}. \end{aligned}$$
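The scoring and combination steps defined by the three equations above can be sketched in a few lines of NumPy; the function names and the layout of the accuracy matrix are illustrative assumptions of ours, not the released code:

import numpy as np

def class_balanced_accuracy(y_true, y_pred):
    # a_{i,j}: each time series is weighted by the inverse of its class size
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes, counts = np.unique(y_true, return_counts=True)
    size = {c: n for c, n in zip(classes.tolist(), counts.tolist())}
    w = np.array([1.0 / size[c] for c in y_true.tolist()])
    return float(np.sum((y_pred == y_true) * w) / np.sum(w))

def combined_normalized_accuracy(acc):
    # acc[i, j]: class-balanced accuracy of feature i on task j (mean over folds)
    acc = np.asarray(acc, dtype=float)
    a_norm = acc / acc.mean(axis=0, keepdims=True)   # a^n_{i,j}
    return a_norm.mean(axis=1)                       # a^{n,c}_i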
Statistical prefiltering
Given the size and diversity of features in hctsa, we first wanted to determine whether some features exhibited performance consistent with chance on this set of classification tasks. To estimate a p-value for each feature on each classification task, we generated null accuracy distributions, \(a^s_{i,j}\) (the superscript s indicating 'shuffled'), using a permutation-based procedure that involved repeated classification on randomly shuffled class labels, shown in Fig. 1a. The computational expense of estimating \(\sim \) 440,000 p-values using permutation testing, one for each of the 4791 features on each of the 93 problems, scales linearly with the number of repetitions. To limit computation time to within reasonable bounds, we fitted a Gaussian approximation, estimated from 1000 null samples for each feature-task combination, \(a^s_{i,j}\), and used it to estimate p-values at a resolution beyond the limits of traditional permutation testing with this many null samples (i.e., 0.001). From visual inspection, the distributions were mostly unimodal and approximately normally distributed and, as expected, had higher variance on datasets with fewer time series.
The p-values for a given feature across all classification tasks were combined using Fisher's method (Fisher 1925) and corrected for multiple hypothesis testing across features using the Holm-Bonferroni method (Holm 1979) at a significance level of 0.05.
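A minimal sketch of this statistical prefiltering, assuming null accuracies have already been generated and a SciPy version that provides scipy.stats.combine_pvalues; the function names are ours:

import numpy as np
from scipy import stats

def gaussian_pvalue(null_accs, observed_acc):
    # Gaussian approximation to the permutation null: probability of an
    # accuracy at least as high as the observed one under shuffled labels
    mu, sigma = np.mean(null_accs), np.std(null_accs)
    return stats.norm.sf(observed_acc, loc=mu, scale=sigma)

def chance_level_features(pvals, alpha=0.05):
    # pvals: array of shape (n_features, n_tasks)
    n_features = pvals.shape[0]
    fisher_p = np.array([stats.combine_pvalues(p, method='fisher')[1] for p in pvals])
    # Holm-Bonferroni step-down correction across features
    significant = np.zeros(n_features, dtype=bool)
    for rank, idx in enumerate(np.argsort(fisher_p)):
        if fisher_p[idx] <= alpha / (n_features - rank):
            significant[idx] = True
        else:
            break
    # features that are NOT significant behave like random-number generators
    return ~significant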
Selecting a canonical set of features
From the features that performed significantly better than chance, we selected a subset of \(\beta \) high-performing features by ranking them by their combined normalized accuracy \(a^\mathrm {n,c}\) (Fig. 1b), comparing values in the range \(100 \le \beta \le 1000\). As shown in Fig. 1c, we then aimed to reduce the redundancy in these top-performing features, defining redundancy in terms of patterns of performance across classification tasks. To achieve this, we used hierarchical clustering on the Pearson correlation distance, \(d_{ij} = 1 - r_{ij}\), between the M-dimensional performance vectors of normalized accuracies \(a^\mathrm {n}\) of features i and j. Clustering was performed using complete linkage at a threshold \(\gamma = 0.2\) to form clusters of similarly performing features that are all inter-correlated by \(r_{ij} > 1 - \gamma \) (for all i, j in the cluster). We then selected a single feature to represent each cluster, comparing two different methods: (i) the feature with the highest normalized accuracy combined across tasks, and (ii) manual selection of representative features to favour interpretability (while also taking into account computational efficiency and classification performance).
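The clustering and representative-selection step can be sketched with SciPy's hierarchical-clustering tools; this is an illustrative reimplementation under our own names and data layout, not the released pipeline:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_and_pick(norm_accs, gamma=0.2):
    # norm_accs: array (n_top_features, n_tasks) of normalized accuracies a^n
    R = np.corrcoef(norm_accs)          # correlation of performance vectors
    D = 1.0 - R                         # Pearson correlation distance d_ij
    np.fill_diagonal(D, 0.0)
    Z = linkage(squareform(D, checks=False), method='complete')
    labels = fcluster(Z, t=gamma, criterion='distance')
    combined = norm_accs.mean(axis=1)   # a^{n,c} per feature
    # automatic choice: the highest-scoring feature of each cluster
    reps = [members[np.argmax(combined[members])]
            for members in (np.where(labels == c)[0] for c in np.unique(labels))]
    return labels, sorted(reps)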
Overall classification performance
To evaluate the classification performance of different feature sets, and compare our feature-based classification to alternative time-series classification methods, we used two different accuracy measures. Comparisons between different sets of hctsa-features were based on the mean class-balanced accuracy across M tasks and \(N_\text {CV}\) cross-validation folds:
$$\begin{aligned} a_\text {tot} = \frac{1}{M}\sum _{j=1}^{M}\frac{1}{N_{\text {CV},j}}\sum _{k=1}^{N_{\text {CV},j}} a_{j,k}. \end{aligned}$$
When comparing our feature sets to existing methods we used the mean unbalanced classification accuracy across tasks as in Eq. (1) on the given train-test split to match the metric used for the accuracies supplied with the UEA/UCR repository:
$$\begin{aligned} a^\text {ub}_\text {tot} = \frac{1}{N_\text {tasks}}\sum _{j=1}^{N_\text {tasks}} a^\text {ub}_{j}. \end{aligned}$$
Execution times and scaling
One of the merits of a small canonical feature set for time-series characterization is that it is quick to compute. To compare the execution time of different feature sets, we used a benchmark set of 40 time series from different sources, including simulated dynamical systems, financial data, medical recordings, meteorology, astrophysics, and bird sounds (see Sect. 5.3 for a complete list). To estimate how features scale with time-series length, we generated multiple versions for each of our 40 reference time series of different lengths from 50 to 10,000 samples. Lengths were adapted by either removing points after a certain sample count or by up-sampling of the whole time series to the desired length. Execution times were obtained on a 2.2 GHz Intel Core i7, using single-threaded execution (although we note that feature calculation can be parallelized straightforwardly).
Selecting the two most informative features from a small subset
For the purpose of quickly analyzing a dataset visually in feature space, it can be helpful to identify the two features that, taken together, are the most informative to distinguish between time-series classes of the selected dataset. To this end, we used sequential forward selection (Whitney 1971; Fulcher and Jones 2014) that first selects a single feature which achieves the best mean class-balanced accuracy across cross-validation folds and then iterates over the remaining features to select the one that, combined with the first feature, reaches the best accuracy.
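A minimal sketch of this two-feature forward selection, assuming a recent scikit-learn that provides the 'balanced_accuracy' scorer; the function name and interface are ours:

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def best_two_features(X, y, n_folds=10):
    # X: (n_time_series, n_features) feature matrix; y: class labels
    clf = DecisionTreeClassifier(random_state=0)

    def cv_acc(cols):
        return cross_val_score(clf, X[:, cols], y, cv=n_folds,
                               scoring='balanced_accuracy').mean()

    first = max(range(X.shape[1]), key=lambda j: cv_acc([j]))
    second = max((j for j in range(X.shape[1]) if j != first),
                 key=lambda j: cv_acc([first, j]))
    return first, second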
We present results of using our pipeline to obtain a canonical set of 22 time-series features from an initial pool of 4791 candidates. We name our set catch22 (22 CAnonical Time-series CHaracteristics); it retains approximately 90% of the classification performance of the initial feature pool and computes in less than 0.5 s per 10,000-sample time series.
Performance diversity across classification tasks
We first analyze the 93 classification tasks, which are highly diverse in their properties (see Sect. 2.1) and difficulty, as shown in Fig. 2. We characterized the similarity of two tasks in terms of the types of features that perform well on them, as the Pearson correlation coefficient between accuracies of all features, shown in Fig. 2a. The figure reveals a diversity of performance signatures across tasks: for some groups of tasks, similar types of features contribute to successful classification, whereas very different types of features are required for other tasks. The 93 tasks also vary markedly in their difficulty, as judged by the distribution of accuracies, \(a_{i,j}\), across tasks, shown in Fig. 2b. We normalized feature accuracies across tasks by dividing them by the mean accuracy of the task at hand, Eq. (4), yielding normalized accuracies, \(a_{i,j}^\mathrm {n}\), that were comparable across tasks, shown in Fig. 2c. Note that this normalized accuracy scores features relative to all other features on a given task. The red line in Fig. 2c shows the distribution of the mean normalized accuracies across tasks \(a_i^{n,c}\), Eq. (5).
Fig. 2 Normalization of feature accuracies allows comparison of performance scores across a diverse set of 93 classification tasks. a A \(93 \times 93\) matrix of Pearson correlation coefficients between the 4791-dimensional accuracy vectors of pairs of tasks, reordered according to hierarchical linkage clustering. b Each line shows the smoothed distribution over feature-accuracies on one classification task. Differences in task difficulty are reflected by a wide range of accuracies. c The accuracies plotted in b were normalized by the mean accuracy of each task, as in Eq. (4). The red line shows the distribution of normalized and combined accuracies across all tasks, Eq. (5) (Color figure online)
Features with performance consistent with chance
To detect whether some features in hctsa exhibit a combined performance across classification tasks that is consistent with the performance of a random-number generator, we used a permutation-testing based procedure (described in Sect. 2.5). At a significance level \(p < 0.05\), 145 of the 4791 features (or 3%) exhibited chance-level performance. These 145 features (listed in "Appendix" Table 3) were mostly related to quantifying complex dynamics in longer time series, such as nonlinear time-series analysis methods and long-range automutual information: properties that are not meaningful for the short, shape-based time-series patterns that dominate the UEA/UCR database.
Top-performing features
As a second step in our pipeline, we ranked features by their combined accuracy across tasks and then selected a subset of \(\beta \) best performers. How important is the choice of \(\beta \)? Fig. 3a shows how the relative difference in classification accuracy between the full and reduced sets, \((a_{\mathrm{tot,full}}-a_{\mathrm{tot,subset}})/a_{\mathrm{tot,full}}\) (blue line), and the computation time (red line) evolve when increasing the number of clusters (1–50) into which the top-performing features are grouped. The relative difference in classification accuracy saturated at under 10% for between 20 and 30 selected features, showing that this modest number of estimators covers most of the diversity of the full set. Error bars signify the standard deviation over accuracies and computation times when starting from different numbers of top performers \(\beta = 100, 200, 300, \ldots , 1000\). Their tightness demonstrates that the accuracy of the final feature subset was not highly sensitive to the value of \(\beta \); computation time is more variable. To obtain a reduced set of high-performing features, we used a threshold on the combined normalized accuracy \(a^{n,c}\) of one standard deviation above the mean, \(a_\mathrm {th} = \overline{a^\mathrm {n,c}} + \sigma _{a^\mathrm {n,c}}\), shown in Fig. 3b, yielding a set of 710 features.
Fig. 3 The mean classification performance of the full feature set can be well approximated (to within 10%) by as few as 20 features. a While computation time is observed to rise linearly with an increasing number of single selected features, the relative difference in accuracy to the full set of 4791 features starts to saturate at around 20 features. The performance of our final set of features is not highly sensitive to the size of our intermediate set of top-performing features, \(\beta \). Error bars signify standard deviation over both relative loss in accuracy and computation time for different numbers of top features (\(\beta = 100, 200, \ldots , 1000\)), which were clustered to obtain single features (see Methods Sect. 2.6). b We select the number of top features from a relative threshold on the combined normalized accuracy across tasks, shown as a dashed blue vertical line, yielding a set of 710 high-performing features. c High-performing features were clustered on performance-correlation distances using hierarchical complete linkage clustering with a distance threshold \(\gamma \) of 0.2, yielding 22 clusters (Color figure online)
A canonical feature set, catch22
We reduced inter-feature redundancy in hctsa (Fulcher et al. 2013), by applying hierarchical complete linkage clustering based on the correlation distances between performance vectors of the set of 710 high-performing features, as shown in Fig. 3c. Clustering at a distance threshold \(\gamma = 0.2\) (see Sect. 2.6) yielded 22 clusters of similarly-performing features, where the correlation of performance vectors between all pairs of features within each cluster was greater than 0.8. Different values of \(\gamma \) correspond to different penalties for redundancy; e.g., higher values (\(\gamma > 0.4\)) group all features into a single cluster, whereas low values would form many more clusters and increase the size and complexity of computing the resulting canonical feature set. We found \(\gamma = 0.2\) to represent a good compromise that yields a resulting set of 22 clusters that matches the saturation of performance observed between 20 and 30 features (Fig. 3a).
We next aimed to capture the behavior of each of the 22 clusters as a single feature with the most representative behavior of its cluster. We first achieved this automatically: selecting the feature with the highest combined normalized accuracy from each cluster. When classifying our tasks with this set of 22 best estimators, it reached an overall class-balanced accuracy over folds and tasks \(a^\text {tot}\), Eq. (7), of \(\sim \) 70%, compared to \(\sim \) 77% using the full set. However, it is desirable for our 22 features to be as fast and easily interpretable as possible. For 6 of the 22 clusters, the top-performing feature was relatively complicated to compute and only offered a small improvement in performance relative to simpler features with similar performance in the same cluster. In these cases, we manually selected a simpler and more interpretable feature, yielding a final canonical set of 22 features which we call catch22 (CAnonical Time-series CHaracteristics). The 22 features that make up catch22 are described in Table 1. The catch22 features reflect the diverse and interdisciplinary literature of time-series analysis methods that have been developed to date (Fulcher et al. 2013), simultaneously probing different types of structure in the data, including properties of the distribution of values in the time series, its linear and non-linear autocorrelation, predictability, scaling of fluctuations, and others.
Table 1 The catch22 feature set spans a diverse range of time-series characteristics representative of the diversity of interdisciplinary methods for time-series analysis
Using the diverse canonical catch22 feature set, the mean class-balanced accuracy across all datasets, \(a^\text {tot}\), of catch22 was \(\sim \) 72%, a small reduction relative to the \(\sim \) 77% achieved when computing all 4791 features and very similar to the \(\sim \) 70% of the 22 highest-ranked features in each cluster. See Fig. 4a for a dataset-by-dataset scatter. The change in mean accuracy across folds and tasks, \(a^\text {tot}\), from using the 22 features of catch22 instead of all 4791 features depended on the properties of a given dataset, but there was an average reduction in class-balanced accuracy (mean across folds) of 7.5% relative to the full-set accuracy (77.2% full vs. 71.7% canonical). For some difficult problems, the increased computational expense of the full set of 4791 features yields a large boost in classification accuracy (accuracy of catch22 lower by a relative difference of 37% for the dataset 'EthanolLevel'; 50.2% full vs. 31.8% catch22). The reduced set gave better mean performance in only a small number of cases: e.g., for 'ECGMeditation' with 60% full versus 81.2% catch22; given that this dataset contained just 28 time series and had a high standard deviation in accuracies of the full set between folds (35.3%), the performance might not be significantly increased.
Fig. 4 The catch22 set of 22 features approximates the classification performance of all 4791 features despite a dramatic reduction in computation time. a Each point represents one dataset, showing its balanced accuracy based on the catch22 feature set (x-axis) and the full set of 4791 features (y-axis). Error bars signify standard deviation across cross-validation folds. catch22 performs only a relative 7.5% worse than the full set of 4791 features: 71.7% versus 77.2% mean class-balanced accuracy across tasks \(a^\text {tot}\) as defined in Eq. (6). b Bars represent the average over serial computation times for each of our 40 reference time series at a length of 10,000 samples using the full set of 4791 features, catch22 in Matlab, and catch22 in C. From the full set in Matlab to catch22 in C, computation time decreases from \(\sim \)300 s to less than 0.5 s. c Each dot shows computation time for one of the 40 reference time series adjusted to different lengths for the C-implemented catch22 set. The linear fit in the logarithmic plot reveals an almost linear scaling, with a scaling exponent of 1.16. See Sect. 2.8 for a description of the data
Fig. 5 Our automatically selected catch22 feature set performs as well as the standard feature set for simple time series contained in the tsfeatures package. Class-balanced accuracy is shown for tsfeatures and catch22; error bars indicate standard deviation across folds. A gray dashed equality line is annotated, and particular datasets with the greatest differences in accuracy are highlighted as red circles and labeled
How does the performance of the data-driven features, catch22, compare to other small feature sets proposed in the literature? One popular collection of features is the manually-curated tsfeatures package (Hyndman et al. 2019) of which certain features were used for forecasting (Bandara et al. 2017), anomaly detection (Hyndman et al. 2016), and clustering (Williams 2014). While not being explicitly optimized for classification and clustering, its widespread adoption demonstrates its versatility in characterizing time series and makes it an interesting candidate to compare with catch22. We classified all datasets based on the 16 default features of tsfeatures (version 1.0.0) listed in "Appendix" Sect. 5.4. Reassuringly, the class-balanced accuracies of both feature sets were very similar across the generic UEA/UCR datasets, with a Pearson correlation coefficient \(r = 0.93\) (Fig. 5). The mean accuracy across tasks and folds, \(a^\text {tot}\), was slightly higher for catch22 (71.7%) than tsfeatures (69.4%). Our pipeline is general, and can select informative subsets of features for any collection of problems; e.g., for a more complex set of time-series classification tasks, our pipeline may yield estimators of more distinctive and complex dynamics.
How diverse are the features in catch22? Fig. 6 displays the class-balanced accuracies of each of the catch22 features (rows) on each task (columns), z-normalized by task. Some groups of tasks recruit the same types of features for classification (reflected by groups of columns with similar patterns). Patterns across rows capture the characteristic performance signature of each feature, and are visually very different, reflecting the diversity of features that make up catch22. This diversity is key to being able to probe the different types of temporal structure required to capture specific differences between labeled classes in different time-series classification tasks in the UEA/UCR repository. Feature performances often match the known dynamics in the data; e.g., for the two datasets 'FordA' and 'FordB', in which manual inspection reveals class differences in the low-frequency content, the most successful feature is 'CO_FirstMin_ac', which finds the first minimum of the autocorrelation function. In some datasets, high performance can be attained using just a single feature, e.g., in 'ChlorineConcentration' ('SB_motifThree_quantile.hh', 52.3% versus 67.5% class-balanced mean accuracy over folds, \(a\), for catch22 vs. all features) and 'TwoPatterns' ('CO_trev_1.num', 73.4% vs. 88.1%).
Fig. 6 The canonical features in catch22 are sufficiently diverse to enable high performance across diverse classification tasks. The matrix shows class-balanced accuracies, z-scored per task (column), truncated at ± 3, and was reordered by hierarchical linkage clustering based on correlation distance in both columns (93 classification tasks) and rows (22 features). Similar columns are visible for datasets of the same type. The catch22 features each show strengths and weaknesses, and their diversity allows them to complement each other across a range of tasks (Color figure online)
Computation time and complexity
The classification performance using all 4791 features is well approximated by the 22 features in catch22, but how much computational effort does it save? To minimize execution time and make our condensed subset accessible from all major ecosystems used by the data-mining community, we implemented all catch22 features in C and wrapped them for R, Python and Matlab. All code is accessible on GitHub (https://github.com/chlubba/catch22). Using this C-implementation, the catch22 feature set can be computed sequentially on all 93 datasets of the UEA/UCR repository in less than 15 min on an Intel Core i7. On average, the features for each dataset were calculated within 9.4 s, the slowest being 'StarLightCurves' with 97 s due to its many (9236) relatively long (1024 samples) time series. The 27 quickest datasets stayed below 1 s in computation time; the three quickest, 'BirdChicken', 'Coffee', and 'BeetleFly' took less than 0.25 s.
While time series contained in the UEA/UCR repository are usually short, with an average length of 500 samples, real-world recordings can be substantially longer. Therefore, to understand how the computation times of our feature set scale with time-series lengths above those available in the UEA/UCR repository, we used a set of 40 reference time series from diverse sources (described in Sect. 2.8) to evaluate execution times of all hctsa- and the catch22-features for longer time series. Figure 4b shows execution times of different feature sets as a mean over our 40 reference time series at length 10,000. The Matlab implementation of catch22 accelerates computation time by a factor of \(\sim \) 30 compared to the full set of 4791 from \(\sim \) 300 to \(\sim \) 10 s. The C-implementation of catch22 again reduces execution time by a factor of approximately 30 compared to the Matlab implementation to \(\sim \)0.3 s at 10,000 samples, signifying an approximately 1000-fold acceleration compared to the full hctsa feature set in Matlab. The C-version of catch22 exhibits near-linear computational complexity, \({\mathcal {O}}(N^{1.16})\), as shown in Fig. 4c. Features varied markedly in their execution time, ranging from (C-implemented) DN_HistogramMode_10 (\(<0.1\) ms for our 10,000-sample reference series) to PD_PeriodicityWang_th0_01 (79 ms), with the latter representing approximately one third of the total computation time for catch22. A further acceleration by a factor of 3 could be achieved through parallelization, limited by the slowest feature PD_PeriodicityWang_th0_01 which takes up one third of the overall computation time.
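The scaling exponent reported above can be estimated from such timing measurements by a linear fit in log-log space; a minimal sketch (our own helper, not part of catch22):

import numpy as np

def scaling_exponent(lengths, times):
    # fit time ~ c * N^alpha; the slope of the log-log fit is alpha (~1.16 here)
    slope, _ = np.polyfit(np.log10(lengths), np.log10(times), 1)
    return slope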
Performance comparison
Compared to conventional shape-based time-series classifiers, which use distinguishing patterns in the time domain as the basis for classification (Fulcher and Jones 2014; Fulcher 2018), feature-based representations can reduce high-dimensional time series down to a compact and interpretable set of important numbers, constituting a dramatic reduction in dimensionality. While this computationally efficient representation of potentially long and complex streams of data is appealing, important information may be lost in the process, resulting in poorer performance than alternative methods that learn classification rules on the full time-series object. To investigate this, we compared the classification performance of catch22 (using a decision tree classifier, as for all other classification analyses in this work; see Sect. 2.4) to that of 36 other classifiers (accuracies obtained from the UEA/UCR repository; Bagnall et al. 2017), including shape-based approaches like Euclidean or DTW nearest neighbor, ensembles of different elastic distance metrics (Lines and Bagnall 2015), interval methods, shapelets (Ye and Keogh 2009), dictionary-based classifiers, and complex transformation ensemble classifiers that combine multiple time-series representations (COTE) (Bagnall et al. 2016). All comparisons are based on (class-unbalanced) classification accuracies \(a^\text {ub}_\text {tot}\) on a fixed train-test split obtained from the UEA/UCR classification repository. As shown in Fig. 7a, most datasets exhibit similar performance between the alternative methods and catch22, with a majority of datasets exhibiting better performance using existing algorithms than catch22. However, despite drastic dimensionality reduction, our feature-based approach outperforms the existing methods on a range of datasets, some of which are labeled in Fig. 7a. To better understand the strengths and weaknesses of our low-dimensional feature-based representation of time series, we compared it directly to two of the most well-studied and purely shape-based time-series classification methods: Euclidean-1NN and DTW-1NN ('DTW-R1-1NN' in the UEA/UCR repository), as shown in Fig. 7b. There is an overall high correlation in performance across datasets, with a range of average performance (unbalanced classification rate on the given train-test partition \(a^\text {ub}_\text {tot}\)): catch22 (69%), Euclidean 1-NN (71%), and DTW 1-NN (74%). The most interesting datasets are those for which one of the two approaches (shape-based or feature-based) markedly outperforms the other, as in these cases there is a clear advantage to tailoring the classification method to the structure of the data (Fulcher 2018); selected examples are annotated in Fig. 7b. We next investigate the characteristics of time-series datasets that make them better suited to different classification approaches.
Fig. 7 Despite massive dimensionality reduction to 22 features, the catch22 representation often achieves similar or better performance on time-series classification tasks. a Classification accuracy using the feature-based catch22 representation is plotted against the performance of a range of existing methods across the 93 tasks in the UEA/UCR repository. Each dot represents the mean accuracy of alternative classifiers on a given dataset; error bars show the standard deviation over the 36 other methods considered, which range from simple full-sequence shape-based approaches over elastic-distance ensembles, shapelets, and interval methods to complex transformation ensembles. An equality gray-dashed line is plotted, and regions in which catch22 or other methods perform better are labeled. b The two purely shape-based classifiers, Euclidean (blue circles) and DTW (green circles) 1-nearest-neighbor, are compared against catch22 features and a classification tree. All accuracies are unbalanced, as in Eq. (1), and evaluated on the fixed train-test split provided in the UEA/UCR repository (Color figure online)
Characteristics of datasets that favor feature- or shape-based representations
There is no single representation that is best for all time-series datasets, but rather, the optimal representation depends on the structure of the dataset and the questions being asked of it (Fulcher 2018). In this section we characterize the properties of selected datasets that show a strong preference for either feature-based or shape-based classification, as highlighted in Fig. 7.
One striking example is that of 'ShapeletSim', where the two labeled classes are much more accurately distinguished using the catch22 feature-based representation (unbalanced accuracy \(a^\text {ub}\) of 100%) than by all but two existing methods [BOSS (Schäfer 2015) and Fast Shapelets (Rakthanmanon and Keogh 2013)], with a mean and standard deviation over all other classifiers of 69.0 ± 18.7% (DTW-1NN 65%, Euc-1NN 53.9%). To understand the discrepancy, we visualized the data in the time domain, as shown in Fig. 8a (upper), where one example time series and the mean in each class are plotted, revealing no consistent time-domain shape across the 100 instances of each class. However, the two classes of time series are clearly distinguished by their frequency content, as shown in the corresponding Welch power spectra in Fig. 8a (lower). The features in catch22 capture the temporal autocorrelation properties of each time series in various ways, facilitating an efficient representation that successfully captures the class differences in 'ShapeletSim'; these differences cannot be captured straightforwardly from the time series' shape. In general, datasets without reliable shape differences between classes pose problems for time-domain distance metrics; consequently, the catch22 feature-based representation often yields superior classification performance. Examples are 'USOLeaf' (86.7% catch22 vs. 69.5 ± 13.3% others; DTW-1NN 59.1%, Euc-1NN 52.1%) and 'SmallKitchenAppliances' (73.3% vs. 63.3 ± 12.1%; DTW-1NN 64.3%, Euc-1NN 34.4%).
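To illustrate how such frequency-domain class differences can be made visible, the following sketch averages Welch power spectra per class with SciPy; it assumes equal-length time series (as in 'ShapeletSim'), and the function name is ours:

import numpy as np
from scipy.signal import welch

def mean_class_spectra(time_series, labels, fs=1.0):
    # time_series: list of equal-length 1-D arrays; labels: class label per series
    spectra = {}
    for c in np.unique(labels):
        psds = []
        for x, l in zip(time_series, labels):
            if l != c:
                continue
            f, pxx = welch(x, fs=fs, nperseg=min(256, len(x)))
            psds.append(pxx)
        spectra[c] = (f, np.mean(psds, axis=0))
    return spectra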
Fig. 8 Differences in the frequency domain are better picked up by features; subtle differences in shape are better detected by shape-based methods. Each subplot represents a class; blue lines show individual time series, red lines show an average over all time series in one class. a In the time domain (upper two plots), we show one example time series of the 'ShapeletSim' dataset (blue) and the average across all time series (red) for each class. The lower two plots display the Welch spectra of all time series individually in blue and the average over single time-series spectra in red. The mean spectra of the two classes differ visibly, while there is no reliable difference in the time domain. b The individual (blue) and averaged (red) time series of the dataset 'Plane' should favor shape-based comparisons because of the highly reliable and aligned shapes in each class. c For the dataset 'CinCECGtorso', all four classes can be well distinguished by their temporal offsets (Color figure online)
An example of a dataset that is well-suited to shape-based classification is the seven-class 'Plane' dataset, shown in Fig. 8b. Apart from a minority of anomalous instances in e.g., the 'Harrier' class, each class has a subtle but robust shape, and these shapes are phase-aligned, allowing shape-based classifiers to accurately capture class differences. Despite being visually well-suited to shape-based classification, catch22 captures the class differences with only a small reduction in accuracy \(a^\text {ub}\) (89.5%) compared to the shape-based classifiers (99.2 ± 1.4% over all given classifiers; DTW-1NN 100%, Euc-1NN 96.1%), demonstrating that feature-based representations can be versatile in capturing differences in time-series shape, despite a substantial reduction in dimensionality.
As a final example we consider the four classes of the 'CinCECGtorso' dataset, which are similarly accurately classified by our catch22 feature-based method (78.9%) and the average existing classifier (81.3 ± 13.3%). Interestingly, when comparing selected shape-based classifiers in Fig. 7, Euclidean-1NN (89.7%) outperforms the more complex DTW-1NN (65.1%). This difference in performance is due to the subtle differences in shape (particularly the temporal offset of the deviation from zero) between the four classes, as shown in Fig. 8c. Simple time-domain distance metrics like Euclidean-1NN capture these important differences well, whereas elastic distance measures like DTW obscure the informative temporal offsets. Converting to our feature-based representation discards most of the phase information but still leads to a high classification accuracy.
Informative features provide understanding
Concise, low-dimensional summaries of time series, which exploit decades of interdisciplinary methods development for time-series analysis, are perhaps most important for scientists because they provide a means to understand class differences. Often a researcher will favor a method that provides interpretable understanding that can be used to motivate new solutions to a problem, even if it involves a small drop in classification accuracy relative to an opaque, black-box method. To demonstrate the ability of catch22 to provide insight into class differences, we projected all datasets into a two-dimensional feature space determined using sequential forward selection (Whitney 1971), as described in Sect. 2.9. Two examples are shown in Fig. 9. In the dataset 'ShapeletSim' (Fig. 9a), the simple feature SB_BinaryStats_diff_longstretch0 clearly distinguishes the two classes. This simple measure quantifies the length of the longest run of consecutive decreases in the data, which enables a perfect separation of the two classes because time series of the 'triangle' class vary on a slower timescale than 'noise' time series.
In the most accurate two-dimensional feature space for the 7-class 'Plane' dataset, shown in Fig. 9b, each class occupies a distinctive part of the space. The first feature, FC_LocalSimple_mean3_stderr, captures the variability in the residuals of local 3-sample-mean predictions of the next datapoint, applied through time, while the second feature, SP_Summaries_welch_rect_area_5_1, captures the proportion of low-frequency power in the time series. We discover, e.g., that time series of 'F-14 wings open' are less predictable from a 3-sample running mean than other planes, and that time series of 'Harrier' planes exhibit a greater proportion of low-frequency power than other types of planes. Thus, in cases when both shape-based and feature-based methods exhibit comparable performance (unbalanced accuracies \(a^\text {ub}\) on the given split: 89.5% by catch22 vs. 99.1% mean over other classifiers), the ability to understand class differences can be a major advantage of the feature-based approach.
Fig. 9 Class differences can be interpreted using feature-based representations of time series. We plot a projection of time series into an informative two-dimensional feature space (estimated from catch22 using sequential forward selection, see Sect. 2.9), where each time series is a point in the space, colored by its class label. Plots are shown for two datasets: a 'ShapeletSim' and b 'Plane'; in both cases, all labeled classes are clearly distinguished in the space. In 'ShapeletSim', one of the two selected features is SB_BinaryStats_diff_longstretch0, which calculates the length of the longest run of consecutive decreases in the time series. The two features selected for the 'Plane' dataset are the local predictability measure, FC_LocalSimple_mean3_stderr, and the low-frequency power estimate, SP_Summaries_welch_rect_area_5_1 (Color figure online)
Feature-based representations of time series can distill complex time-varying dynamical patterns into a small set of interpretable characteristics that can be used to represent the data for applications like classification and clustering. Most importantly, features connect the data analyst to deeper theory, allowing interpretation of the properties of the data that facilitate successful performance. While large feature libraries have helped to overcome the limitations of manual, subjective curation of time-series features, they are inefficient and computationally expensive. Overcoming this limitation, here we introduce a methodology to generate small, canonical subsets of features that each display high classification performance across a given ensemble of tasks, and that exhibit complementary performance characteristics with each other. We apply the method to a set of 93 classification tasks from the UEA/UCR repository, showing how a large library of 4791 features can be reduced to a canonical subset of just 22 features, catch22, which displays similar classification accuracy to the full set (relative reduction of 7.5% on average, 77.2% vs. 71.7%), computes quickly (\(<\,0.5\) s/10,000 samples), scales approximately linearly with time-series length (\({\mathcal {O}}(N^{1.16})\)), and allows the investigator to learn and understand what types of dynamical properties distinguish the labeled classes of their dataset. Compared to shape-based methods like dynamic time warping (DTW), catch22 gives comparable, and often superior, classification performance, despite substantial dimensionality reduction. Using case studies, we explain why some datasets are better suited to shape-based classification (e.g., there are characteristic aligned shapes within each class), while others are better suited to feature-based classification (e.g., where classes do not have a characteristic, temporally aligned shape, but have characteristic dynamical properties that are encapsulated in one or more time-series features).
While some applications may be able to justify the computational expense of searching across a large feature library such as hctsa (Fulcher and Jones 2014, 2017), the availability of an efficient, reduced set of features, such as catch22, will make the advantages of feature-based time-series classification and clustering more widely accessible. As an example application, catch22 is being used in CompEngine, a self-organizing time-series database for data-driven interdisciplinary collaboration, to assess the similarity of recordings (Fulcher et al. 2019). Unlike the Matlab-based hctsa, catch22 does not require a commercial license to run, computes efficiently, and scales approximately linearly with time-series length in the cases we tested. This makes it straightforwardly applicable to much longer time series than are typically considered in the time-series classification literature; e.g., for a 10,000-sample time series, catch22 computes in 0.5 s. As well as being suitable for long recordings, feature-based representations do not require all time series to be the same length (unlike conventional shape-based classifiers), opening up the feature-based approach to new types of datasets – and indeed new types of analyses. Even though catch22 was selected here based on classification performance, the availability and ease of computing catch22 opens up applications to areas including feature-based time-series modeling, forecasting, anomaly detection, motif discovery, and others. To facilitate its adoption, we provide an efficient C-implementation of catch22, with wrappers for Matlab, Python, and R.
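As a usage illustration, here is a minimal Python sketch; we assume the repository's Python wrapper is importable as catch22 and exposes a catch22_all function returning feature names and values, as in its examples (the exact interface should be checked against the repository README):

import numpy as np
import catch22  # Python wrapper built from https://github.com/chlubba/catch22

ts = np.random.randn(10000).tolist()   # any univariate time series, as a list
res = catch22.catch22_all(ts)          # assumed to return {'names': [...], 'values': [...]}
for name, value in zip(res['names'], res['values']):
    print(name, value)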
We have shown that the most useful representation of a time series varies widely across datasets, with some problems better suited to feature-based classification, and others better suited to shape-based classification. The 22 features selected here are tailored to the properties of the UEA/UCR datasets (which are typically short and phase-aligned), but the method we present here is general and could be used to generate reduced feature sets tailored to any application domain of interest that allows individual features to be assigned performance scores. For example, given a different set of classification datasets where key class differences are the result of subtle variations in dynamical properties in long streams of time-series data, we would obtain a canonical set that might include features of long-range automutual information or measures from the nonlinear time-series analysis literature: very different features to the relatively simple measures contained in catch22. As new time-series datasets are added to the UEA/UCR repository that better capture the diversity of time-series data studied across industry and science, our feature reduction method could be rerun to extract new canonical feature sets that reflect the types of time-series properties that are important to measure in the new data. Note that hybrid methods such as COTE (Bagnall et al. 2016), which are not limited to a single time-series representation but can adapt to the problem at hand, consistently outperform both the existing shape-based classifiers and our features, at the price of a much higher computational effort. Given its computational efficiency, catch22 could be incorporated straightforwardly into these ensemble-based frameworks of multiple representations. Here we excluded features that are sensitive to the location and spread of the data distribution, to ensure a fair comparison to shape-based methods which use normalized data; but for many real-world applications these could be highly relevant and should therefore be retained to allow improvements in classification accuracy. Our selection pipeline is agnostic to the collection of classification tasks used and can in principle be generalized beyond classification to other types of time-series analysis. Here we score the performance of each feature on a given task as the classification accuracy, but this metric could be adapted to allow application to regression problems (correlation of a feature with the exogenous target variable), forecasting problems (prediction error), and clustering problems (separation of known clusters). The proposed method has the advantage of identifying individually informative estimators and transparently grouping features into similarly performing clusters for enhanced interpretability. Still, other approaches for selecting feature subsets exist, such as sequential forward selection or LASSO, and it would be interesting to compare the results of alternative pipelines building on these existing selection techniques with our results in future work.
In conclusion, here we present catch22, a concise, accessible feature-based summary of an interdisciplinary time-series analysis literature for use in time-series classification tasks. We hope that the ability to readily leverage feature-based representations of time series—and to generate new reduced feature sets tailored to specific domain problems—will aid diverse applications involving time series.
Information sharing statement
The C-implementation of our canonical features, along with wrappers for R, Python and Matlab, can be accessed in the GitHub repository
https://github.com/chlubba/catch22.
The selection pipeline is accessible on https://github.com/chlubba/op_importance.
Footnote 1: With the notable exception of four unnormalized datasets: 'AALTDChallenge', 'ElectricDeviceOn', 'ECGMeditation', 'HeartbeatBIDMC'.
Bagnall A, Davis LM, Hills J, Lines J (2012) Transformation based ensembles for time series classification. In: Proceedings of the 2012 SIAM international conference on data mining, pp 307–318. ISBN 978-1-61197-232-0
Bagnall A, Lines J, Hills J, Bostrom A (2016) Time-series classification with COTE: the collective of transformation-based ensembles. In: 2016 IEEE 32nd international conference on data engineering, ICDE, vol 27, no 9, pp 1548–1549, 2016. ISSN 10414347. https://doi.org/10.1109/ICDE.2016.7498418
Bagnall A, Lines J, Bostrom A, Large J, Keogh E (2017) The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances. Data Min Knowl Discov 31(3):606–660. ISSN 1573756X. https://doi.org/10.1007/s10618-016-0483-9
Bagnall A, Lines J, Vickers W, Keogh E The UEA & UCR time series classification repository. http://www.timeseriesclassification.com/
Bandara K, Bergmeir C, Smyl S (2017) Forecasting across time series databases using long short-term memory networks on groups of similar series: a clustering approach. arXiv. ISSN 13578170. https://doi.org/10.1002/pdi.718. http://arxiv.org/abs/1710.03222
Berndt D, Clifford J (1994) Using dynamic time warping to find patterns in time series. In: Workshop on knowledge discovery in databases, vol 398, pp 359–370. ISBN 0-929280-73-3
Biason A, Pielli C, Rossi M, Zanella A, Zordan D, Kelly M, Zorzi M (2017) EC-CENTRIC: an energy- and context-centric perspective on IoT systems and protocol design. IEEE Access 5:6894–6908. ISSN 21693536. https://doi.org/10.1109/ACCESS.2017.2692522
Dau HA, Bagnall A, Kamgar K, Yeh CM, Zhu Y (2018) UCR time series archive 2018. arXiv
Faloutsos C, Ranganathan M, Manolopoulos Y (1994) Fast subsequence matching in time-series databases. In: SIGMOD '94 proceedings of the 1994 ACM SIGMOD international conference on management of data, pp 419–429
Fisher RA (1925) Statistical methods for research workers. ISBN 978-1614271666. 52, 281–302
Fulcher BD (2017) 1000 empirical time series
Fulcher BD (2018) Feature-based time-series analysis. In: Dong G, Liu H (eds) Feature engineering for machine learning and data analytics, chap 4, pp 87–116. CRC Press
Fulcher BD, Jones NS (2014) Highly comparative feature-based time-series classification. IEEE Trans Knowl Data Eng 26(12):3026–3037. ISSN 10414347. https://doi.org/10.1109/TKDE.2014.2316504
Fulcher BD, Jones NS (2017) hctsa: a computational framework for automated time-series phenotyping using massive feature extraction. Cell Syst 5(5):527–531. ISSN 24054720. https://doi.org/10.1016/j.cels.2017.10.001
Fulcher BD, Little MA, Jones NS (2013) Highly comparative time-series analysis: the empirical structure of time series and their methods. J R Soc Interface 10(83):20130048. ISSN 1742-5662. https://doi.org/10.1098/rsif.2013.0048
Fulcher BD, Lubba CH, Sethi S, Jones NS (2019) CompEngine: a self-organizing, living library of time-series data (in submission)
Holm S (1979) A simple sequentially rejective multiple test procedure. Scand J Stat 6:65–70. ISSN 03036898. https://doi.org/10.2307/4615733
Hyndman RJ, Wang E, Laptev N (2016) Large-scale unusual time series detection. In: Proceedings—15th IEEE international conference on data mining workshop, ICDMW 2015, pp 1616–1619. ISSN 2375-9259. https://doi.org/10.1109/ICDMW.2015.104
Hyndman RJ, Wang E, Kang Y, Talagala T, Taieb SB (2019) tsfeatures: time series feature extraction. https://github.com/robjhyndman/tsfeatures
Lines J, Bagnall A (2015) Time series classification with ensembles of elastic distance measures. Data Min Knowl Discov 29(3):565–592. ISSN 13845810. https://doi.org/10.1007/s10618-014-0361-2
Mietus JE (2002) The pNNx files: re-examining a widely used heart rate variability measure. Heart 88(4):378–380. ISSN 00070769. https://doi.org/10.1136/heart.88.4.378
Moon Y-S, Whang K-Y, Loh W-K (2001) Duality-based subsequence matching in time-series databases. In: Proceedings 17th international conference on data engineering, pp 263–272. ISSN 1063-6382. https://doi.org/10.1109/ICDE.2001.914837
Mörchen F (2003) Time series feature extraction for data mining using DWT and DFT. Technical Report, 33
Nanopoulos A, Alcock RJ, Manolopoulos Y (2001) Feature-based classification of time-series data. Int J Comput Res 10(3):
Rakthanmanon T, Keogh E (2013) Fast shapelets: a scalable algorithm for discovering time series shapelets. In: Proceedings of the 2013 SIAM international conference on data mining, pp 668–676. ISSN 1063-4266. https://doi.org/10.1137/1.9781611972832.74. https://doi.org/10.1137/1.9781611972832.74
Schäfer P (2015) The BOSS is concerned with time series classification in the presence of noise. Data Min Knowl Discov 29(6):1505–1530. ISSN 13845810. https://doi.org/10.1007/s10618-014-0377-7
Sethi SS, Zerbi V, Wenderoth N, Fornito A, Fulcher BD (2017) Structural connectome topology relates to regional BOLD signal dynamics in the mouse brain. Chaos 27(4). ISSN 10541500. https://doi.org/10.1063/1.4979281
Shekar AK, Pappik M, Iglesias Sánchez P, Müller E (2018) Selection of relevant and non-redundant multivariate ordinal patterns for time series classification. In: Larisa S, Joaquin V, George P, Michelangelo C (eds) Discovery science. Springer International Publishing, Cham, pp 224–240 (ISBN 978-3-030-01771-2)
Timmer J, Gantert C, Deuschl G, Honerkamp J (1993) Characteristics of hand tremor time series. Biol Cybern 70(1):75–80. ISSN 03401200. https://doi.org/10.1007/BF00202568
Vlachos M, Kollios G, Gunopulos D (2002) Discovering similar multidimensional trajectories. In: Data mining and knowledge discovery, p 673. ISBN 978-3-319-23519-6. https://doi.org/10.1007/978-3-319-23519-6_1401-2
Wang X, Smith K, Hyndman R (2006) Characteristic-based clustering for time series data. Data Min Knowl Discov 13(3):335–364. ISSN 13845810. https://doi.org/10.1007/s10618-005-0039-x
Wang X, Wirth A, Wang L (2007) Structure-based statistical features and multivariate time series clustering. In: Proceedings—IEEE international conference on data mining, ICDM, pp 351–360. ISSN 15504786. https://doi.org/10.1109/ICDM.2007.103
Whitney AW (1971) A direct method of nonparametric measurement selection. IEEE Trans Comput 20(September):1100–1103
Williams J (2014) Clustering household electricity use profiles. In: MLSDA '13 Proceedings of workshop on machine learning for sensory data analysis (December 2013), pp 19–26. https://doi.org/10.1145/2542652.2542656
Ye L, Keogh E (2009) Time series shapelets. In: Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining—KDD '09, p 947. https://doi.org/10.1145/1557019.1557122
CL thanks Engineering and Physical Sciences Research Council (EPSRC) Grant EP/L016737/1 and Galvani Bioelectronics. SSS is supported by the Natural Environment Research Council through the Science and Solutions for a Changing Planet DTP. BDF is supported by the National Health and Medical Research Council (NHMRC) Grant, 1089718. NJ thanks EPSRC Grants EP/N014529/1 and EP/K503733/1. We further thank the authors behind the UEA/UCR time series classification repository for the valuable data that made this project possible. Funding was provided by Natural Environment Research Council.
Ben D. Fulcher and Nick S. Jones have contributed equally to this study.
Department of Bioengineering, Imperial College London, South Kensington, London, SW7 2AZ, UK
Carl H. Lubba & Simon R. Schultz
Department of Mathematics, Imperial College London, South Kensington, London, SW7 2AZ, UK
Sarab S. Sethi, Philip Knaute & Nick S. Jones
School of Physics, Faculty of Science, The University of Sydney, Camperdown, NSW, 2006, Australia
Ben D. Fulcher
Carl H. Lubba
Sarab S. Sethi
Philip Knaute
Simon R. Schultz
Nick S. Jones
Correspondence to Ben D. Fulcher or Nick S. Jones.
Responsible editor: Eamonn Keogh.
Manually replaced features
Table 2 lists the best five features of each cluster in which a feature has been manually exchanged. The full list of 710 features grouped by cluster can be accessed as an "Appendix" file.
Table 2 The five best features of each cluster with manual replacement
Insignificant features
The features listed in Table 3 were found to exhibit a classification performance across tasks consistent with a random-number generator.
Table 3 The 145 features listed here exhibited classification performance consistent with a random-number generator
Time series for computation time evaluation
A selection of 40 time series was obtained from the dataset '1000 Empirical Time series' (Fulcher 2017) (Table 4).
Table 4 40 empirical time series selected for evaluating the computation times of features
Performance comparison with tsfeatures
The list of the 16 default features from tsfeatures that we used for a performance comparison with catch22 are in Table 5.
Table 5 The 16 features of tsfeatures we used for classification
Lubba, C.H., Sethi, S.S., Knaute, P. et al. catch22: CAnonical Time-series CHaracteristics. Data Min Knowl Disc 33, 1821–1852 (2019). https://doi.org/10.1007/s10618-019-00647-x
March 2020, 40(3): 1737-1755. doi: 10.3934/dcds.2020091
Large time behavior of solution to quasilinear chemotaxis system with logistic source
Jie Zhao
College of Mathematics and Information, China West Normal University, Nanchong 637009, China
Received May 2019 Revised October 2019 Published December 2019
This paper deals with the quasilinear parabolic-elliptic chemotaxis system
$$ \left\{ \begin{array}{ll} u_{t} = \nabla\cdot(D(u)\nabla u)-\nabla\cdot(\chi u \nabla v)+\mu u-\mu u^{r}, & x\in\Omega,\; t>0, \\ \tau v_{t} = \Delta v-v+u, & x\in\Omega,\; t>0, \end{array} \right. $$
under homogeneous Neumann boundary conditions in a bounded domain $\Omega\subset\mathbb{R}^{n}$ with smooth boundary, where $\tau\in\{0, 1\}$, $\chi>0$, $\mu>0$, $r\geq 2$, and $D(u)$ is supposed to satisfy $D(u)\geq (u+1)^{\alpha}$ with $\alpha>0$. It is shown that when $\mu>\frac{\chi^{2}}{16}$, the solution to the system exponentially converges to the constant stationary solution $(1, 1)$.
Keywords: Chemotaxis, asymptotic behavior, logistic source.
Mathematics Subject Classification: 92C17, 35B40, 35K57.
Citation: Jie Zhao. Large time behavior of solution to quasilinear chemotaxis system with logistic source. Discrete & Continuous Dynamical Systems, 2020, 40 (3) : 1737-1755. doi: 10.3934/dcds.2020091
Alea iacta est
Isaac B. Manfred always dreamed about being a terribly rich man. Recently, he started to study dice games. He found several of them similar to a trademarked game called Yahtzee! The rules sometimes vary but basic principles are the same. To give you an idea, we will describe a simplified version of such rules.
The game consists of rounds. In each round, a player rolls five dice. After the first roll, it is possible to keep some of the dice and re-roll the rest of them. Any number of dice can be re-rolled (including none or all of them). If the re-rolled dice still do not fit the player's intentions, it is possible to re-roll some of them again, for the third and final time. After at most two such re-rolls, the player must assign the result to one of the possible combinations and the round is scored according to that combination.
Figure 1: The list of combinations, conditions that must be satisfied to use them, and the number of points scored when the combination is used.
Ones: at least one 1; scores one point for each 1.
Twos: at least one 2; scores two points for each 2.
Threes: at least one 3; scores three points for each 3.
Fours: at least one 4; scores four points for each 4.
Fives: at least one 5; scores five points for each 5.
Sixes: at least one 6; scores six points for each 6.
Straight: (1 2 3 4 5) or (2 3 4 5 6); scores thirty points.
Full house: three dice of the same value and a pair of another value; scores the sum of all dice values.
Four of a kind: four dice of the same value, the fifth one different; scores the sum of all dice values.
Five of a kind: all five dice of the same value; scores fifty points.
Chance: any five values; scores the sum of all dice values.
A small example: The player rolls 2, 3, 6, 6, 5. The two 6's are kept and the three remaining dice re-rolled, they give new values: 1, 1, 6. The player may now choose to score 20 points immediately for a Full House. Instead, he or she decides to re-roll the two 1's again, in hope there will be another 6. The dice give 4 and 5 and the player will score either 18 points for Sixes or 27 points for Chance.
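To make the scoring concrete, here is a minimal C++ sketch of a few of the scoring rules (the function names and structure are ours, not part of the problem statement). Run on the dice from the example above, it yields 20 for Full house on (1, 1, 6, 6, 6), and 18 for Sixes or 27 for Chance on (4, 5, 6, 6, 6).

#include <array>
#include <cstdio>
#include <numeric>

// count[v] = how many of the five dice show value v (v = 1..6)
std::array<int, 7> countValues(const std::array<int, 5> &dice) {
    std::array<int, 7> count{};
    for (int d : dice) count[d]++;
    return count;
}

// Ones..Sixes: requires at least one die showing v, scores v points per such die
int scoreNumber(const std::array<int, 5> &dice, int v) {
    int n = countValues(dice)[v];
    return n >= 1 ? n * v : 0;   // 0 means the combination is not satisfied
}

// Full house: three dice of one value and a pair of another value, scores the sum of all dice
int scoreFullHouse(const std::array<int, 5> &dice) {
    auto c = countValues(dice);
    bool three = false, pair = false;
    for (int v = 1; v <= 6; ++v) {
        if (c[v] == 3) three = true;
        else if (c[v] == 2) pair = true;
    }
    return (three && pair) ? std::accumulate(dice.begin(), dice.end(), 0) : 0;
}

// Chance: any values, scores the sum of all dice
int scoreChance(const std::array<int, 5> &dice) {
    return std::accumulate(dice.begin(), dice.end(), 0);
}

int main() {
    std::array<int, 5> afterFirstReroll{1, 1, 6, 6, 6};
    std::array<int, 5> afterSecondReroll{4, 5, 6, 6, 6};
    std::printf("%d %d %d\n",
                scoreFullHouse(afterFirstReroll),    // 20
                scoreNumber(afterSecondReroll, 6),   // 18
                scoreChance(afterSecondReroll));     // 27
}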
The main point of the game is that there are eleven combinations and eleven rounds. During the whole game, each combination must be used exactly once. It may happen that some result would not fit into any available combination. In such a case, the player must select some combination anyway, scoring zero points for that round and losing the possibility to use that combination later. These rules make the game very tricky, especially at the end, when the combinations have been almost exhausted.
Now, we get back to Isaac. He found a casino with an electronic version of this dice game. After carefully watching many games of other players, he was able to crack the random-number generator used in the machine. Therefore, he is able to predict the following rolls exactly. What an opportunity! However, it is still not easy to find the optimal strategy. If you write a program that would help him to become rich, he may share some of his money with you.
The input contains several scenarios (at most 12), each of them specified on a single line. The line contains three numbers separated by a space: $A$, $C$, and $X_0$. These numbers describe the random-number generator: $A$ is called a multiplier $(1\leq A\leq 2^{31})$, $C$ is an increment $(0\leq C\leq 2^{31})$, and $X_0$ is the initial seed $(0\leq X_0\leq 2^{31})$. The last scenario is followed by a line containing three zeros.
The generator is a linear congruential generator, which means that the next random number is calculated from the previous one using the following formula:
\[ X_{n+1} = (A\cdot X_ n + C) \bmod 2^{32} \]
The modulo operation specifies that only the lowest 32 bits of the result are used, the rest is discarded. Numbers $X_1, X_2, X_3, \ldots $ constitute a pseudo-random sequence, each of them determines the result of one individual roll of a dice. With congruential generators, the "randomness" of the numbers is in their higher bits only – therefore, to get a result of the $n$-th roll (starting with $n = 1$), we discard lower 16 bits of the number $X_ n$ and compute the remainder when the number in bits 16–31 is divided by six. This gives a number between 0 and 5, by adding one, we get a number shown on a dice:
\[ roll(n) = (\lfloor X_ n/2^{16}\rfloor \bmod 6) + 1 \]
For example, when $A = 69069, C = 5,$ and the initial seed $X_0$ is zero, we get the following sequence of "random" rolls: $1, 6, 6, 3, 2, 4, 3, 2, 3, 5, 1, 6, 6, 4, 5, 1, 3, 4, 1, \ldots $.
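The generator is straightforward to implement. The following C++ sketch (the class and identifier names are ours) reproduces the example sequence above for $A = 69069$, $C = 5$, $X_0 = 0$:

#include <cstdint>
#include <cstdio>

struct DiceRolls {
    uint32_t a, c, x;   // multiplier A, increment C, current state X_n
    DiceRolls(uint32_t A, uint32_t C, uint32_t X0) : a(A), c(C), x(X0) {}
    int next() {
        // X_{n+1} = (A * X_n + C) mod 2^32; the cast back to uint32_t keeps only the lowest 32 bits
        x = static_cast<uint32_t>(static_cast<uint64_t>(a) * x + c);
        // discard the lower 16 bits, reduce bits 16..31 modulo six, map to 1..6
        return static_cast<int>((x >> 16) % 6) + 1;
    }
};

int main() {
    DiceRolls g(69069, 5, 0);
    for (int i = 0; i < 19; ++i) std::printf("%d ", g.next());
    std::printf("\n");   // prints: 1 6 6 3 2 4 3 2 3 5 1 6 6 4 5 1 3 4 1
}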
For each scenario, print one integer number: the maximal number of points that may be scored in a game determined by the given generator. The score is calculated after 11 rounds as the sum of scores in all combinations.
Sample input (the last scenario is followed by the terminating line of three zeros):
1664525 1013904223 177
1103515245 12345 67890
0 0 0
Problem ID: alea
CPU Time limit: 4 seconds
Authors: Josef Cibulka, Martin Kačer, and Jan Stoklasa
Source: CTU Open 2008
User Modeling and User-Adapted Interaction
July 2019 , Volume 29, Issue 3, pp 573–618 | Cite as
A methodology for creating and validating psychological stories for conveying and measuring psychological traits
Kirsten A. Smith
Matt Dennis
Judith Masthoff
Nava Tintarev
First Online: 19 March 2019
Personality impacts all areas of our lives; it governs who we are and how we react to life's challenges. Personalized systems that adapt to end users should take into account the user's personality to perform well. Several methodologies (e.g. User-as-Wizard, indirect studies) that use personality adaptation require first for personality to be conveyed to the participant; this has few validated approaches. Furthermore, measuring personality is often time consuming, prone to response bias (e.g. using questionnaires) or data intensive (e.g. using behaviour or text mining). This paper presents a methodology for creating and validating stories to convey psychological traits and for using such stories with a personality slider scale to measure these traits. We present the validation of the scale and evaluate its reliability. To evidence the validity of the methodology, we outline studies where the stories and scale have been effectively applied (in recommender systems, intelligent tutoring systems, and persuasive systems).
Keywords: Empirical methodology, Personality, Personality measurement, Research tools
Personality—"a person's nature or disposition; the qualities that give one's character individuality"—is a key area of research in user modelling and user adaptive systems. One of the most popular ways to describe and measure personality is trait theory—where a person is assessed against one or more factors (e.g. 'Conscientiousness' or 'Agreeableness'). These measurable differences in how people interact with the world are prime targets for providing users with an appropriately tailored user experience. However, to facilitate these tailored user experiences, researchers first need to discover which aspects of personality are important for adaptation, and how to tailor experience to them.
One approach would be to measure users' personality and ask them to use the system or evaluate its features. However, as noted in Paramythis et al.'s (2010) discussion on layered evaluation, one issue with using a user-based study for an adaptive system is that adaptation takes time, often more than is available during a study. One solution they advocate is an indirect study, where the user model is given to participants and they perform the task on behalf of a third party. This allows researchers to control the characteristics of the imaginary user, avoiding the time delay needed for populating the user model from actual user interactions with the system. An indirect study also ensures that the input to an adaptation layer is perfect, making it very suitable for layered evaluations. Indirect studies may also be required for other reasons—for example, they are needed when it is difficult to recruit a large enough number of target participants, such as in the work by Smith et al. (2016) for skin cancer patients.
Another way to investigate adaptation strategies and discover pertinent personality traits is by using a User-as-wizard approach (Masthoff 2006; Paramythis et al. 2010), which uses human behaviour to inspire the algorithms needed in an adaptive system. In a User-as-Wizard study, participants are given the same information the system would have, and are asked to perform the system's task. Normally, participants will deal with fictional users, which allows us to study multiple participants dealing with the same user, controlling exactly what information participants get.
When using a User-as-Wizard or indirect approach for adaptation to personality research, the simulated user's personality needs to be conveyed. However, there is a paucity of easy, validated ways to convey or represent the personality of a third party to participants. One option is to use real people, allowing participants to interact with a person with the desired trait. However, this is hard to control as it is hard to ensure participants adapt to personality instead of, for example, current affective state. Participants would have to spend considerable time with the individual to perceive their personality. Another option is to ask participants to "imagine a user who is extravert" or provide statements such as "John is neurotic". This approach is unlikely to elicit empathy from participants due to a lack of context about the simulated user and could possibly be overlooked when placed with other data, such as test scores.
This is a non-trivial research problem: how to provide enough information about the personality of a simulated user for participants to identify and empathise with them, without making the simulated user seem one-dimensional and implausible. This paper details a methodology for conveying personality using validated personality stories.
In addition to conveying personality, these stories can be used as part of an alternative method of measuring personality.
Reliable and efficient personality measurement is still largely an open challenge. Whilst validated personality tests exist, completing them may create an overhead that is unacceptable to users: personality tests range from the Five Item Personality Inventory (FIPI test) (Gosling et al. 2003) to the 300-item International Personality Item Pool (IPIP-NEO) (Goldberg et al. 2006). A problem with questionnaires is response bias, in particular, the bias introduced by acquiescence or 'yea-saying'—the tendency of individuals to consistently agree with survey items regardless of their content (Jackson and Messick 1958). This is an issue with many personality trait questionnaires, and was one reason why a new version of the Big Five Inventory (BFI-2) was produced recently (Soto and John 2017). Questionnaires may also be undesirable for reasons described later. Current approaches to unobtrusively measure personality include analysis of blogs (e.g. Nowson and Oberlander 2007; Iacobelli et al. 2011), users' social media content (e.g. Facebook, Twitter) (Gao et al. 2013; Golbeck et al. 2011; Quercia et al. 2011) or social media behaviour (e.g. Amichai-Hamburger and Vinitzky 2010; Ross et al. 2009). These indirect approaches are however still far less reliable than direct approaches.
Using the personality stories as a basis, we propose an alternative and light-weight approach for reliably measuring personality, using so-called personality sliders with the stories at the slider ends, which is faster than completing many personality tests. We describe how identification with the people in personality stories can easily and engagingly be used to measure user personality. Personality sliders provide a broad characterisation of a personality trait, whilst at the same time making it less salient to participants what they are asked about. Personality sliders take about a minute to complete per trait (assuming an average reading speed), so are fast to administer and may save time particularly:
In studies or systems that require a user characteristic for which short questionnaires do not yet exist. Short questionnaires only exist for some personality traits (most noticeably the Five Factor Model), whilst the slider approach can be used for any personality trait as well as other user characteristics. Of course, the personality stories are created from questionnaire items, and using more items increases reading time. However, only one decision/interaction is required per trait (compared to one per item for the questionnaires), reducing cognitive load and decision time.
In studies that require both the measurement of the participants' personality and the portrayal of the personality of fictional people—e.g. looking at the impact of self-similar personality on book recommendations for fictional users. Participants only need to read the stories once, so 1 min suffices to both complete the personality test and portray two fictional users' personality.
In studies or systems that require obtaining personality measurements for multiple people provided by one person. For example, in Moncur et al. (2014), automated messages about babies in intensive care to their parents' social network were adapted to individual receivers' characteristics. This may require a parent to indicate the emotional stability of the people closest to them. Using the personality sliders, participants only have to read the stories once, and then only need to make one decision/interaction per personality trait per person.
Another advantage of using personality sliders is that they reduce response bias. Using the personality story sliders, participants need to judge which person they resemble more, so are not agreeing/disagreeing with individual items, removing bias due to acquiescence. Multi-item surveys also tend to suffer from straight-lining. Straight-lining occurs when participants give identical (or nearly identical) responses to items in a battery of questions using the same response scale (Zhang and Conrad 2014). Requiring only one interaction per trait (as in the sliders) mitigates this. Finally, personality sliders provide a higher granularity of personality, as the sliders provide continuous rather than interval data, whilst most personality tests are restricted to a small number of points. This also means that the data is more appropriate for parametric analysis than traditional Likert data.
To evidence the practical value of our methodology for conveying and measuring personality, we show how the personality stories and personality sliders have been successfully used in many of our studies (see Sect. 6).
Fig. 1: The methodology used in this paper for personality slider development
1.1 Overview of methodology
Our methodology for conveying and measuring personality traits using personality stories (see Fig. 1) consists of the following stages:
Creating short stories about a person to express distinct personality traits (their target trait): we use Resilience, Generalized Self-Efficacy, and those from the Five Factor model.
Iteratively validating the generated stories to ensure that the stories convey their target trait at high and low levels, and are able to robustly portray the desired trait by asking people to fill out a personality questionnaire for the person in the story (different from the questionnaires used for story creation). Issues include both the case where the perceived score for a non-target trait (a personality trait other than the target trait) differs significantly between high and low story, and where the scores for these non-target traits lie outside a normative range. The pilots were conducted in the lab with later studies conducted using crowdsourcing for broader generalizability.
Validating the approach of measuring personality through stories by allowing users to pick which individual they are most like, using a slider. The values of these results were correlated with standardized personality tests for the same traits.
Outline how the slider values can be used to distinguish groups of users with distinct levels of personality traits. Before the sliders could be used in a system, or even applied experimentally to evaluate adaptation, we needed to define how to use the slider values. We summarise the advantages and disadvantages of the respective methods.
Validating the approach in an experiment where personality is likely to affect adaptation (i.e. use the stories in an experiment where you hypothesize that there ought to be an effect of personality). We tested the approach in multiple studies.
1.2 Crowd sourcing participants
We rely heavily on rapid questionnaire responses from a participant pool to iteratively validate personality stories. Where the number of unique participants required was small, we used convenience sampling. However, our participant pool was too small for Five Factor Model validation as many iterations were required (explained in Sect. 4.3). To expand our participant pool, we decided to use the crowd-sourcing service, Amazon Mechanical Turk (MT) (2012).
MT is helpful when requiring large numbers of participants for studies. However, valid concerns exist that data collected online may be of lower quality and requires robust validation methods. Many studies, such as those described by Weinberg et al. (2014) have tried to show the validity of using MT to collect research data. These studies have generally found that the quality of MT data is comparable to what would be collected from supervised lab experiments, if studies are carefully set up, explained, and controlled. We follow recommended best practice in our MT experimental design and procedures.
In our work we have obtained some insights into using crowd-sourcing to gather experimental data. We were initially concerned that crowd-sourced participants (workers) would simply complete questionnaires in a random fashion in order to be paid. However, we found no evidence for this. "Gaming the system" by random scoring did not occur: participants correctly identified the personality trait we were portraying.
MT holds statistics on each worker, including acceptance rate. This is available to all requesters (those setting tasks) representing the percentage of work submitted by a particular worker that was approved (by all requesters). Thus if somebody consistently submits poor work, their acceptance rate drops. As requesters can set a high acceptance rate as a qualification for their tasks, this causes participants to value their acceptance rate, and complete tasks conscientiously. In addition to this, the integrated Cloze Test for English Fluency (Taylor 1953) was used as an attentional check to ensure participants were carefully reading the instructions, and had enough literacy skills to understand the task. We were also able to restrict participation to the United States only, which considerably drops the possibility of spam in the results.
The paper is structured as follows. Section 2 surveys the literature on measuring, conveying and adapting to personality. Section 3 describes the story creation process. Section 4 discusses the process of story validation. In Sect. 5, we test using the stories to measure user personality and outline how these results can be applied to group users by personality trait. Section 6 shows the application of the methodology by summarising many studies that investigated adaptation to personality and used the stories to convey or measure personality. Section 7 concludes the paper, discusses its limitations and provides directions for future work.
2 Related work
In this section, we describe the models of personality used in this paper and the rationale for choosing these, focusing specifically on trait theories and social learning approaches. We summarize the methods for obtaining users' personality traits and then summarize how personality can be portrayed, building on these methods. Finally, we discuss adaptation to personality in recommender systems, persuasive systems, and intelligent tutoring systems. We focus on adaptation to particular personality traits and the acquisition and portrayal of personality in the studies conducted.
Table 1: The five robust dimensions of personality from Fiske (1949) to present (reproduced from Digman 1990)
Fiske (1949): Social adaptivity; Will to achieve; Emotional control; Inquiring intellect
Eysenck (2013): Extraversion; Psychoticism (spanning two dimensions); Neuroticism
Tupes and Christal (1992): Surgency; Agreeableness; Emotionality
Norman (1963): Conscientiousness
Borgatta (1964): Likeability; Task interest
Cattell (1957): Exvia; Cortertia; Superego strength
Guilford (1975): Paranoid disposition; Thinking introversion; Emotional stability
Digman (1988): Friendly compliance
Hogan (1986): Sociability and ambition; Intellectance
Costa and McCrae (1985)
Peabody and Goldberg (1989)
Buss and Plomin (1984)
Tellegen (1985): Positive emotionality; Negative emotionality
Lorr (1986): Interpersonal involvement; Level of socialization
2.1 Models of personality
2.1.1 Personality trait theories
Traits are defined as "an enduring personal characteristic that reveals itself in a particular pattern of behaviour in different situations" (Carlson et al. 2004, p. 583). Over time, trait theorists have tried to identify and categorise these traits (Carlson et al. 2004). The number of traits identified has varied, with competing theories arising. The best known include Eysenck's three factors (Eysenck 2013), Cattell's 16PF (Cattell 1957), and the Five-Factor Model (FFM) (Goldberg 1993). More recently a general consensus towards five main traits (or dimensions) (Digman 1990; McCrae and John 1992) has emerged, shown in Table 1 (reproduced from Digman 1990). Most psychologists consider the FFM robust (Magai and McFadden 1995), and a multi-year study found that individuals' trait levels remained relatively stable (Soldz and Vaillant 1999). The exact names of the traits are still disputed by psychologists (Goldberg 1993; McCrae and John 1992; Digman 1990), however we adopt the common nomenclature from John and Srivastava (1999) and refer to them as:
Extraversion: How talkative, assertive and energetic a person is.
Agreeableness: How good natured, cooperative and trustful a person is.
Conscientiousness: How orderly, responsible and dependable a person is.
Emotional Stability (ES): How calm, non-neurotic and imperturbable a person is.
Openness to Experience: How intellectual, imaginative and independent-minded a person is.
2.1.2 Resilience
The FFM is the core model of personality, as it is considered to be stable (i.e. a person's personality does not change, or changes very slowly). However, people also have traits that vary more quickly, encapsulate several core traits or are more environment/experience–dependent. One example is resilience, which is an often poorly defined term that encapsulates "the ability to bounce back from stress" (Smith et al. 2010, p. 166). Poor resilience is associated with depression (O'Rourke et al. 2010; Southwick and Charney 2012; Hjemdal et al. 2011) and anxiety (Connor and Davidson 2003; Hjemdal et al. 2011). While not as stable as the FFM traits, resilience is a medium-term trait that may be improved by interventions (Smith et al. 2010).
2.1.3 Social learning approaches
The Social Learning approach to personality "embodies the idea that both the consequences of behaviour and an individual's beliefs about those consequences determine personality" (Carlson et al. 2004, p. 593). Whereas trait theorists argue that knowing the stable characteristics of individuals can predict behaviour in certain situations, advocates of the Social Learning approach think that the environment surrounding an individual is more important when predicting behaviours (Carlson et al. 2004). Two popular Social Learning models are Locus of Control (Rotter 1966) (LoC) and (generalized) Self-Efficacy (Bandura 1994) (GSE).
An individual's Locus of Control represents the extent to which a person believes they can control events that affect them (Rotter 1966). A learner with an internal LoC believes that they can control their own fate, e.g. they feel responsible for the grades they achieve. A learner with external LoC believes that their fate is determined by external forces, e.g. they believe that their grade is a result of the difficulty of the exam or their teaching quality. Self-Efficacy is defined as "the belief in one's capabilities to organize and execute the courses of action required to manage prospective situations" (Bandura 1995, p. 2) and determines whether individuals will adapt their behaviour to make changes in their environment, based on an evaluation of their competency (Carlson et al. 2004). It also defines whether an individual will maintain that change in behaviour in the face of adversity; GSE has been shown to be an excellent indicator of motivation (McQuiggan et al. 2008).
2.2 Measuring personality
There are many explicit or implicit approaches for measuring personality. Explicitly, personality traits can be obtained through self-reporting questionnaires, which typically ask users to rate to what extent certain statements apply to them. Multiple versions of such questionnaires exist—for example, the Five-Factor model (FFM) is often used in research, not only because there is broad agreement between psychologists, but because many validated questionnaires exist which measure it, with varying item numbers (e.g. 5 item FIPI (Gosling et al. 2003), 10 item TIPI (Gosling et al. 2003), BFI-10 (Rammstedt and John 2007), 20-item mini-IPIP (Donnellan et al. 2006), 40-item minimarkers (Saucier 1994a), 44-item BFI (John and Srivastava 1999), 50 item IPIP-NEO-50 (Goldberg et al. 2006), 60 item NEO-FFI (McCrae and Costa 2004), 240 item IPIP-PI-R, and 300-item IPIP-NEO Goldberg et al. 2006). Questionnaires for other traits also exist (see Table 2 for questionnaires that have been used for other traits). Advantages of measuring personality from self-reporting questionnaires include the ease of administration, the existence of validated questionnaires for most traits (so, easily extended to other traits), and transparency to users. Disadvantages are that they are often time consuming (leading to problems such as straight-lining (Zhang and Conrad 2014)) and may be inaccurate (either because respondents see themselves differently than they really are, or because they want to portray a certain image to other people).
Personality traits can be measured implicitly using machine learning techniques. Personality can be inferred from user generated content in social media, e.g. Facebook Likes (Kosinski et al. 2014; Youyou et al. 2015), language used (Park et al. 2015; Oberlander and Nowson 2006), Twitter user types (e.g. number of followers) (Quercia et al. 2011), a combination of linguistic and statistical features (e.g. punctuation, emoticons, retweets) (Celli and Rossi 2012), and structural social network properties (Bachrach et al. 2012; Quercia et al. 2012; Lepri et al. 2016). See Farnadi et al. (2016) for a comparative analysis.
Table 2: Examples of existing work on adapting to personality (each entry lists the study, followed by what is adapted, the personality traits used and/or the personality measure)

Persuasive system:
Kaptein et al. (2012, 2015): Susceptibility to Cialdini principles; STPS (Kaptein et al. 2012)
Orji et al. (2014): Gamer types; BrainHex (Nacke et al. 2014)
Smith et al. (2016): Sliders (this paper)
Schiavo et al. (2016): Group participation; BFI-10
de Vries et al. (2016): Change processes; IPIP-NEO
Alkiş and Temizel (2015)
Arteaga et al. (2010): Game choice and messages
Halko and Kientz (2010)
Hirsh et al. (2012): Phone adverts; BFAS (DeYoung et al. 2007)
Lepri et al. (2016): Social strategies
Chen et al. (2015): Travel adverts; FFM (O, ES); tweets; 20 from IPIP-NEO-50
Nov and Arazy (2013): Rating UI; FFM (C); 2 from TIPI
Oyibo et al. (2017): Message type
Anagnostopoulou et al. (2017): IPIP-NEO-50
Nguyen et al. (2018): Feedback, reminders; 60-item Truity LLC (2018)
Ciocarlan et al. (2017): FFM (C, O, ES); Portrayed; Hexad (Tondello et al. 2016); Messages, Tasks

Intelligent tutoring system:
Dennis et al. (2016)
Okpo et al. (2016b, 2017): Exercise selection
Alhathli et al. (2016): FFM (E)
Conati and Maclaren (2009): Educational hints; FFM (C, E, A, ES); Personality test for children (Graziano et al. 1997)
Robison et al. (2010): Feedback type; NEO-PI-R (Costa and McCrae 2008)
Harley et al. (2016): Prompt, Feedback; mini-IPIP
Leontidis et al. (2011): Pedagogical strategy
Santos et al. (2016): Affective recommendations for language learning; FFM, GSE; GSE (Schwarzer and Jerusalem 1995), BFI; GSE, BFI
McQuiggan et al. (2008)
Sarsam and Al-Samarraie (2018): Interface display

Recommender system:
Hu and Pu (2011): Cold-start recommendation
Nov et al. (2013): FFM (E, ES)
Tkalčič et al. (2011)
Tintarev et al. (2013): FFM (O)
Cantador et al. (2013): Cross-domain recommendation
Quijano-Sanchez et al. (2010): Group recommendation; Accommodating, Competing, Collaborating, Compromising, Avoiding; TKI (Thomas 2008)
Kompan and Bieliková (2014): FFM (E, N), Competing, Cooperating; NEO-FFI, TKI
Rawlings and Ciancarelli (1997): Range of items, Popularity of items; FFM (O, E); NEO-PI-R
Ferwerda et al. (2015): Preferred choice for browsing; FFM (O, C, ES)
Appel et al. (2016): Closeness, Curiosity, Adventurous; Social media (Gou et al. 2013)
Nunes (2008)
Braunhofer et al. (2015): FIPI
Odić et al. (2013): Emotion induction (e.g. in group vs alone); FFM (A, E)
Fernández-Tobías et al. (2016): MyPersonality (Kosinski 2012)
Wu and Chen (2015): Implicit, 25 items; Diversity, popularity, and serendipity
Wu et al. (2018)
Alternatively other interaction data can be used, such as measuring personality traits from gaming behaviour. For example, Cowley and Charles (2016) use features that describe game player behaviour based on the temperament theory of personality, Yee et al. (2011) measure personality from player behaviour in World of Warcraft, Wohn and Wash (2013) from spatial customisation in a city simulation game, and Koole et al. (2001) using a common resources dilemma gaming paradigm. Implicit association tests have also been used, measuring reaction times to visual stimuli associated with contrasting personality descriptors (Grumm and von Collani 2007).
Non-verbal data can also be used from speech and video, such as prosody, intonation, gaze behaviour, and gestures. For example, Polzehl (2014) details how speech features can be used. Biel and Gatica-Perez (2013) use features from video blogs such as speaking time, speaking speed, how much the person looks at the camera. Staiano et al. (2011) use speech and gaze attention features from videos of meetings. Rojas et al. (2011) use facial features.
Finally, multi modal personality recognition can also be used; for example Farnadi et al. (2014) used a combination of textual (linguistic and emotional) features extracted from transcripts of video blogs in addition to audio-video features. Similarly, Srivastava (2012) used a combination of non-verbal behaviour and lexical features.
For a more in depth review of automated personality recognition including a summary of existing studies and which personality traits were recognised see Vinciarelli and Mohammadi (2014).
Advantages of measuring personality implicitly are that it can be done unobtrusively (as long as the data used is generated naturally) and tends to have good accuracy. Disadvantages are potential privacy implications (it is important that users provide explicit consent), the need for substantial data for the underlying machine learning algorithms (so it requires time to measure the personality of new users) and the poor availability of existing datasets for other applications. Dunn et al. (2009) investigated ease of use, user satisfaction, and accuracy for three interfaces to obtain personality, one explicit one (NEO PI-R, with 240 questions) and two implicit ones (a game and an implicit association test). They concluded that an explicit way of measuring personality is better for ease of use and satisfaction.
2.3 Portraying personality
Personality can be portrayed in many ways, often inspired by the ways in which it can be measured. Firstly, participants can be shown content generated by someone with the personality trait we want to portray, such as a blog post, audio recording, or video. This is hard to do well, as it is difficult to avoid conveying information beyond personality. For example, facial expressions (as may be present in video recordings), speech (as present in video and audio recordings), and linguistic content (as present in text and speech) provide superfluous information about affective state (Zeng et al. 2009). Video, audio and text often also implicitly provide information about the person's ethnicity/region of origin, age, gender, and opinions (Rao and Yarowsky 2010). Additionally, it requires finding those with exactly the personality trait required, and obtaining their permission for using content they generate for this purpose.
Secondly, participants can be shown such content, but rather than using a person with a desired personality trait, the trait is portrayed by an actor, researcher or automatically generated based on what we know influences the measurement of certain personality traits. This provides more control, as an actor can be instructed to depict only one trait at the extreme, and to try to be neutral on other variables, such as affective state. Social Psychology and Medical Education commonly use actors to depict personality traits. For example, Kulik (1983) used actors to portray extraversion (actor smiled, spoke rapidly and loudly, discussed drama, reunions with friends, lively parties) and introversion (actor spoke more hesitantly, talked about his law major, lack of spare time, interest in Jazz). Barrows (1987) describes simulated/standardized patients as presenting the gestalt of the patient being simulated including their personality. The problem remains that actors also provide information about gender, age, ethnicity. Additionally, hiring good actors may be costly.
Portraying personality is also widely investigated in the Affective Computing community, particularly by virtual agents (Calvo et al. 2015). For example, Doce et al. (2010) convey the personality of game characters by the nature and strengths of emotions a character portrays, and their tendency to act in a certain manner. However, this is still difficult to do well, and again it is hard to do it in a way that only a personality trait is expressed and nothing more.
Thirdly, a person can be described explicitly by mentioning the personality trait (e.g. "John is very conscientious") or how the person behaves or would behave in certain circumstances (e.g. "John tends to get his work done very rapidly"). For example, Luchins (1958) produced short stories to portray extraversion and introversion. These contained sentences such as "he stopped to chat with a school friend who was just coming out of the store" and "[he] waited quietly till the counterman caught his eye". Using a single sentence with just the personality trait is easy to do, but it may not provide participants with a strong enough perception of the trait and it can easily be overlooked. Using a story solves this, but the story may not convey the intended trait.
In all of these cases, it is important that the portrayal of a personality trait is validated as accurately creating the impression of personality intended, and not producing additional impressions (of an unintended personality trait or attribute such as intelligence, etc). For example, Luchins (1958) actually found that participants associated many other characteristics (such as friendliness) based on his stories. Kulik (1983) found that prior conceptions about the actors influenced people's opinions.
2.4 Adapting to personality
There is growing interest in personalization to personality, as seen from the UMUAI 2016 special issue on "Personality in Personalized Systems" (Tkalčič et al. 2016) and the "Emotions and Personality in Personalized Systems" (EMPIRE) workshops. Research on personalization to personality has focused mainly in three domains: Persuasive Technology, Intelligent Tutoring Systems, and Recommender Systems. Table 2 presents a non-exhaustive list of such research.
As shown in Table 2, research on personality in Persuasive Systems has mainly focused on adapting messages (motivational messages, prompts, adverts, reminders) and selecting persuasive strategies. Adaptation tends to use the Five Factor Model, though there has also been work on adapting to susceptibility to persuasion principles and gamer types. All papers cited use self-reporting questionnaires.
Research on personality in Intelligent Tutoring Systems has mainly focused on adapting feedback/emotional support, navigation (exercise and material selection) and hints/prompts. The Five Factor Model tends to be the basis for personality adaptation, though generalized self-efficacy (GSE) is also used. To assess personality, all papers cited used self-reporting questionnaires, except for Dennis et al. (2016), Okpo et al. (2016b) and Alhathli et al. (2016) who used indirect experiments in which participants made choices for a fictitious learner with a given personality.
Table 3: Self-report questionnaire for Generalized Self Efficacy (Schwarzer and Jerusalem 1995)
I can always manage to solve difficult problems if I try hard enough
If someone opposes me, I can find the means and ways to get what I want
It is easy for me to stick to my aims and accomplish my goals
I am confident that I could deal efficiently with unexpected events
Thanks to my resourcefulness, I know how to handle unforeseen situations
I can solve most problems if I invest the necessary effort
I can remain calm when facing difficulties because I can rely on my coping abilities
When I am confronted with a problem, I can usually find several solutions
If I am in trouble, I can usually think of a solution
I can usually handle whatever comes my way
Scoring: 1 = Not at all true, 2 = Hardly true, 3 = Moderately true, 4 = Exactly true
Research on personality in Recommender Systems (see also Tkalčič and Chen 2015) has broadly considered the following topics: improving recommendation accuracy (Wu and Chen 2015), boot-strapping preferences for new users (Hu and Pu 2011; Tkalčič et al. 2011; Fernández-Tobías et al. 2016), the impact of personality on users' preferences on recommendation diversity (Tintarev et al. 2013; Chen et al. 2016; Nguyen et al. 2017), cross-domain recommendation (Cantador et al. 2013), and group recommender systems (Kompan and Bieliková 2014; Quijano-Sanchez et al. 2010; Rawlings and Ciancarelli 1997). Adaptation in recommender systems aimed at individuals tends to use the FFM. However, for group recommender systems other personality traits have been used (see also Masthoff 2015) such as cooperativeness. To assess personality all papers cited used self-reporting questionnaires, except Appel et al. (2016) who extracted personality from social media usage.
3 Creation of stories to express personality traits
This section describes the creation process of personality stories to express GSE, Resilience and the Five-Factor Model traits. These stories will be validated and amended in the next section. Male names were used for all stories to keep gender constant. If "gender neutral" names had been used, then participants' interpretation of the learner's sex may have caused an unwanted interaction effect on the validation.
3.1 Stories for generalized self-efficacy
The self-report questionnaire for Generalized Self Efficacy (Schwarzer and Jerusalem 1995) was used as a starting point, shown in Table 3. Each questionnaire item is a positively weighted value. The overall score for GSE is the sum of each scale item, with a high score (max 40) indicating high GSE.
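As a concrete illustration of this scoring scheme only (a sketch of the arithmetic, not of any software described in this paper), the overall score is simply the sum of the ten item responses on the 1-4 scale:

#include <array>
#include <cstdio>
#include <numeric>

// Each of the 10 GSE items is answered on the scale 1 = "Not at all true" .. 4 = "Exactly true".
// The overall GSE score is the sum of the item responses, ranging from 10 (lowest) to 40 (highest).
int gseScore(const std::array<int, 10> &responses) {
    return std::accumulate(responses.begin(), responses.end(), 0);
}

int main() {
    std::array<int, 10> answers{4, 3, 4, 3, 4, 4, 3, 4, 4, 3};   // hypothetical responses
    std::printf("GSE = %d\n", gseScore(answers));                 // prints: GSE = 36
}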
For the high GSE story, a selection of the questionnaire items were used and changed into the third person. For the low GSE story, the valence of the items was inverted. The stories were made more realistic by associating them with a character, a first year learner called "James" (the most popular male name in English in 2010, and therefore suitably generic). The resulting stories are shown in Table 4.
Table 4: Stories used for Generalized Self-Efficacy, high and low
Low GSE: James is a first year student. When he is faced with a difficult task, which requires him to solve a problem which he has not seen before, he tends to panic and give up, believing that he will never solve the problem. He finds it difficult to defend his ideas when someone disagrees with him. He believes that he cannot solve problems by himself. He finds it difficult to stick to his aims when learning. He tends to be quite nervous, and doesn't believe he can pass
High GSE: James is a first year student. When he is faced with a difficult task, which requires him to solve a problem that he has not seen before, he remains calm and believes he can always find a solution to the problem, if he tries hard enough. He believes he can defend his ideas if someone disagrees with him. He believes that he can solve any problem, whatever it is. He finds it easy to stick to his aims when learning. He is laid back about his work and believes that he will pass
3.2 Stories for resilience
For Resilience, questions were used from the Connor-Davidson Resilience scale (Connor and Davidson 2003). These encapsulate 5 factors that contribute to resilience—Positive attitudes to change and strong relationships; Personal competency and tenacity; Spiritual beliefs and superstitions; Instincts and tolerance of negative emotions; and Control. Using questions from each factor, a story was composed for both high and low resilience (see Table 5) that are roughly symmetrical in order and content. The clauses 'David is kind and generous' (for both high and low stories) and 'He is friendly' (in the low story) were added to counter the fact that the low resilience story depicted a fairly negative character.
Table 5: High and low resilience personality stories
Low resilience: David is kind and generous. He is pessimistic and dislikes challenges. He doesn't expect things to get better when times are tough. He gives up easily. He doesn't believe that doing good things brings you good luck and thinks that events are down to chance. He finds it hard to deal with hardships and can't see the positive side of tricky situations. He doesn't feel in control of his life. He is friendly, but has few strong friendships. He is modest of his achievements
High resilience: David is kind and generous. He is optimistic and likes challenges. He believes that when things go badly, they will always get better and he will come out stronger; whenever he fails, he tries harder until he succeeds. He tries to do the right thing because 'what goes around comes around'. He can tough out hardships and make light of them. He feels in control of his life. He has many close friends and is proud of his successes
Table 6: Story construction for low emotional stability using the NEO-IPIP low items
NEO-IPIP Phrases
"Often feel blue." "Dislike myself." "Am often down in the dumps." "Have frequent mood swings." "Panic easily." "Am filled with doubts about things." "Feel threatened easily." "Get stressed out easily." "Fear for the worst." "Worry about things"
Generated story
"Josh often feels sad, and dislikes the way he is. He is often down in the dumps and suffers from frequent mood swings. He is often filled with doubts about things and is easily threatened. He gets stressed out easily, fearing the worst. He panics easily and worries about things"
3.3 Stories for the five factor model
Unlike GSE and Resilience, the Five Factor Personality Trait Model does not describe a single trait. As discussed in Sect. 2.1.1, the five factors (traits) are Extraversion, Agreeableness, Conscientiousness, Emotional Stability and Openness to Experience. Thus, the personality of any individual can be described by five scores, one for each of the factors. This means that stories had to be created for each trait, at both low and high level (totalling 10 stories).
To make the FFM Stories, we used the NEO-IPIP 20-item scales (Gow et al. 2005): combining the phrases into sentences to form a short story, with the addition of a name picked from the most common male names. Unlike the GSE scale, these scales provided both positive and negative items, so the high and low story could be made from the positive and negative items respectively. Table 6 exemplifies how the stories were constructed. Table 7 shows the stories.
Table 7: Preliminary stories expressing each FFM trait at high and low levels
Extraversion (low): Jack has little to say to others, preferring to stay in the background. He would describe his life experiences as somewhat dull. He doesn't like drawing attention to himself, and doesn't talk a lot. He avoids contact with others and is hard to get to know. He retreats from others, finding it difficult to approach them. He keeps people at a distance
Extraversion (high): Jack feels comfortable around people and makes friends easily. He is skilled in handling social situations, and is the life and soul of the party. He knows how to start conversations and easily captivates his audience. He warms up quickly to others, and likes talking to a lot of different people at parties. He doesn't mind being the centre of attention and cheers people up
Agreeableness (low): Charlie has a sharp tongue and cuts others to pieces. He suspects hidden motives in people. He holds grudges and gets back at others. He insults and contradicts people, believing he is better than them. He makes demands on others, and is out for his own personal gain
Agreeableness (high): Charlie has a good word for everyone, believing that they have good intentions. He respects others and accepts people as they are. He makes people feel at ease. He is concerned about others, and trusts what they say. He sympathizes with others' feelings, and treats everyone equally. He is easy to satisfy
Conscientiousness (low): Alexander procrastinates and wastes his time. He finds it difficult to get down to work. He does just enough work to get by and often doesn't see things through, leaving them unfinished. He shirks his duties and messes things up. He doesn't put his mind on the task at hand and needs a push to get started
Conscientiousness (high): Alexander is always prepared. He gets tasks done right away, paying attention to detail. He makes plans and sticks to them and carries them out. He completes tasks successfully, doing things according to a plan. He is exacting in his work; he finishes what he starts
Emotional Stability (high): Josh seldom feels sad and is comfortable with himself. He rarely gets irritated, is not easily bothered by things and he is relaxed most of the time. He is not easily frustrated and seldom gets angry with himself. He remains calm under pressure and rarely loses his composure (the low Emotional Stability story is shown in Table 6)
Openness to Experience (low): Oliver is not interested in abstract ideas, as he has difficulty understanding them. He does not like art, and dislikes going to art galleries. He avoids philosophical discussions. He tends to vote for conservative political candidates. He does not like poetry and rarely looks for a deeper meaning in things. He believes that too much tax money goes to supporting artists. He is not interested in theoretical discussions
Openness to Experience (high): Oliver believes in the importance of art and has a vivid imagination. He tends to vote for liberal political candidates. He likes to carry the conversation to a higher level, enjoying hearing new ideas. He enjoys thinking about things and can express himself beautifully. He enjoys wild flights of fantasy, getting excited by new ideas. He has a rich vocabulary
4 Validation of stories to express personality traits
This section describes the validation process of each story: how each story was checked that it correctly depicted the trait that it was intended to depict (the target trait).
A series of validation studies were performed for the stories constructed to convey Generalised Self-Efficacy, Resilience, and the traits from the FFM (Extraversion, Agreeableness, Conscientiousness, Emotional Stability and Openness to Experience). Each trait had two stories associated with it—one to express the trait at a high level, and one to express the trait at a low level.
For each trait, at least one validation experiment was conducted (the traits from the Five Factor Model required more; this is explained further in Sect. 4.3). Each validation experiment utilized a between-subjects design: participants were shown either the high story or the low story, and then asked to rate the personality of the person depicted in the story using a validated questionnaire for the trait in question.
As outlined in Sect. 3, the stories were originally constructed using an existing personality measurement questionnaire. For validation purposes, a different measurement questionnaire was used for the same trait, as this used different language and terms from the story (preventing participants from simply recognising phrases), made the purpose of the experiment less obvious, and decreased demand characteristics.
For the GSE and FFM stories, we also measured how the stories conveyed other, non-target traits. For GSE, we investigated how the stories conveyed the FFM traits and Locus of Control.7 It has been shown previously (Judge et al. 2002; Hartman and Betz 2007) that GSE interacts with both of these measures; however, measuring them meant that if we found an unexpected interaction we could correct the story. For the FFM stories we checked how the other four non-target FFM traits were conveyed.8 For Resilience, which again used crowd-sourcing, a different approach was taken; this is elaborated on in Sect. 4.2.
4.1 Generalized self-efficacy (GSE) validation
This experiment explored whether the stories correctly conveyed different levels of GSE, and what other personality traits were implied, using a different validated trait assessment questionnaire for GSE (Chen et al. 2001). We also explored how the story depicted the FFM traits (using minimarkers; Saucier 1994a) and Locus of Control (using the questionnaire of Goolkasian 2009). Fifty participants (42% female, 52% male, 6% preferred not to say; 34% aged 18–25, 48% aged 26–40, 14% aged 41–65, 2% aged over 65, 2% preferred not to say), recruited through convenience sampling, answered these questionnaires in a between-subjects design after reading one of the GSE personality stories: 26 viewed the low GSE story and 24 viewed the high GSE story.
Results of t tests for GSE story validation: significant differences between the high and low story were found for GSE\(^{\mathrm{a}}\) (\(p<0.001\)), Conscientiousness\(^{\mathrm{b}}\) (\(p<0.05\)) and Locus of Control\(^{\mathrm{c}}\) (\(p<0.001\)); Extraversion\(^{\mathrm{b}}\), Agreeableness\(^{\mathrm{b}}\), Emotional Stability\(^{\mathrm{b}}\) and Openness\(^{\mathrm{b}}\) did not differ significantly (\(p>0.05\))
\(^{\mathrm{a}}\)Rated from 8 to 40 with 8 lowest
\(^{\mathrm{b}}\)Rated from 1 to 9 with 1 lowest
\(^{\mathrm{c}}\)Rated from 0 to 13 with 0 indicating entirely internal locus and 13 indicating entirely external locus
Table 8 shows the results. t tests9 were run for each of the traits to test whether the high and low GSE stories were rated significantly differently. The difference for GSE was significant at \(t(48)=-13.514\), \(p<0.001\). A point-biserial correlation (\(r(50)=0.89\), \(p<0.001\), \(R^2=0.79\)) showed a strong effect size for the GSE stories.
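To illustrate this kind of analysis, the sketch below shows how an independent-samples t test and a point-biserial effect size can be computed with SciPy; the rating lists and variable names are illustrative placeholders, not the study data.

# A minimal sketch (Python, SciPy) of the story-validation analysis; the ratings are placeholders.
from scipy import stats

low_story_gse = [12, 14, 11, 15, 13, 16]    # GSE ratings (8-40 scale) from participants who read the low story
high_story_gse = [33, 35, 31, 36, 34, 32]   # GSE ratings from participants who read the high story

# Independent-samples t test between the two story conditions
t, p = stats.ttest_ind(low_story_gse, high_story_gse)

# Point-biserial correlation: story condition coded 0 (low) / 1 (high) against the ratings
condition = [0] * len(low_story_gse) + [1] * len(high_story_gse)
ratings = low_story_gse + high_story_gse
r, p_r = stats.pointbiserialr(condition, ratings)

print(f"t = {t:.2f}, p = {p:.4f}; r = {r:.2f}, R^2 = {r * r:.2f}")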
The stories did, however, express some other personality traits and constructs at significantly different levels (Conscientiousness and Locus of Control). This was to be expected, as GSE is not an isolated construct: previous research has discussed possible correlations between GSE and other psychological constructs, including conscientiousness and locus of control (Judge et al. 2002; Hartman and Betz 2007). We therefore judged that these stories were sufficient for further experiments.
4.2 Resilience validation
Similarly to GSE, resilience is expected to correlate with other personality traits. We validated that the high and low stories depicted high and low resilience; no other traits were compared, as it was anticipated that there would be an interaction (e.g. with low emotional stability) and this is not a problem for this measure. 44 participants were recruited through MT (26 female, 17 male, 1 undisclosed, aged 18–65). They were shown either the high or the low story (between-subjects design) and asked to assess the person in the story on the six-item 'Brief Resilience Scale' (Smith et al. 2008). We added six items from another scale to mitigate hypothesis guessing and reduce response bias.
To validate the stories, we performed a between-subjects t test comparing the average resilience rating between the low and high stories. This was significant at \(t(41)=0.29\), \(p<0.001\). The mean resilience rating was 1.75 ± 0.51 SD for the low story and 4.20 ± 0.49 SD for the high story on a 1–5 scale. A point-biserial correlation (\(r(43)=0.93\), \(p<0.001\), \(R^2=0.85\)) showed a strong effect size for the Resilience stories.
4.3 Five factor trait validation
This section is an improved version of previous research reported in Dennis et al. (2012b), with clarifications and an additional effect size analysis.
The pilot story validation questionnaire, for Emotional Stability
4.3.1 First iteration FFM: pilot study
The Emotional Stability stories from the FFM were used for a validation pilot study for the FFM traits, and to determine whether non-target trait mitigation would be required.
The same methodology as in Sect. 4.1 was used. Eight participants (4 female; 5 aged 18–25, 3 aged 26–40), recruited through convenience sampling (4 students and 4 staff at the University of Aberdeen), were presented with one of the stories using a between-subjects design and asked to judge the person depicted on personality. However, as this was a pilot study, instead of the 40-item minimarkers to judge the FFM, we used the 10-item TIPI questionnaire (Gosling et al. 2003) for brevity, shown in Fig. 2. The results are shown in Table 9.
Results of pilot study for ES stories (high and low), as rated using TIPI for the FFM traits
Values could range between 1 and 7. Bold values indicate significant difference between high and low stories. Grey cells indicate the trait the story was designed to convey
The stories did convey Emotional Stability at polarized levels (i.e. the ratings for each story were at opposite ends of the scale for ES). However, there appeared to be a positive correlation with Agreeableness—more emotionally stable people were judged to be more agreeable (nicer) than neurotic ones. This effect could be spurious due to the low number of participants, or due to our decision to use the ten-item TIPI test rather than a more comprehensive test with a higher number of items. For more formal validation, a large number of unique participants is required for reliable data, particularly if adjustments to the stories are required. The second iteration uses a larger set of participants recruited through crowd-sourcing to establish whether the correlation with Agreeableness persists and also attempts to validate the stories for the other FFM traits.
4.3.2 Second iteration: validation of stories for the five factor model
100 participants (10 per story; 67% female) were recruited using MT. In a between-subjects design, each participant was presented with one story about a learner (see Table 7) which attempted to convey a target trait at either a high or low-level. Participants assessed this student's personality using the Mini-Markers scale (Saucier 1994a).
Normative ranges for each of the five traits, arising from the ratings of a liked peer for the minimarkers scale (Saucier 1994b), plus or minus one standard deviation
The rating for the target trait (i.e. the trait that the story was created to express) should be as polarized as possible—the "low" variant of a story aimed for a score as close to 1 as possible, and the "high" story aimed for a score as close to 9 as possible.
The decision for an acceptable value for a non-target trait is rather arbitrary. However, it is possible to derive normative values for each trait from large population samples. As these samples are similar to our own (e.g. English-speaking, USA-based), we decided it was acceptable to use these to characterise people as being either 'high', 'low' or 'neutral' in a trait.
To decide on acceptable values for non-target traits, a "normative range" was defined for each of the five traits based on the average ratings of a liked peer on the minimarkers scales by 329 students from Illinois (Saucier 1994b),10 plus or minus one standard deviation, shown in Table 10.
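A small sketch of this acceptance check is shown below: the normative range is the normative mean plus or minus one standard deviation, and a story's perceived non-target trait score is deemed acceptable if it falls inside that range; the numbers used are placeholders, not the values from Saucier (1994b).

# Hypothetical normative-range check for a non-target trait (placeholder numbers, not Saucier 1994b values).
def in_normative_range(score, norm_mean, norm_sd):
    low, high = norm_mean - norm_sd, norm_mean + norm_sd
    return low <= score <= high

norm_mean, norm_sd = 6.5, 1.2        # placeholder normative mean and SD for a trait (1-9 scale)
perceived_score = 4.8                # perceived non-target trait score for a story
print(in_normative_range(perceived_score, norm_mean, norm_sd))   # False: outside the range, flag for mitigation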
Results Table 11 shows the results for the original stories. For all five pairs of stories, there was a significant difference in the perceived value of the target trait between the high story and the low story. For all but one personality trait (Openness), the perceived target trait values were clearly outside the normative range and in the correct direction. The perceived target trait value for the low Openness story was below the normative range, but the high Openness story was only marginally outside the normative range. Problematically, there were many significant differences between the perceived non-target trait values, and several perceived non-target trait values were also outside the normative range.
Results for FFM stories
Bold items indicate \(p < 0.05\) (t test, Bonferroni corrected) between low/high stories. Grey cells indicate target trait levels. Italics indicate a non-target trait outside the normative range. Underlined target trait scores indicate a score not outside the normative range
4.3.3 Mitigation
The following problems occurred between the pairs of stories during validation:
P1: Perceived trait values on a non-target trait differ significantly
P2: Perceived trait values on a non-target trait are outside the normative range
P3: Perceived target trait values are very close to the normative range
Problems P1 and P2 often appeared together—one (or both) of the perceived values for a non-target trait were outside the normative range and thus significantly different from the other. For example, in the story for low extraversion, the student was perceived to be less agreeable, despite correctly conveying low extraversion and the scores for the remaining non target traits being within the normative range. We hypothesised that the following story modifications could be taken in an attempt to mitigate problems P1 and P2:
S1: Add a statement which implies a semi-neutral stance on the problem trait, e.g. "Jack is quite a nice person" to mitigate low agreeableness.
S2: Remove a statement which may be causing the interaction, e.g. removing "Jack has little to say to others" may increase agreeableness.
S3: Add a statement targeting the problematic non-target trait, taken from its own story, e.g. adding "Jack has a good word for everyone" from the high agreeableness story to increase agreeableness in other stories.
S1 was used because S2 (removing statements from the stories) was undesirable: this may affect the story's expression of the target trait. We did not attempt S3 as it may over-alter the non-target trait score, and introducing another trait into a story may bring that trait's undesirable interactions into the story. For example, the low conscientiousness story also conveys low agreeableness (see Table 16). If we added a statement from the high agreeableness story, this could in turn raise the ES score, as the high agreeableness story also conveyed high ES (further confounding the problem).
Mitigating statements for each non-target FFM trait (statement to add if the trait is rated below the normative range):
Extraversion: "Tends to enjoy talking with people"
Agreeableness: "Quite a nice person"
Conscientiousness: "Tends to do his work"
Emotional Stability: "Tends to be calm"
Openness: "Quite likes exploring new ideas"
Two stories for high Openness to Experience
Modified story: Oliver believes in the importance of art and has a vivid imagination. He tends to vote for liberal political candidates. He enjoys hearing new ideas and thinking about things. He enjoys wild flights of fantasy, getting excited by new ideas
4.3.4 Third iteration: validation with mitigated sentences
As the undesired non-target trait scores occurred most frequently in the low stories, these were targeted first. We constructed slightly positive statements (see Table 12) and added them where necessary. For the 'high' stories, only two non-target traits required modification: Extraversion in the Openness High story, and Emotional Stability in the Extraversion High and Agreeableness High stories. For the Extraversion High story, the score for Emotional Stability was 6.10, and the normative range ends at 6.08. Because this margin was so small, and there was no significant difference between the high and low variants' ES scores, modification was not attempted to avoid more adverse effects. In the case of the high Agreeableness story, the value for ES was 7.28. S1 was employed by adding a mildly negative statement: "He is occasionally a bit anxious". The Openness High story did not convey its target trait convincingly, and thus already required modification. Approach S2 was used in this case, removing statements such as "[he can] express himself beautifully" (see Table 13).
Design The design was the same as Sect. 4.3.2. Seventy participants (10 per adjusted story) were recruited from MT. Each participant saw one story in a between-subjects design.
Results Tables 14 and 15 show the results for the modified stories. S1 was successful in mitigating P1 and P2 in most cases. Exceptions were the Agreeableness stories, where the undesired non-target trait scores remained, with the low story expressing low ES and the high story expressing high ES (P1 and P2). For Conscientiousness, P1 occurred for Openness, despite both values being in the normative range. For low Emotional Stability, S1 was not effective in bringing the perceived trait value into the normative range for Extraversion, with P1 and P2 still extant. S2 was successful in solving P2 for Openness High, bringing the Agreeableness value into the normative range. However, we were not successful in solving P3 for Openness High; the score for the target trait moved further within the normative range.
Effect Size for Modified Stories To explore how strongly the high and low stories differed for each trait, a Point-Biserial correlation was computed between the high and low stories for each trait. There was a strong positive correlation between the story trait level (low or high) and trait score for each trait, showing that the stories depict the traits strongly at the intended levels (see Table 14).
Point-biserial correlations between the high and low story for each trait, with effect size \(R^2\)
Results for corrected FFM stories
Bold items indicate \(p< 0.05\) (t test, Bonferroni corrected) between low/high stories. Grey cells indicate target trait levels. Italics indicate a non-target trait outside the normative range. Underlined target trait scores indicate a score not outside the normal range
\(^\mathrm{a}\)Story not adjusted, previous values used
Validated stories for each FFM trait, high and low
Jack has little to say to others, preferring to stay in the background. He would describe his life experiences as somewhat dull. He doesn't like drawing attention to himself, and doesn't talk a lot. He avoids contact with others and is hard to get to know. He retreats from others, finding it difficult to approach them. He keeps people at a distance. Jack is quite a nice person
Jack feels comfortable around people and makes friends easily. He is skilled in handling social situations, and is the life and soul of the party. He knows how to start conversations and easily captivates his audience. He warms up quickly to others, and likes talking to a lot of different people at parties. He doesn't mind being the centre of attention and cheers people up. Jack can sometimes be insensitive
Charlie has a sharp tongue and cuts others to pieces. He suspects hidden motives in people. He holds grudges and gets back at others. He insults and contradicts people, believing he is better than them. He makes demands on others, and is out for his own personal gain. Charlie tends to be calm and quite likes exploring new ideas
Charlie has a good word for everyone, believing that they have good intentions. He respects others and accepts people as they are. He makes people feel at ease. He is concerned about others, and trusts what they say. He sympathizes with others' feelings, and treats everyone equally. He is easy to satisfy. Charlie tends to be quite anxious
Josh procrastinates and wastes his time. He finds it difficult to get down to work. He does just enough work to get by and often doesn't see things through, leaving them unfinished. He shirks his duties and messes things up. He doesn't put his mind on the task at hand and needs a push to get started. Josh tends to enjoy talking with people
Josh is always prepared. He gets tasks done right away, paying attention to detail. He makes plans and sticks to them and carries them out. He completes tasks successfully, doing things according to a plan. He is exacting in his work; he finishes what he starts. Josh is quite a nice person, tends to enjoy talking with people, and quite likes exploring new ideas
James often feels sad, and dislikes the way he is. He is often down in the dumps and suffers from frequent mood swings. He is often filled with doubts about things and is easily threatened. He gets stressed out easily, fearing the worst. He panics easily and worries about things. James is quite a nice person who tends to enjoy talking with people and tends to do his work
James seldom feels sad and is comfortable with himself. He rarely gets irritated, is not easily bothered by things and he is relaxed most of the time. He is not easily frustrated and seldom gets angry with himself. He remains calm under pressure and rarely loses his composure
Oliver is not interested in abstract ideas, as he has difficulty understanding them. He does not like art, and dislikes going to art galleries. He avoids philosophical discussions. He tends to vote for conservative political candidates. He does not like poetry and rarely looks for a deeper meaning in things. He believes that too much tax money goes to supporting artists. He is not interested in theoretical discussions. Oliver is quite a nice person, and tends to enjoy talking with people
4.3.5 Discussion
The adjusted FFM stories are shown in Table 16. A story expressing a single polarized trait was always going to be difficult to achieve, as the traits within the FFM are intercorrelated (Chamorro-Premuzic 2011). The interaction between Agreeableness and Emotional Stability was too strong to remove entirely. Adding a stronger statement to bring Emotional Stability into the normal range might cause more interactions with the other three non-target traits. In the Conscientiousness and Extraversion stories, the scores for certain non-target traits (O and A, respectively) still differed significantly. However, as these were all in the normal range, we do not see this as a problem. Problem P3 was not solved in the case of high Openness. Openness is a difficult trait to conceptualise, incorporating culture and art as well as political beliefs (Chamorro-Premuzic 2011). The perceived score was high, so it is likely that the story was expressing Openness strongly, just not outside the range we devised.
4.4 Conclusion and limitations
A set of stories for the FFM, GSE and Resilience has been constructed and validated. Not all FFM stories are perfect: modifying them seemed to "dilute" the effect of the target trait, implying a balancing act. Further strategies could be used to remove the remaining interactions; however, it may be that one trait inevitably implies another. We judge that the stories are good enough at expressing the traits for the purpose of investigating adaptation to personality in intelligent systems.
5 Using stories to determine personality
In this section we investigate how to use the stories to measure personality. Participants were given a standardised personality test and asked to rate how close they were to a pair of diametrically opposed personality stories using a sliding scale. A correlational analysis was performed on each trait to show that the sliding scale measured the trait with a strong correlation coefficient. We then conducted a reliability check, where a new sample of participants completed the sliders twice, 1 week apart. The scores between week 0 and week 1 were strongly correlated—thus the sliders could be used to measure personality (though this should not replace a standardised test when high granularity is required).
5.1 Methods
5.1.1 Materials
The validated stories were taken from Tables 4, 5 and 16. Different common Western names were used for each story, gender-matched to the participant. These were formatted so that opposing stories of the same trait were placed at either end of a sliding scale (see Fig. 3). The scale was coloured using a gradient from blue to green (left to right), with markers every 12.5%. The participant could indicate their position on the scale using a drag-and-drop slider. The position of the positive and negative stories was randomised for each participant and for each trait. The slider position gave a value between 18 and 162, emulating a conventional 1–9 scale with greater acuity.
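As a concrete illustration of how such a raw slider reading might be post-processed, the sketch below converts a value in the 18–162 range to its 1–9 equivalent and reflects it when the randomisation placed the high story on the left; the division by 18 and the function name are assumptions for illustration, not the implementation used in the study.

# Hypothetical post-processing of a raw slider reading (18-162) into a 1-9 equivalent score.
def slider_to_score(raw_value, high_story_on_right):
    # Assumes the raw value grows towards the right-hand end of the scale and that
    # the 18-162 range is a uniform refinement of the conventional 1-9 scale.
    if not 18 <= raw_value <= 162:
        raise ValueError("raw slider value outside the expected 18-162 range")
    score = raw_value / 18.0          # 18 maps to 1.0, 162 maps to 9.0
    if not high_story_on_right:       # story positions were randomised per participant and trait
        score = 10.0 - score          # reflect so that a higher score always means 'closer to the high story'
    return score

print(slider_to_score(120, high_story_on_right=True))   # roughly 6.7 on the 1-9 scale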
Validated personality questionnaires were used. For the Five Factor Model, the minimarker test (Saucier 1994a) was used. For resilience, the Brief Resilience Scale was used (Smith et al. 2008). For self-efficacy, the general self-efficacy scale was used (Schwarzer and Jerusalem 1995).
5.1.2 Procedure
Participants completed a personality questionnaire and were then presented with the slider test for each trait of the personality questionnaire they had completed, one at a time (five sliders, one per trait pair, for the Big Five Minimarker questionnaire, and one slider for each other questionnaire).11 Participants were asked to move the slider towards the person they thought they were most like. The slider was initially set at the 50% marker on the scale, and participants had to manipulate the slider before they were allowed to continue, even if they chose to select 50%. Participants were then thanked for their time and invited to view the results of the slider test in the form of a bar graph. Participants were recruited from MT and were paid $0.80 (demographics shown in Table 17).
Screenshot of the slider between opposing trait stories
Participant demographics for the FFM, Self Efficacy and Resilience slider validation studies
5.1.3 Design
Participants completed both the personality questionnaire and the slider test in a within-subjects design. Their score on the personality questionnaire was the independent variable and the value of the slider position (which represents how close to the two trait stories the participant thought they were) was the dependent variable.
Our hypothesis (H1) was: For each trait, there will be a positive correlation between personality score and slider value.
5.2 Results
5.2.1 Five factor model
For each trait, a correlation analysis was run of Trait Score \(\times \) Slider Value. This was significant for each trait (see Table 18). Correlation graphs were plotted for each trait (Fig. 4) and a regression analysis was run; the regression formula for each trait is shown in Table 18. Participants' mean scores on the minimarkers scale (see Table 19) were compared with the minimarkers normal range (see Table 10) to see if the MT participants varied from a normal population. All traits were within the normal range, except Emotional Stability, which was slightly higher. To investigate the effect of other traits on the correlation for each trait, a partial correlation analysis was run to control for the effect of non-target traits. These correlations remained strong (see Table 20).
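The sketch below illustrates one standard way of obtaining such a partial correlation, by correlating the residuals of the trait score and the slider value after regressing each on the non-target slider scores; the DataFrame columns are hypothetical and this residualisation approach is a generic method, not necessarily the exact procedure used here.

# Partial correlation of trait score x slider value, controlling for non-target slider scores,
# computed by residualising both variables on the covariates (generic method; column names hypothetical).
import numpy as np
import pandas as pd
from scipy import stats

def partial_corr(df, x, y, covars):
    Z = np.column_stack([np.ones(len(df))] + [df[c].to_numpy(float) for c in covars])
    def residuals(col):
        v = df[col].to_numpy(float)
        beta, *_ = np.linalg.lstsq(Z, v, rcond=None)   # least-squares fit on the covariates
        return v - Z @ beta
    return stats.pearsonr(residuals(x), residuals(y))  # (r, p) of the residualised variables

# Hypothetical data frame: one row per participant
df = pd.DataFrame({
    "ext_score": [55, 40, 62, 30, 48, 66, 35, 58],
    "ext_slider": [120, 80, 140, 60, 100, 150, 70, 130],
    "agr_slider": [100, 90, 110, 85, 95, 120, 80, 105],
    "con_slider": [90, 70, 130, 65, 100, 140, 75, 120],
})
r, p = partial_corr(df, "ext_score", "ext_slider", ["agr_slider", "con_slider"])
print(f"partial r = {r:.2f}, p = {p:.3f}")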
Pearson's r for the correlation of Trait Score \(\times \) Slider Value for each personality trait, effect size \(R^2\), regression formula and standardized error of the estimate SEE. Regression formulae for the predicted slider value:
Conscientiousness: 2.23 \(\times \) ConScore − 0.22
Extraversion: 2.71 \(\times \) ExtScore − 23.56
Openness to Experience: 1.58 \(\times \) OpExScore + 33.39
Agreeableness: 1.67 \(\times \) AgrScore + 29.48
Emotional Stability: 1.67 \(\times \) EmStScore + 27.16
Resilience: 3.39 \(\times \) ResScore + 43.25
GSE: 3.33 \(\times \) GseScore + 26.54
Correlation of Trait Score \(\times \) Slider Values for the FFM personality traits
5.2.2 Resilience and generalised self efficacy
For each personality test, correlation graphs were plotted (Fig. 5) and a correlation analysis was run of Test Score \(\times \) Slider Value. This was significant for Resilience (\(r(60)=0.58\), \( p< 0.01\)) and GSE (\(r(62)=0.62\), \(p < 0.01\)). The regression formula for each trait is shown in Table 18.
5.3 Reliability check
To test the reliability of the sliders, a reliability check experiment was conducted using all 7 sliders (FFM, GSE and Resilience). Participants recruited through opportunistic sampling completed the sliders and the FFM TIPI test (Gosling et al. 2003) as the first part of a persuasion experiment (reported in Ciocarlan et al. 2019). After 1 week they completed the sliders and TIPI test again (as well as the second part of the persuasion experiment).
Fifty-one participants completed the study (27 female, 23 male, 1 undisclosed; 21 aged 18–25, 23 aged 26–40, 7 aged 40–65). A correlation analysis was run between Slider Values for Week 0 \(\times \) Week 1 for all traits. The results are shown in Table 21. There was a strong correlation for each of the sliders between Week 0 and Week 1 (\(r=0.70\)–0.86, mean \(=0.81\)). There were several other significant weaker correlations—expected correlations between FFM traits and GSE and Resilience (as these traits are known to correlate with FFM traits; see Section 4), and some correlation within FFM traits.
To explore the inter-trait correlations within the FFM traits, a correlational analysis was run for the TIPI test for each FFM trait between Week 0 and Week 1. The results are shown in Table 22. We found a similar pattern of correlation between non-target traits as we found in the sliders, with the TIPI test showing more correlations between non-target traits than the slider test. We can therefore see that the inter-trait correlations are captured by a validated personality test within our sample, and that the sliders show good test-retest reliability for target traits at Week 1.
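As an illustration, the test-retest correlation for each slider can be computed as below; the week-0 and week-1 values are hypothetical placeholders rather than the study data.

# Hypothetical test-retest check: correlate each slider's Week 0 value with its Week 1 value (Pearson's r).
import pandas as pd

week0 = pd.DataFrame({"E": [120, 60, 90, 140, 75], "A": [100, 80, 110, 95, 120], "C": [70, 130, 95, 85, 150]})
week1 = pd.DataFrame({"E": [118, 66, 85, 135, 80], "A": [104, 78, 115, 90, 118], "C": [75, 125, 90, 88, 145]})

retest = {trait: week0[trait].corr(week1[trait]) for trait in week0.columns}
print(retest)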
Additionally, we used the data from Week 0 to repeat our validation experiment for the FFM sliders. A correlational analysis of FFM slider values \(\times \) TIPI test scores showed a significant correlation between each trait's score on the slider test and TIPI test (E: \(r=0.78\), A: \(r=0.62\), C: \(r=0.62\), ES: \(r=0.83\), O: \(r=0.33\); \(p<0.01\) for E, A, C and ES, \(p<0.05\) for O). These are similar to correlations reported in Table 18; O has a weaker correlation and ES has a stronger correlation in this reliability check.
Means of study participants for the minimarkers scale
Partial correlations of each FFM trait on Minimarkers compared with the slider score, controlling for each other trait's score on the non-target sliders
Correlation of Trait Score \(\times \) Slider Value for GSE and Resilience
Pearson's r Correlation of the slider value of each pair of stories: FFM (E, A, C, ES, O), GSE and Resilience, repeated after 1 week
Grey cells indicate the correlation of same trait at week 0 and week 1
\(^{*}p<0.05;\,\,{^{**}}p<0.01\)
Pearson's r Correlation of the FFM TIPI test score (E, A, C, ES, O) at Week 0 and Week 1
5.4 Interpreting slider values
There are several possible strategies in the interpretation of the slider values for use in personality experiments. The slider values form a continuous variable, which can be used in analysis for further studies (e.g. using a regression analysis). Splitting data into distinct groups is often considered undesirable, as it causes the data to lose power (Irwin and McClelland 2003). However, for some studies it may be useful to use the slider values to divide participants into High and Low groups (for example, when you want to offer different content to people with different traits).
When choosing to divide participants into groups, it is important to consider statistical features of the data (e.g. whether the data is normally distributed), as well as the purpose of the study and the limitations of data collection. For non-normal data, the data can be split using the median, tertiles or quartiles. For normal data, groups can be formed using the mean or standard deviation. A further option is to take the highest and lowest scoring participants to form a defined group size (e.g. top 50 and bottom 50), or to use a hybrid method (e.g. the top and bottom 20 participants at least 1 standard deviation from the mean). It is also possible to compute the equivalent score on a standardised test (e.g. the TIPI test) by using the regression formula generated at validation (e.g. in Table 18), and to group by population normative data for that test, when available (e.g. Table 10). The choice should be guided by how much data can be discarded, the importance of groups being distinct from each other, and how many groups are required (i.e. whether a 'neutral' group is needed). This is summarised in Table 23.
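As a concrete sketch of these grouping options, the snippet below shows a median split, a standard-deviation split with a 'normal' middle group, and conversion of slider values to approximate questionnaire scores by inverting the Conscientiousness regression formula from Table 18; the slider values and variable names are illustrative placeholders.

# Illustrative grouping of slider values (18-162); the data and names are placeholders.
import numpy as np

sliders = np.array([40, 95, 120, 60, 150, 88, 110, 72])

# 1. Median split: two groups of roughly equal size, also suitable for non-normal data.
median = np.median(sliders)
median_groups = np.where(sliders >= median, "high", "low")

# 2. Standard-deviation split: distinct high/low groups plus a 'normal' middle group (normal data).
mean, sd = sliders.mean(), sliders.std(ddof=1)
sd_groups = np.where(sliders <= mean - sd, "low",
                     np.where(sliders >= mean + sd, "high", "normal"))

# 3. Approximate questionnaire score by inverting the regression formula from Table 18
#    (Conscientiousness: slider = 2.23 x ConScore - 0.22); grouping can then use test norms.
approx_con_score = (sliders + 0.22) / 2.23

print(median_groups, sd_groups, np.round(approx_con_score, 1), sep="\n")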
5.5 Discussion
This section has demonstrated how to use trait stories to measure personality. For each trait, there is a strong correlation between participants' scores on standardised personality tests and their scores on the slider scale (see Table 18). The effect sizes of the correlations imply that more polarised trait stories (i.e. pairs of stories that are rated as very high and low in the trait) result in a sliding scale that better reflects the personality test. This can be seen in the comparatively low correlation for the Openness to Experience slider in Table 20. This highlights the importance of the story validation stage of development.
It should be noted that, while the sliders may be preferable to full questionnaires for their brevity, they have lower accuracy than many standardised questionnaires. As with any decision about which measure to use in a study, the benefits of using the slider measure (e.g. where high attrition needs to be mitigated by simplifying the questionnaires, or where the intended analysis groups users by trait) should be weighed against its lower accuracy.
Summary of ways to divide Personality Slider data into groups. The table compares splitting methods (e.g. median split, tertile split, quartile split, mean split, standard-deviation split, fixed-size top/bottom groups, hybrid methods, and grouping via the regression formula and population norms) against the following criteria: suitable for non-normal data; suitable for normal data; groups of equal size\(^\mathrm{a}\); distinct high/low groups\(^\mathrm{b}\); a 'normal' group\(^\mathrm{c}\); no data discarded; and groups reflect population norms
\(^\mathrm{a}\)Double size normal group
\(^\mathrm{b}\)Groups are statistically different from each other
\(^\mathrm{c}\)Only possible if high and low thresholds are defined by other research
6 Applying stories and sliders in personality research and beyond
This section provides examples of how the personality stories and sliders, and the method used to produce them, have been used in adaptation research, for adaptation to personality and beyond, demonstrating evidence of the method's usefulness.
6.1 Portraying personality
Personality stories provide an easy way of portraying certain personalities as needed for indirect and user-as-wizard studies. Based on our research (i.e. Sect. 4), using personality stories also ensures (as far as possible) that the participant's impression of the person's personality is in accordance with what the story is intended to express. Personality stories have been used for investigations into adaptation in persuasive technology, intelligent tutoring systems, and recommender systems (see Table 24). In Dennis et al. (2015), an indirect study was run with 68 participants investigating the impact of a skin cancer patient's personality on the perceived suitability of reminder messages (varying in type based on Cialdini's principles; Cialdini 2001) to self-check their skin. Participants were provided with a personality story about a fictional skin cancer patient. They rated the suitability of reminder messages for this patient and selected the best message to use. Results showed a significant difference between participants based on levels of Conscientiousness: those high in Conscientiousness preferred authority messages as the second reminder, whilst those low in Conscientiousness preferred scarcity messages.
Studies using personality stories and sliders to obtain or portray personality. Studies listed include: Dennis et al. (2015), judging reminder persuasiveness; Dennis et al. (2012a, 2013, 2016), providing feedback and emotional support; Smith (2016), providing emotional support; Okpo et al. (2016a, b, 2018), selecting exercise difficulty; Alhathli et al. (2016, 2017), judging learning materials; Smith and Masthoff (2018), judging emotional support messages; and Thomas et al. (2017) and Josekutty Thomas et al. (2017), judging healthy eating messages. Other table entries include selecting an item set in a recommender system (RecSys) and judging reminder persuasiveness for a person with the participant's own personality.
In Dennis et al. (2016), five user-as-wizard studies were run with 1203 participants in total, each investigating the impact of one of the FFM personality traits (as well as performance) on feedback (emotional support and slant) given to a learner. Participants were provided with a personality story about a learner and their performance, and provided feedback. Based on this data, an algorithm was developed that adapted feedback to Conscientiousness and Emotional Stability.
In Dennis et al. (2011), a User-as-Wizard study was run with 19 teachers, investigating the impact of GSE on feedback (slant). Participants were provided with a GSE personality story about a learner and their performance, and produced feedback. There was some evidence of teachers putting a positive spin on feedback for learners with a low GSE.
In Okpo et al. (2017), a User-as-Wizard study was run with 201 participants, investigating the impact of the Self-Esteem personality trait (as well as effort and performance) on exercise selection (difficulty level). Personality stories were constructed for Self-Esteem using the methodology presented in this paper. Participants were provided with either a low or high self-esteem story, the effort put in by the learner and their performance on a previous exercise. Participants selected the difficulty level of the next exercise for the learner to do. Self-esteem had an impact on difficulty level selection.
In Tintarev et al. (2013), a User-as-Wizard study was run with 120 participants, investigating the impact of Openness to Experience on recommendation diversity. Participants were provided with a personality story about a fictional friend as well as some indication of that friend's book preferences, and provided three book recommendations to this friend. There was some evidence that participants took Openness to Experience into account when producing the recommendations.
In Smith et al. (2015) and Smith (2016), two User-as-Wizard studies were run with 61 and 45 participants respectively, investigating whether emotional support messages should be adapted to the recipient's Emotional Stability and Resilience respectively. Participants were provided with a personality story about a carer experiencing a stressful situation, and provided emotional support messages for this carer. Results showed that neurotic carers were provided with a wider range of emotional support. No effect was found of resilience on message selection.
6.2 Obtaining personality
Some studies require participants' personalities in order to analyse the impact of that personality on dependent variables (e.g. participants' preferences, participants' learning, etc). Most of the studies presented in Table 2 are of this type. The personality sliders have been used to obtain participants' personality to investigate adaptation in persuasive systems and intelligent tutoring systems. See Table 24 for example studies.
In Smith and Masthoff (2018), a study was run with 138 participants investigating the impact of personality on their appreciation of emotional support messages for stressful situations. Participants were told about a carer experiencing a stressful situation and rated an emotional support message provided by the carer's friend on how helpful, effective and sensitive they felt it was. Participants' FFM personality traits were obtained using personality sliders. Results showed that personality only had a small impact, with agreeableness and emotional stability warranting further investigation.
In Smith et al. (2016), an indirect study was run with 51 participants investigating the impact of personality on perceived persuasiveness of reminder messages (differing in type based on Cialdini principles Cialdini 2001) to self-check their skin for skin cancer patients. Participants' FFM traits were obtained using the personality sliders. They were told about a skin cancer patient who had the same personality as themselves and rated the suitability of reminder messages for this person. Results showed that personality is important when deciding on the type of persuasion to use in reminder messages.
In Thomas et al. (2017) and Josekutty Thomas et al. (2017), an indirect study was run with 152 participants investigating the impact of personality on the perceived persuasiveness of healthy eating messages differing in type and framing (positive or negative). Using the FFM personality sliders, the participants' personalities were obtained. They rated the perceived persuasiveness of messages for someone with a similar personality as themselves. There was some evidence of conscientiousness impacting persuasiveness.
In Alhathli et al. (2016), an indirect study was run with 50 participants exploring the impact of a learner's extraversion on the selection of learning materials (active vs passive, and social vs individual). Participants' personalities were obtained using the FFM personality sliders and they were told the learner had the same personality as them. They rated learning materials on the extent to which they felt the learner would enjoy them and the extent to which they would increase the learner's skills and confidence. Extraversion was found to impact perceived enjoyment of social learning materials. In Alhathli et al. (2017), a similar study was run with 163 participants where the learning materials reflected learning styles, and participants' learning styles were measured in addition to their personality. No impact of either personality or learning style was found.
Results from these studies showed that the slider results can be used both for correlation analyses and to divide participants into high/low groups on different traits.
6.3 Applying the method beyond personality research
Finally, the method described in this paper for developing validated stories can also be applied to non-personality user or context characteristics. We have successfully applied this in multiple studies—for example, Smith et al. (2014) and Kindness (2014) developed stories that depicted different types of stressors experienced respectively by carers and community first responders. Forbes et al. (2014) developed stories that depicted different attitudes towards usage of transport means. In all of these cases, the stories were used to bootstrap adaptation research.
7 Conclusion
Increasingly, as illustrated in Sect. 2.4, research on adaptive systems is investigating personality as a user characteristic for adaptation. However, to do this effectively, reliable and lightweight ways are needed to express personality (for use in indirect and user-as-wizard studies) and to obtain user-personality. The paper makes two major contributions to this.
Firstly, the paper contributes a methodology for creating and validating stories that reliably express a personality trait. To illustrate the methodology, the paper presented the creation and validation of stories expressing the Five Factor model traits (extraversion, agreeableness, conscientiousness, emotional stability, openness to experience), generalized self-efficacy, and resilience. The usefulness of the personality stories for adaptation research has been shown by the many examples provided of their use for indirect and user-as-wizard studies (see Sect. 6).
Secondly, the paper contributes a lightweight methodology for obtaining user-personality, using the personality stories as part of a self-assessment scale. These personality story scales can be used in studies investigating the impact of a trait, and may also be used by a system to allow it to adapt to this trait. The paper contributes guidelines on how to use such scales. The usefulness of the personality story scales for obtaining study participants' personality has been shown by their usage in adaptation studies (see Sect. 6).
While this paper looks at a small number of personality traits, the methodology can be extended to any user factor for which a validated questionnaire exists. So, as indicated in Sect. 6, this methodology has not only been successfully used to produce additional stories for the personality trait self-esteem, but also to express user attitudes and stressors experienced. The more general methodology is the same as the one we used for personality (see Fig. 1), now using stories to express any characteristic.
There are several limitations and opportunities for future work. Firstly, the personality stories developed in this paper only portray a single trait. Although this enables investigations of the impact of such a trait, e.g. on feedback to a learner, it does not facilitate investigations into interaction effects of multiple traits. To investigate this, stories which express two or more traits at the same time need to be developed.
Secondly, the stories developed in this paper only portrayed personality traits. We discussed above how the same method for constructing and validating stories has been used by us to portray other user and context characteristics such as stressors and user attitudes. We would like to extend this work by developing validated stories for portraying affective state, based on existing self-report affect scales. Similarly, we are interested in developing stories that reliably express other aspects such as learner performance and learner effort (a starting point towards the latter has been made in Okpo et al. 2017). When constructing such stories, care needs to be taken to avoid unintentionally evoking personality. For example, a learner who always performs well could be perceived as being highly conscientious, even when this was not the case. Another interesting area for validated story development may be to portray cultural differences (in line with Hofstede's work on cultural dimensions; Hofstede 1983).
In summary, whilst there has been substantial research effort on obtaining user-personality, there has been only very limited work on reliably expressing user personality. This paper has provided a methodology for doing so through validated personality stories, and has also shown that these stories can be used as an additional light-weight method for obtaining user personality.
http://www.chambers.co.uk.
Personality is only one of many user characteristics that may impact user behaviour (Okpo et al. 2018). Other user characteristics include cognitive and physical ability (Loitsch et al. 2017), knowledge (Pelánek 2017), interests (Piao and Breslin 2018), and affect (Mizgajski and Morzy 2018; Grawemeyer et al. 2017). Additionally, situational factors, norms, and roles may moderate the impact of personality (Harland et al. 2007). Researchers normally investigate adaptation to personality together with other factors.
vs 'Neuroticism (N)'. Referring to this trait in this way is more consistent with the nomenclature of the other four traits (with higher scores implying more "positive" personalities), and removes the need to invert this trait's score in analysis.
Based on the work by Perloff (2010), future work may include adapting to other personality traits such as self-monitoring, need for cognition, dogmatism and argumentativeness.
This is not an exhaustive list of traits, but a selection intended to convey the methodology, that we required for our other research.
Reproduced here to clarify how the stories were created; please refer to the original paper Schwarzer and Jerusalem (1995) when using the questionnaire.
This research was developed in the e-learning domain, where previous literature identified GSE, FFM and locus of control as salient adaptation characteristics. Therefore it would be desirable to have stories that isolated these traits, hence their inclusion.
As explained later, these stories needed alterations, and therefore crowd-sourcing was used to recruit the much larger number of participants required. The use of crowd-sourcing meant that we no longer investigated how the FFM stories were rated on the GSE and LOC scales, as including these scales would make the experiments too cumbersome and time consuming for participants on this platform.
Throughout this paper we use parametric measures to analyse Likert data. The conventional way to analyse personality tests is to total or average the score for the questions that relate to each factor; this indicates that the developers of these validated questionnaires intend the Likert scale items to be treated as numerical items. Indeed, the analyses of these questionnaires are generally provided by the scale developer using parametric methods. Whether to use a Mann–Whitney or t test on Likert data is debatable; Likert scales are commonly analysed using a t test, though there is good reason to treat them as non-parametric data. However, in practical application it has been found that there is little to no difference in the outcome, especially in the likelihood of Type 1 error (De Winter and Dodou 2010).
The 'liked peer' data was used as it was closer to the task in our experiment i.e. rating the personality of another person. A retrospective comparison of the self-reported minimarker scores of a subset of MTurk users revealed that the means for each trait are within 1SD of the means for the Illinois population, except for Emotional Stability, where the mean in the MTurk group was higher (6.29 vs 4.90). This is sufficiently similar to make the populations comparable.
The Minimarker scale was done first, to reduce the risk of straight-lining due to tiredness. There may have been a slight order effect, however as personality is relatively stable we do not expect an impact, particularly given the stories were not constructed from the Minimarker scale.
This paper acknowledges the Northern Research Partnership and the Scottish Informatics and Computer Science Alliance, who co-funded the Ph.Ds of the first two authors. This work was partially funded by: the RCUK Digital Economy award to the dot.rural Digital Economy Hub, University of Aberdeen, award reference EP/G066051/1; and the 'Affecting People with Natural Language' EPSRC platform grant, award reference EP/E011764/1. We thank Ana Ciocarlan for her help in investigating the reliability of the personality sliders, Jacek Kopecky for his help in the GSE validation study, and the anonymous reviewers for their constructive comments.
Alhathli, M., Masthoff, J., Siddharthan, A.: Exploring the impact of extroversion on the selection of learning materials. In: Workshop on Personalization Approaches in Learning Environments (2016)
Alhathli, M., Masthoff, J., Siddharthan, A.: Should learning material's selection be adapted to learning style and personality? In: Adjunct Proceedings of UMAP Conference, pp. 275–280. ACM (2017)
Alkiş, N., Temizel, T.: The impact of individual differences on influence strategies. Pers. Individ. Dif. 87, 147–152 (2015)
Amichai-Hamburger, Y., Vinitzky, G.: Social network use and personality. Comput. Hum. Behav. 26(6), 1289–1295 (2010)
Anagnostopoulou, E., Magoutas, B., Bothos, E., Schrammel, J., Orji, R., Mentzas, G.: Exploring the links between persuasion, personality and mobility types in personalized mobility applications. In: Persuasive Technology'17, pp. 107–118. Springer (2017)
Appel, A.P., Candello, H., de Souza, B.S., Andrade, B.D.: Destiny: a cognitive mobile guide for the olympics. In: Proceedings of WWW'16, pp. 155–158 (2016)
Arteaga, S.M., Kudeki, M., Woodworth, A., Kurniawan, S.: Mobile system to motivate teenagers' physical activity. In: International Conference on Interaction Design and Children, pp. 1–10. ACM, NY, USA (2010)
Bachrach, Y., Kosinski, M., Graepel, T., Kohli, P., Stillwell, D.: Personality and patterns of facebook usage. In: Web Science, pp. 24–32 (2012)
Bandura, A.: Self-efficacy. Wiley Online Library, London (1994)
Bandura, A.: Exercise of Personal and Collective Efficacy in Changing Societies. Self-Efficiency in Changing Society of Australia. Cambridge University Press, Cambridge (1995)
Barrows, H.S.: Simulated (standardized) patients and other human simulations. Health Sciences Consortium (1987)
Biel, J.I., Gatica-Perez, D.: The youtube lens: crowdsourced personality impressions and audiovisual analysis of vlogs. IEEE Trans. Multimed. 15(1), 41–55 (2013)
Borgatta, E.F.: The structure of personality characteristics. Behav. Sci. 9(1), 8–17 (1964)
Braunhofer, M., Elahi, M., Ricci, F.: User personality and the new user problem in a context-aware point of interest recommender system. In: Information and Communication Technologies in Tourism 2015, pp. 537–549. Springer (2015)
Buss, A.H., Plomin, R.: Temperament: early developing personality traits. L. Erlbaum Associates, Hillsdale, NJ (1984)
Calvo, R.A., D'Mello, S., Gratch, J., Kappas, A.: The Oxford Handbook of Affective Computing. Oxford Library of Psychology. Oxford University Press, Oxford (2015)
Cantador, I., Fernández-Tobías, I., Bellogín, A.: Relating personality types with user preferences in multiple entertainment domains. In: Workshop on Emotions and Personality in Personalized Services (2013)
Carlson, N.R., Martin, G.N., Buskist, W.: Psychology, 2nd edn. Pearson Education Ltd., London (2004)
Cattell, R.B.: Personality and Motivation Structure and Measurement. World Book Co., New York (1957)
Celli, F., Rossi, L.: The role of emotional stability in twitter conversations. In: Workshop on Semantic Analysis in Social Media, pp. 10–17. ACL (2012)
Chamorro-Premuzic, T.: Personality and Individual Differences, 2nd edn. BPS Blackwell, Oxford (2011)
Chen, G., Gully, S.M., Eden, D.: Validation of a new general self-efficacy scale. Organ. Res. Methods 4(1), 62–83 (2001)
Chen, J., Haber, E., Kang, R., Hsieh, G., Mahmud, J.: Making use of derived personality: the case of social media ad targeting. In: ICWSM (2015)
Chen, L., Wu, W., He, L.: Personality and recommendation diversity. In: Tkalcic, M., De Carolis, B., de Gemmis, M., Odic, A., Košir, A. (eds.) Emotions and Personality in Personalized Services. Human-Computer Interaction Series. Springer, Cham (2016)
Cialdini, R.B.: Harnessing the science of persuasion. Harv. Bus. Rev. 79(9), 72–81 (2001)
Ciocarlan, A., Masthoff, J., Oren, N.: Qualitative study into adapting persuasive games for mental wellbeing to personality, stressors and attitudes. In: Adjunct Publication of UMAP'17, pp. 402–407. ACM (2017)
Ciocarlan, A., Masthoff, J., Oren, N.: Kindness is contagious: Study into exploring engagement and adapting persuasive games for wellbeing. In: Proceedings of the 26th Conference on User Modeling, Adaptation and Personalization, UMAP'18, pp. 311–319. ACM, New York, NY, USA (2018). https://doi.org/10.1145/3209219.3209233
Ciocarlan, A., Masthoff, J., Oren, N.: Actual persuasiveness: impact of personality, age and gender on message type susceptibility. In: Proceedings of the Persuasive Technology Conference. Springer (2019)
Conati, C., Maclaren, H.: Empirically building and evaluating a probabilistic model of user affect. UMUAI 19(3), 267–303 (2009)
Connor, K.M., Davidson, J.R.: Development of a new resilience scale: the Connor–Davidson resilience scale (cd-risc). Depress Anxiety 18(2), 76–82 (2003)
Costa, P.T., McCrae, R.R.: NEO Personality Inventory–Form R (1985)
Costa, P.T., McCrae, R.R.: The revised neo personality inventory (neo-pi-r). In: The SAGE Handbook of Personality Theory and Assessment 2, pp. 179–198. SAGE Publications Inc (2008)
Cowley, B., Charles, D.: Behavlets: a method for practical player modelling using psychology-based player traits and domain specific features. UMUAI 26(2), 257–306 (2016)
de Vries, R.A., Truong, K.P., Evers, V.: Crowd-designed motivation: combining personality and the transtheoretical model. In: International Conference on Persuasive Technology, pp. 41–52. Springer (2016)
de Vries, R.A., Truong, K.P., Zaga, C., Li, J., Evers, V.: A word of advice: how to tailor motivational text messages based on behavior change theory to personality and gender. Pers. Ubiquitous Comput. 21(4), 675–687 (2017)
De Winter, J.C., Dodou, D.: Five-point likert items: t test versus Mann–Whitney–Wilcoxon. Pract. Assess. Res. Eval. 15(11), 2 (2010)
Dennis, M., Masthoff, J., Pain, H., Mellish, C.: Does self-efficacy matter when generating feedback? In: Biswas, G., Bull, S., Kay, J., Mitrovic, A. (eds.) Artificial Intelligence in Education, pp. 444–446. Springer, Berlin (2011)
Dennis, M., Masthoff, J., Mellish, C.: Adapting performance feedback to a learner's conscientiousness. In: UMAP, pp. 297–302. Springer (2012a)
Dennis, M., Masthoff, J., Mellish, C.: The quest for validated personality trait stories. In: IUI, pp. 273–276. ACM (2012b)
Dennis, M., Masthoff, J., Mellish, C.: Does learner conscientiousness matter when generating emotional support in feedback? In: Affective Computing and Intelligent Interaction, pp. 209–214. IEEE (2013)
Dennis, M., Smith, K., Masthoff, J., Tintarev, N.: How can skin check reminders be personalised to patient conscientiousness? PATH Workshop (2015)
Dennis, M., Masthoff, J., Mellish, C.: Adapting progress feedback and emotional support to learner personality. Int. J. Artif. Intell. Educ. 26(3), 877–931 (2016)
DeYoung, C.G., Quilty, L.C., Peterson, J.B.: Between facets and domains: 10 aspects of the big five. J. Pers. Soc. Psychol. 93(5), 880 (2007)
Digman, J.M.: Classical theories of trait organization and the big five factors of personality. In: Annual Meeting of American Psychological Association, Atlanta, GA (1988)
Digman, J.M.: Personality structure: emergence of the five-factor model. Ann. Rev. Psychol. 41(1), 417–440 (1990)
Doce, T., Dias, J., Prada, R., Paiva, A.: Creating individual agents through personality traits. In: IVA, pp. 257–264. Springer (2010)
Donnellan, M.B., Oswald, F.L., Baird, B.M., Lucas, R.E.: The mini-IPIP scales: tiny-yet-effective measures of the big five factors of personality. Psychol. Assess. 18(2), 192 (2006)
Dunn, G., Wiersema, J., Ham, J., Aroyo, L.: Evaluating interface variants on personality acquisition for recommender systems. In: UMAP'09, pp. 259–270. Springer (2009)
Eysenck, H.J.: The Structure of Human Personality (Psychology Revivals). Routledge, Abingdon (2013)
Farnadi, G., Sushmita, S., Sitaraman, G., Ton, N., De Cock, M., Davalos, S.: A multivariate regression approach to personality impression recognition of vloggers. In: Proceedings of WCPR at ACMMM'14, pp. 1–6. ACM (2014)
Farnadi, G., Sitaraman, G., Sushmita, S., Celli, F., Kosinski, M., Stillwell, D., Davalos, S., Moens, M.F., De Cock, M.: Computational personality recognition in social media. UMUAI 26(2), 109–142 (2016)
Fernández-Tobías, I., Braunhofer, M., Elahi, M., Ricci, F., Cantador, I.: Alleviating the new user problem in collaborative filtering by exploiting personality information. UMUAI 26, 221–255 (2016)
Ferwerda, B., Yang, E., Schedl, M., Tkalcic, M.: Personality traits predict music taxonomy preferences. In: CHI Ext. Abstracts, pp. 2241–2246. ACM (2015)
Fiske, D.W.: Consistency of the factorial structures of personality ratings from different sources. J. Abnorm. Soc. Psychol. 44(3), 329 (1949)
Forbes, P., Gabrielli, S., Maimone, R., Masthoff, J., Wells, S., Jylhä, A.: Towards using segmentation-based techniques to personalize mobility behavior interventions. ICST Trans. Ambient Syst. 1(4), e4 (2014)
Gao, R., Hao, B., Bai, S., Li, L., Li, A., Zhu, T.: Improving user profile with personality traits predicted from social media content. In: Recommender Systems, pp. 355–358. ACM (2013)
Golbeck, J., Robles, C., Turner, K.: Predicting personality with social media. In: CHI Extended Abstracts, pp. 253–262. ACM (2011)
Goldberg, L.: The structure of phenotypic personality traits. Am. Psychol. 48, 26–34 (1993)
Goldberg, L.R., Johnson, J.A., Eber, H.W., Hogan, R., Ashton, M.C., Cloninger, C.R., Gough, H.C.: The international personality item pool and the future of public-domain personality measures. J. Res. Pers. 40, 84–96 (2006)
Goolkasian, P.: The locus of control (2009). http://www.psych.uncc.edu/pagoolka/LC.html. Accessed 1 Mar 2019
Gosling, S.D., Rentfrow, P.J., Swann Jr., W.B.: A very brief measure of the big-five personality domains. J. Res. Pers. 37(6), 504–528 (2003a)
Gou, L., Mahmud, J., Haber, E., Zhou, M.: Personalityviz: a visualization tool to analyze people's personality with social media. In: Adj. Proceedings of IUI, pp. 45–46. ACM (2013)
Gow, A.J., Whiteman, M.C., Pattie, A., Deary, I.J.: Goldberg's ipip big-five factor markers: internal consistency and concurrent validation in scotland. Pers. Individ. Dif. 39(2), 317–329 (2005)
Grawemeyer, B., Mavrikis, M., Holmes, W., Gutiérrez-Santos, S., Wiedmann, M., Rummel, N.: Affective learning: improving engagement and enhancing learning with affect-aware feedback. User Model. User-adapt Interact. 27(1), 119–158 (2017)
Graziano, W.G., Jensen-Campbell, L.A., Finch, J.F.: The self as a mediator between personality and adjustment. J. Pers. Soc. Psychol. 73(2), 392 (1997)
Grumm, M., von Collani, G.: Measuring big-five personality dimensions with the implicit association test-implicit personality traits or self-esteem? Pers. Individ. Dif. 43(8), 2205–2217 (2007)
Guilford, J.P.: Factors and factors of personality. Psychol. Bull. 82(5), 802 (1975)
Halko, S., Kientz, J.A.: Personality and persuasive technology: an exploratory study on health-promoting mobile applications. In: International Conference on Persuasive Technology, pp. 150–161. Springer (2010)
Harland, P., Staats, H., Wilke, H.A.: Situational and personality factors as direct or personal norm mediated predictors of pro-environmental behavior: questions derived from norm-activation theory. Basic Appl. Soc. Psychol. 29(4), 323–334 (2007)
Harley, J.M., Carter, C.K., Papaionnou, N., Bouchet, F., Landis, R.S., Azevedo, R., Karabachian, L.: Examining the predictive relationship between personality and emotion traits and students' agent-directed emotions: towards emotionally-adaptive agent-based learning environments. UMUAI 26(2–3), 177–219 (2016)
Hartman, R.O., Betz, N.E.: The five-factor model and career self-efficacy: general and domain-specific relationships. J. Career Assess. 15(2), 145–161 (2007)
Hirsh, J.B., Kang, S.K., Bodenhausen, G.V.: Personalized persuasion: Tailoring persuasive appeals to recipients' personality traits. Psychol. Sci. 23(6), 578–581 (2012)
Hjemdal, O., Vogel, P.A., Solem, S., Hagen, K., Stiles, T.C.: The relationship between resilience and levels of anxiety, depression, and obsessive–compulsive symptoms in adolescents. Clin. Psychol. Psychot. 18(4), 314–321 (2011)
Hofstede, G.: National cultures in four dimensions: a research-based theory of cultural differences among nations. Int. Stud. Manag. Organ. 13(1–2), 46–74 (1983)
Hogan, R.: Manual for the Hogan personality inventory (1986)Google Scholar
Hu, R., Pu, P.: Enhancing collaborative filtering systems with personality information. In: Proceedings of RecSys'11, pp. 197–204. ACM (2011)Google Scholar
Iacobelli, F., Gill, A.J., Nowson, S., Oberlander, J.: Large scale personality classification of bloggers. In: Proceedings of ACII'11, pp. 568–577. Springer (2011)Google Scholar
Irwin, J.R., McClelland, G.H.: Negative consequences of dichotomizing continuous predictor variables. J. Mark. Res. 40(3), 366–371 (2003)Google Scholar
Jackson, D.N., Messick, S.: Content and style in personality assessment. Psychol. Bull. 55(4), 243 (1958)Google Scholar
John, O.P., Srivastava, S.: The Big Five trait taxonomy: history, measurement, and theoretical perspectives. In: Pervin, L.A., John, O.P. (eds.) Handbook of Personality. Elsevier (1999)Google Scholar
Josekutty Thomas, R., Masthoff, J., Oren, N.: Personalising healthy eating messages to age, gender and personality: using cialdini's principles and framing. In: Adj. Proceedings IUI, pp. 81–84. ACM (2017)Google Scholar
Judge, T.A., Erez, A., Bono, J.E., Thoresen, C.J.: Are measures of self-esteem, neuroticism, locus of control, and generalized self-efficacy indicators of a common core construct? J. Pers. Soc. Psychol. 83(3), 693–710 (2002)Google Scholar
Kaptein, M., De Ruyter, B., Markopoulos, P., Aarts, E.: Adaptive persuasive systems: a study of tailored persuasive text messages to reduce snacking. TiiS 2(2), 10 (2012)Google Scholar
Kaptein, M., Markopoulos, P., de Ruyter, B., Aarts, E.: Personalizing persuasive technologies: explicit and implicit personalization using persuasion profiles. IJHCS 77, 38–51 (2015)Google Scholar
Kindness, P.: Designing emotional support for a virtual teammate aimed at alleviating stress. Ph.D. thesis, University of Aberdeen (2014)Google Scholar
Kompan, M., Bieliková, M.: Social structure and personality enhanced group recommendation. In: Proceedings of EMPIRE Workshop'14 (2014)Google Scholar
Koole, S.L., Jager, W., van den Berg, A.E., Vlek, C.A., Hofstee, W.K.: On the social nature of personality: effects of extraversion, agreeableness, and feedback about collective resource use on cooperation in a resource dilemma. Pers. Soc. Psychol. Bull. 27(3), 289–301 (2001)Google Scholar
Kosinski, M.: Mypersonality (2012). http://www.mypersonality.org. Accessed 1 Mar 2019
Kosinski, M., Bachrach, Y., Kohli, P., Stillwell, D., Graepel, T.: Manifestations of user personality in website choice and behaviour on online social networks. Mach. Learn. 95(3), 357–380 (2014)MathSciNetGoogle Scholar
Kulik, J.A.: Confirmatory attribution and the perpetuation of social beliefs. J. Pers. Soc. Psychol. 44(6), 1171 (1983)Google Scholar
Leontidis, M., Halatsis, C., Grigoriadou, M.: Using an affective multimedia learning framework for distance learning to motivate the learner effectively. Int. J. Learn. Technol. 6(3), 223–250 (2011)Google Scholar
Lepri, B., Staiano, J., Shmueli, E., Pianesi, F., Pentland, A.: The role of personality in shaping social networks and mediating behavioral change. UMUAI 26(2–3), 143–175 (2016)Google Scholar
LLC, T.P.: The big five personality test (2018). https://www.truity.com/test/big-five-personality-test. Accessed 1 Mar 2019
Loitsch, C., Weber, G., Kaklanis, N., Votis, K., Tzovaras, D.: A knowledge-based approach to user interface adaptation from preferences and for special needs. User Model. User-Adapted Interact. 27(3–5), 445–491 (2017)Google Scholar
Lorr, M.: Interpersonal style inventory (ISI): Manual. Western Psychological Services (1986)Google Scholar
Luchins, A.S.: Definitiveness of impression and primacy–recency in communications. J. Soc. Psychol. 48(2), 275–290 (1958)Google Scholar
Magai, C., McFadden, S.: The Role of Emotions in Social and Personality Development. Plenum Press, New York (1995)Google Scholar
Masthoff, J.: The user as wizard: A method for early involvement in the design and evaluation of adaptive systems. In: Proceedings of UCDEAS, UMAP '06, vol. 1, pp. 460–469 (2006)Google Scholar
Masthoff, J.: Group Recommender Systems: Aggregation, Satisfaction and Group Attributes, pp. 743–776. Springer, Berlin (2015)Google Scholar
McCrae, R.R., Costa Jr., P.T.: A contemplated revision of the neo five-factor inventory. Pers. Individ. Dif 36(3), 587–596 (2004)Google Scholar
McCrae, R.R., John, O.P.: An introduction to the five-factor model and its applications. J. Pers. 60(2), 175–215 (1992)Google Scholar
McQuiggan, S., Mott, B., Lester, J.: Modeling self-efficacy in intelligent tutoring systems: an inductive approach. UMUAI 18(1–2), 81–123 (2008)Google Scholar
Mizgajski, J., Morzy, M.: Affective recommender systems in online news industry: how emotions influence reading choices. User Model. User-Adapt. Interact. (2018). https://doi.org/10.1007/s11257-018-9213-x
Moncur, W., Masthoff, J., Reiter, E., Freer, Y., Nguyen, H.: Providing adaptive health updates across the personal social network. Hum. Comput. Interact. 29(3), 256–309 (2014)Google Scholar
MT: Amazon mechanical turk. (2012). http://www.mturk.com. Accessed 1 Mar 2019
Nacke, L.E., Bateman, C., Mandryk, R.L.: Brainhex: a neurobiological gamer typology survey. Entertain. Comput. 5(1), 55–62 (2014). https://doi.org/10.1016/j.entcom.2013.06.002 Google Scholar
Nguyen, T.T., Harper, F.M., Terveen, L., Konstan, J.A.: User personality and user satisfaction with recommender systems. Inform. Syst. Front. 20(6), 1173–1189 (2017)Google Scholar
Nguyen, H., Ruiz, C., Wilson, V., Strong, D., Djamasbi, S.: Using personality traits and chronotype to support personalization and feedback in a sleep health behavior change support system. In: Proceedings of HICSS'18 (2018)Google Scholar
Norman, W.T.: Toward an adequate taxonomy of personality attributes: replicated factor structure in peer nomination personality ratings. J. Abnorm. Soc. Psychol. 66(6), 574 (1963)Google Scholar
Nov, O., Arazy, O.: Personality-targeted design: theory, experimental procedure, and preliminary results. In: CSCW, pp. 977–984. ACM (2013)Google Scholar
Nov, O., Arazy, O., López, C., Brusilovsky, P.: Exploring personality-targeted UI design in online social participation systems. In: Proceedings of CHI'13, pp. 361–370. ACM (2013)Google Scholar
Nowson, S., Oberlander, J.: Identifying more bloggers. In: ICWSM (2007)Google Scholar
Nunes, M.A.S.N.: Recommender systems based on personality traits. Ph.D. thesis, Universite Montpellier 2 (2008)Google Scholar
Oberlander, J., Nowson, S.: Whose thumb is it anyway?: classifying author personality from weblog text. In: COLING/ACL, pp. 627–634 (2006)Google Scholar
Odić, A., Tkalčič, M., Tasic, J.F., Košir, A.: Personality and social context: impact on emotion induction from movies. In: Workshop on Emotions and Personality in Personalized Services (2013)Google Scholar
Okpo, J., Dennis, M., Masthoff, J., Smith, K.A., Beacham, N.A.: Exploring requirements for an adaptive exercise selection system. In: UMAP (Extended Proceedings) (2016a)Google Scholar
Okpo, J., Dennis, M., Smith, K.A., Masthoff, J., Beacham, N.: Adapting exercise selection to learner self-esteem and performance. In: Intelligent Tutoring Systems, p. 517. Springer (2016b)Google Scholar
Okpo, J., Masthoff, J., Dennis, M., Beacham, N., Ciocarlan, A.: Investigating the impact of personality and cognitive efficiency on the selection of exercises for learners. In: Proceedings of UMAP'17, pp. 140–147. ACM (2017)Google Scholar
Okpo, J.A., Masthoff, J., Dennis, M., Beacham, N.: Adapting exercise selection to performance, effort and self-esteem. New Rev. Hypermedia Multimed. 24(3), 1–32 (2018)Google Scholar
Orji, R., Vassileva, J., Mandryk, R.L.: Modeling the efficacy of persuasive strategies for different gamer types in serious games for health. UMUAI 24(5), 453–498 (2014)Google Scholar
Orji, R., Nacke, L.E., Di Marco, C.: Towards personality-driven persuasive health games and gamified systems. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 1015–1027. ACM (2017)Google Scholar
Orji, R., Tondello, G.F., Nacke, L.E.: Personalizing persuasive strategies in gameful systems to gamification user types. Studies 61, 62 (2018)Google Scholar
O'Rourke, N., Kupferschmidt, A.L., Claxton, A., Smith, J.Z., Chappell, N., Beattie, B.L.: Psychological resilience predicts depressive symptoms among spouses of persons with Alzheimer disease over time. Aging Ment. Health 14(8), 984–993 (2010)Google Scholar
Oyibo, K., Orji, R., Vassileva, J.: Investigation of the influence of personality traits on Cialdini's persuasive strategies. In: Proceedings of PPT, Persuasive Technology'17 (2017)Google Scholar
Paramythis, A., Weibelzahl, S., Masthoff, J.: Layered evaluation of interactive adaptive systems: framework and formative methods. UMUAI 20(5), 383–453 (2010)Google Scholar
Park, G., Schwartz, H.A., Eichstaedt, J.C., Kern, M.L., Kosinski, M., Stillwell, D.J., Ungar, L.H., Seligman, M.E.: Automatic personality assessment through social media language. J. Pers. Soc. Psychol. 108(6), 934 (2015)Google Scholar
Peabody, D., Goldberg, L.R.: Some determinants of factor structures from personality-trait descriptors. J. Pers. Soc. Psychol. 57(3), 552 (1989)Google Scholar
Pelánek, R.: Bayesian knowledge tracing, logistic models, and beyond: an overview of learner modeling techniques. User Model. User-Adapt. Interact. 27(3–5), 313–350 (2017)Google Scholar
Perloff, R.M.: The Dynamics of Persuasion: Communication and Attitudes in the Twenty-First Century. Routledge, Abingdon (2010)Google Scholar
Piao, G., Breslin, J.G.: Inferring user interests in microblogging social networks: a survey. User Model. User-Adapt. Interact. 28(3), 277–329 (2018)Google Scholar
Polzehl, T.: Personality in Speech: Assessment and Automatic Classification. Springer, Berlin (2014)Google Scholar
Quercia, D., Kosinski, M., Stillwell, D., Crowcroft, J.: Our twitter profiles, our selves: predicting personality with twitter. In: Proceeding of PASSAT, SocialCom'11, pp. 180–185 (2011)Google Scholar
Quercia, D., Lambiotte, R., Stillwell, D., Kosinski, M., Crowcroft, J.: The personality of popular facebook users. In: CSCW, pp. 955–964 (2012)Google Scholar
Quijano-Sanchez, L., Recio-Garcia, J.A., Diaz-Agudo, B.: Personality and social trust in group recommendations. In: International Conference on Tools with Artificial Intelligence, vol. 2, pp. 121–126. IEEE (2010)Google Scholar
Rammstedt, B., John, O.P.: Measuring personality in one minute or less: a 10-item short version of the big five inventory in English and German. J. Res. Pers. 41(1), 203–212 (2007)Google Scholar
Rao, D., Yarowsky, D.: Detecting latent user properties in social media. In: Proceedings of the NIPS MLSN Workshop, pp. 1–7. Citeseer (2010)Google Scholar
Rawlings, D., Ciancarelli, V.: Music preference and the five-factor model of the neo personality inventory. Psychol. Music 25(2), 120–132 (1997)Google Scholar
Robison, J., McQuiggan, S., Lester, J.: Developing empirically based student personality profiles for affective feedback models. In: Intelligent Tutoring Systems, pp. 285–295. Springer, Berlin (2010)Google Scholar
Rojas, M., Masip, D., Todorov, A., Vitria, J.: Automatic prediction of facial trait judgments: appearance vs. structural models. PloS ONE 6(8), e23,323 (2011)Google Scholar
Ross, C., Orr, E.S., Sisic, M., Arseneault, J.M., Simmering, M.G., Orr, R.R.: Personality and motivations associated with facebook use. Comput. Hum. Behav. 25(2), 578–586 (2009)Google Scholar
Rotter, J.: Generalized expectancies for internal versus external control of reinforcement. Psychol. Monogr. 80, 1–26 (1966)Google Scholar
Santos, O.C., Saneiro, M., Salmeron-Majadas, S., Boticario, J.G.: A methodological approach to eliciting affective educational recommendations. In: International Conference on Advanced Learning Technologies, pp. 529–533 (2014)Google Scholar
Santos, O.C., Saneiro, M., Boticario, J.G., Rodriguez-Sanchez, M.: Toward interactive context-aware affective educational recommendations in computer-assisted language learning. New Rev. Hypermedia Multimed. 22(1–2), 27–57 (2016)Google Scholar
Sarsam, S.M., Al-Samarraie, H.: Towards incorporating personality into the design of an interface: a method for facilitating users' interaction with the display. User Model. User-Adapt. Interact. 28(1), 75–96 (2018)Google Scholar
Saucier, G.: Mini-markers: a brief version of goldberg's unipolar big-five markers. J. Pers. Assess. 63(3), 506–516 (1994a)Google Scholar
Saucier, G.: Normative values for some large samples (1994b). https://pages.uoregon.edu/gsaucier/MINIMARK.doc. Accessed 1 Mar 2019
Schiavo, G., Cappelletti, A., Mencarini, E., Stock, O., Zancanaro, M.: Influencing participation in group brainstorming through ambient intelligence. Int. J. Hum. Comput. Interact. 32(3), 258–276 (2016)Google Scholar
Schwarzer, R., Jerusalem, M.: Generalized self-efficacy scale. In: Weinman, J., Wright, S., M.J (eds.) Measures in health psychology: a user's portfolio. Causal and control beliefs, pp. 35–37. NFER-NELSON (1995)Google Scholar
Smith, K.A.: Exploring personalised emotional support. Ph.D. thesis, University of Aberdeen (2016)Google Scholar
Smith, K.A., Masthoff, J.: Can a virtual agent provide good emotional support? In: Proceedings of 32nd BCS HCI Conference, Belfast, UK, 2018. BCS Learning and Development Ltd. (2018)Google Scholar
Smith, B.W., Dalen, J., Wiggins, K., Tooley, E., Christopher, P., Bernard, J.: The brief resilience scale: assessing the ability to bounce back. Int. J. Behav. Med. 15, 194–200 (2008)Google Scholar
Smith, B., Tooley, E., Christopher, P., Kay, V.: Resilience as the ability to bounce back from stress: a neglected personal resource? J. Posit. Psychol. 5(3), 166–176 (2010)Google Scholar
Smith, K.A., Masthoff, J., Tintarev, N., Moncur, W.: The development and evaluation of an emotional support algorithm for carers. Intell. Artif. 8(2), 181–196 (2014)Google Scholar
Smith, K.A., Masthoff, J., Tintarev, N., Moncur, W.: Adapting emotional support to personality for carers experiencing stress. In: International Workshop on Personalisation and Adaptation in Technology for Health—UMAP 2015 Adjunct Proceedings (2015)Google Scholar
Smith, K.A., Dennis, M., Masthoff, J.: Personalizing reminders to personality for melanoma self-checking. In: UMAP, pp. 85–93. ACM (2016)Google Scholar
Soldz, S., Vaillant, G.E.: The big five personality traits and the life course: a 45-year longitudinal study. J. Res. Pers. 33(2), 208–232 (1999)Google Scholar
Soto, C.J., John, O.P.: The next big five inventory (bfi-2): developing and assessing a hierarchical model with 15 facets to enhance bandwidth, fidelity, and predictive power. J. Pers. Soc. Psychol. 113(1), 117 (2017)Google Scholar
Southwick, S.M., Charney, D.S.: The science of resilience: implications for the prevention and treatment of depression. Science 338(6103), 79–82 (2012)Google Scholar
Srivastava, S.: Measuring the big five personality factors (2012). http://psdlab.uoregon.edu/bigfive.html. Accessed 1 Mar 2019
Staiano, J., Lepri, B., Subramanian, R., Sebe, N., Pianesi, F.: Automatic modeling of personality states in small group interactions. In: International conference on Multimedia, pp. 989–992. ACM (2011)Google Scholar
Taylor, W.L.: Cloze procedure: a new tool for measuring readability. Journal. Q. 30, 415–433 (1953)Google Scholar
Tellegen, A.: Structures of Mood and Personality and Their Relevance to Assessing Anxiety, with an Emphasis on Self-Report. Lawrence Erlbaum Associates Inc, New Jersey (1985)Google Scholar
Thomas, K.W.: Thomas–kilmann conflict mode. TKI Profile and Interpretive Report, pp. 1–11 (2008)Google Scholar
Thomas, R., Masthoff, J., Oren, N.: Adapting healthy eating messages to personality. In: Persuasive Technology, pp. 119–132. Springer (2017)Google Scholar
Tintarev, N., Dennis, M., Masthoff, J.: Adapting recommendation diversity to openness to experience: a study of human behaviour. In: UMAP, pp. 190–202. Springer (2013)Google Scholar
Tkalčič, M., Chen, L.: Personality and recommender systems. In: Ricci, F., Rokach, L., Shapira, B. (eds.) Recommender Systems Handbook, pp. 715–739. Springer, Berlin (2015)Google Scholar
Tkalčič, M., Kunaver, M., Košir, A., Tasic, J.: Addressing the new user problem with a personality based user similarity measure. In: Proceedings of DEMRA Workshop at UMAP'11, p. 106 (2011)Google Scholar
Tkalčič, M., Quercia, D., Graf, S.: Preface to the special issue on personality in personalized systems. UMUAI 26(2), 103–107 (2016)Google Scholar
Tondello, G.F., Wehbe, R.R., Diamond, L., Busch, M., Marczewski, A., Nacke, L.E.: The gamification user types hexad scale. In: Proceedings of CHI PLAY'16, pp. 229–243. ACM (2016)Google Scholar
Tupes, E.C., Christal, R.E.: Recurrent personality factors based on trait ratings. J. Person. 60(2), 225–251 (1992)Google Scholar
Vinciarelli, A., Mohammadi, G.: A survey of personality computing. IEEE Trans. Affect. Comput. 5(3), 273–291 (2014)Google Scholar
Weinberg, J.D., Freese, J., McElhattan, D.: Comparing data characteristics and results of an online factorial survey between a population-based and a crowdsource-recruited sample. Sociol. Sci. 1, 292–310 (2014)Google Scholar
Wohn, D.Y., Wash, R.: A virtual "room" with a cue: detecting personality through spatial customization in a city simulation game. Comput. Hum. Behav. 29(1), 155–159 (2013)Google Scholar
Wu, W., Chen, L.: Implicit acquisition of user personality for augmenting movie recommendations. In: UMAP, pp. 302–314. Springer (2015)Google Scholar
Wu, W., Chen, L., Zhao, Y.: Personalizing recommendation diversity based on user personality. User Model. User-Adapt. Interact. 28(3), 237–276 (2018)Google Scholar
Yee, N., Ducheneaut, N., Nelson, L., Likarish, P.: Introverted elves and conscientious gnomes: the expression of personality in world of warcraft. In: CHI, pp. 753–762. ACM, New York, NY, USA (2011)Google Scholar
Youyou, W., Kosinski, M., Stillwell, D.: Computer-based personality judgments are more accurate than those made by humans. Proc. Natl. Acad. Sci. 112(4), 1036–1040 (2015)Google Scholar
Zeng, Z., Pantic, M., Roisman, G.I., Huang, T.S.: A survey of affect recognition methods: audio, visual, and spontaneous expressions. IEEE Trans. Pattern Anal. Mach. Intell. 31(1), 39–58 (2009)Google Scholar
Zhang, C., Conrad, F.: Speeding in web surveys: the tendency to answer very fast and its association with straightlining. Surv. Res. Methods 8, 127–135 (2014)Google Scholar
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Email authorView author's OrcID profile
1.University of SouthamptonSouthamptonUK
2.University of PortsmouthPortsmouthUK
3.University of AberdeenAberdeenUK
4.Utrecht UniversityUtrechtNetherlands
5.TU DelftDelftNetherlands
Smith, K.A., Dennis, M., Masthoff, J. et al. User Model User-Adap Inter (2019) 29: 573. https://doi.org/10.1007/s11257-019-09219-6
First Online 19 March 2019 | CommonCrawl |
Tag Archives: π day
The 3rd annual π day anime and mathematics post: A symmetric group of friends of degree 5
「ふいにコネクト」/「ものくろあくたー。」
It's that day of the year again.
Kokoro Connect's premise made a lot of people raise their eyebrows, because really, what good can come from body-switching shenanigans? Well, let's think about this for a second. We have a group of five kids, and every once in a while they switch into each other's bodies at random. What does that sound like? That's right, a permutation!
Interestingly enough, the idea of connecting body-switching with permutations isn't new. The Futurama writers did it and apparently got a new theorem out of it. What differs between Kokoro Connect and Futurama is that in Futurama, the body-switching could only happen in twos; these swaps are called transpositions. Obviously, this isn't the case for Kokoro Connect. This doesn't make too much of a difference since it turns out we can write out any permutation we want as a series of transpositions, but that wouldn't be very fun for Heartseed.
We write permutations in the following way. If we let Taichi = 1, Iori = 2, Inaban = 3, Aoki = 4, and Yui = 5, we'll have $(1 2 3 4 5)$ representing the identity permutation, when everyone's in their own body. If Heartseed wanted to make Aoki and Yui switch places, he'd apply the following permutation
$$ \left( \begin{array}{ccccc} 1&2&3&4&5 \\ 1&2&3&5&4 \end{array} \right) $$
While it's helpful for seeing exactly what goes where, especially when we start dealing with multiple permutations, this notation is a bit cumbersome, so we'll only write the second line ($(12354)$) to specify a permutation.
For the purposes of this little exercise, we'll consider applying a permutation as taking whoever's currently in a given body. That is, say we permute Aoki and Taichi to get $(4 2 3 1 5)$. In order to get everyone back into their own bodies, we have to apply $(4 2 3 1 5)$ again, which takes Aoki, who's in Taichi's body, back into Aoki's body.
So let's begin with something simple. How many different ways are there for the characters to body switch? Both who is switched and who they switch with is entirely random. Again, since the switches aren't necessarily transpositions, this means that we can end up with cycles like in episode 2, when Yui, Inaban, and Aoki all get switched at the same time. This can be written as $(1 2 4 5 3)$.
But this is just the number of permutations that can happen on a set of five elements, which is just 5! = 120. Of course, that includes the identity permutation, which just takes all elements to themselves, so the actual number of different ways the characters can be swapped is actually 119.
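If you want to sanity-check that count, a couple of lines of Python do it (this is my own toy sketch, not anything from the show; the variable names are just for illustration):

```python
from itertools import permutations

bodies = (1, 2, 3, 4, 5)  # Taichi, Iori, Inaban, Aoki, Yui
all_swaps = list(permutations(bodies))
print(len(all_swaps))      # 120 = 5!
print(len(all_swaps) - 1)  # 119, once we exclude the identity permutation
```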
Anyhow, we can gather up all of these different permutations into a set and give it the function composition operation and it becomes a group. A group $(G,\cdot)$ is an algebraic structure that consists of a set $G$ and an operation $\cdot$ which satisfy the group axioms:
Closure: for every $a$ and $b$ in $G$, $a\cdot b$ is also in $G$
Associativity: for every $a$, $b$, and $c$ in $G$, $(a\cdot b)\cdot c = a\cdot (b\cdot c)$
Identity: there exists $e$ in $G$ such that for every $a$ in $G$, $e\cdot a = a \cdot e = a$
Inverse: for every $a$ in $G$, there exists $b$ in $G$ such that $a\cdot b = b\cdot a = e$
In this case, we can think of the permutations themselves as elements of a group and we take permutation composition as the group operation. Let's go through these axioms.
Closure says that if we have two different configurations of body swaps, say Taichi and Iori ($(2 1 3 4 5)$) and Iori and Yui ($(1 5 3 4 2)$), then we can apply them one after the other and we'd still have a body swap configuration: $(2 5 3 4 1)$. That is, we won't end up with something that's not a body swap. This seems like a weird distinction to make, but it's possible to define a set that doesn't qualify as a group. Say I want to take the integers under division as a group ($(\mathbb Z, \div)$). Well, it breaks closure because 1 is an integer and 2 is an integer but $1 \div 2$ is not an integer.
Associativity says that it doesn't matter how we group our operations. Say we have three swaps, Taichi and Inaban ($(3 2 1 4 5)$), Aoki and Yui ($(1 2 3 5 4)$), and Iori and Yui ($(1 5 3 4 2)$), and we want to apply them in that order. Then, as long as they still happen in that order, it doesn't matter which composition we evaluate first. We'd have
$$((32145)(12354))(15342) = (32154)(15342) = (34152)$$
$$(32145)((12354)(15342)) = (32145)(14352) = (34152)$$
The identity means that there's a configuration that we can apply and nothing will change. That'd be $(12345)$. And inverse means that there's always a single body swap that we can make to get everyone back in their own bodies.
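To convince yourself that all of this checks out, here's a small Python sketch of my own (the `compose` helper and variable names are made up for illustration) that reproduces the compositions above:

```python
def compose(p, q):
    """'Do p, then q', with permutations written in one-line notation as 1-indexed tuples."""
    return tuple(p[q[i] - 1] for i in range(len(p)))

taichi_iori   = (2, 1, 3, 4, 5)
iori_yui      = (1, 5, 3, 4, 2)
taichi_inaban = (3, 2, 1, 4, 5)
aoki_yui      = (1, 2, 3, 5, 4)

# closure: composing two swaps gives another permutation of the five bodies
print(compose(taichi_iori, iori_yui))                  # (2, 5, 3, 4, 1)

# associativity: the grouping doesn't matter
left  = compose(compose(taichi_inaban, aoki_yui), iori_yui)
right = compose(taichi_inaban, compose(aoki_yui, iori_yui))
print(left, right, left == right)                      # both (3, 4, 1, 5, 2)

# identity and inverse: a single swap undoes itself
identity = (1, 2, 3, 4, 5)
print(compose(taichi_iori, taichi_iori) == identity)   # True
```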
As it turns out, the group of all permutations on $n$ objects is a pretty fundamental group. These groups are called the symmetric groups and are denoted by $S_n$. So the particular group we're working with is $S_5$.
So what's so special about $S_5$? Well, as it turns out it's the first symmetric group that's not solvable, a result that's from Galois theory and has a surprising consequence.
Évariste Galois was a cool dude, proving a bunch of neat stuff up until he was 20, when he got killed in a duel because of some drama which is speculated to be of the relationship kind, maybe not unlike Kokoro Connect (it probably wasn't anything like Kokoro Connect at all). Among the things that he developed was the field that's now known as Galois theory, which is named after him. What's cool about Galois theory is that it connects two previously unrelated concepts in algebra: groups and fields.
One of the most interesting things that came out of Galois theory is related to the idea of solving polynomials. I'm sure we're all familiar with the quadratic formula. Well, in case you aren't, here it is:
$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$
This neat little formula gives us an easy way to find the complex roots of any second degree polynomial. It's not too difficult to derive. And we can do that for cubic polynomials too, which takes a bit more work to derive. And if we want to really get our hands dirty, we could try deriving the general form of roots for polynomials of degree four. And wait until you try to do it for degree five polynomials.
That's because, eventually, you'll give up. Why? Well, it's not just hard, but it's impossible. There is no general formula using radicals and standard arithmetic operations for the roots for any fifth degree (or higher!) polynomial. The reason behind this is because $S_5$ is the Galois group for the general polynomial of degree 5. Unfortunately, proving that fact is a bit of a challenge to do here since it took about 11 weeks of Galois theory and group theory to get all the machinery in place, so we'll have to leave it at that.
Posted in Anime | Tagged Anime, galois theory, group theory, kokoro connect, math, π day
The 2nd annual π day anime and mathematics post
「涼宮ハルヒの消失」/「茨乃」
Happy $\pi$ day. Once again, Nadeko will bring us in:
Snowy Mountain Syndrome is the third story in The Rampage of Haruhi Suzumiya, the fifth volume of the light novel. It's the first story that has yet to be animated. It's also a story that contains the dread spectre of mathematics.
So our SOS-dan is stuck in a mysterious cabin in the middle of a snowstorm on a mountain. They find a mysterious contraption that has an equation displayed:
$$x-y=(D-1)-z$$
and they are to provide $x$, $y$, and $z$. Koizumi and Kyon are confused, but Haruhi rightly identifies this equation as Euler's polyhedron formula, which is also very often referred to as just Euler's formula. If you're referring to it in context, it doesn't matter that much, but it's useful to distinguish between all the other things that Euler discovered, which is a hell of a lot.
First, we should probably go over some basic definitions. When we talk about graphs, we're not talking about bar graphs or pie charts or the like. We're also not talking about graphs of polynomials on a cartesian plane or other such functions. Graphs are a mathematical structure which, when drawn, looks like a bunch of circles and lines.
Formally, a graph is a pair $G = (V,E)$ where $V$ is a set of vertices and $E$ is a set of edges. Vertices can be any old thing, but each edge is defined as a pair $(u,v)$ where $u$ and $v$ are vertices in $V$. When we draw graphs, we just draw a vertex as a circle and draw an edge as a line that connects the two vertices it's defined as.
And that's it! That's the most general definition of a graph, which means we can end up with a graph that's completely empty or a graph that's just a bunch of vertices with no edges in between them. We can even have multiple edges going in between two vertices. Of course, often times, we'd like to add some more constraints, depending on what we want to do with our graphs. Very often, we'd like to restrict the number of edges between two vertices to one and that's what we'll do.
Back to the formula, usually it's given as $\chi=v-e+f$, where $v$ is the number of vertices, $e$ is the number of edges, $f$ is the number of faces, and $\chi$ is called the Euler characteristic. That makes $x=v$, $y=e$, $f=z$ and $D-1=\chi$. Now, the only thing here that we haven't seen defined yet is a face. Intuitively, we can see that's just the space that's bounded by the edges.
What I find strange is the explanation in the novel that $D$ stands for the dimension of the polyhedra. As far as I know, this only works in the three-dimensional case for platonic solids. Once we generalize the structures to other kinds of polyhedra and topological surfaces, that analogy breaks down.
Anyhow, the way the formula is applied in the book is the use that I'm most familiar with, which is as a property of a planar graph. For planar graphs, $\chi=2$. In the novel, they deduce that $\chi=1$ since $D=2$ and that only works because they didn't count the large face outside of the edges as a face, which we usually do.
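Just to see the formula in action, here's a two-line sanity check (plain Python, nothing from the novel); note that the outer face is counted, which is exactly the face the SOS-dan left out:

```python
def euler_characteristic(v, e, f):
    # chi = v - e + f; equals 2 for any connected planar drawing
    return v - e + f

# K4 drawn without crossings: 4 vertices, 6 edges, 4 faces (3 inner + the outer face)
print(euler_characteristic(4, 6, 4))    # 2
# the cube's skeleton: 8 vertices, 12 edges, 6 faces
print(euler_characteristic(8, 12, 6))   # 2
```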
But what is a planar graph? Well, if you go back to our definition of a graph, you might notice that all we've done is said that it's a bunch of vertices and edges. We've said nothing about how to draw a graph. Usually, we represent vertices as circles and edges as lines in between those circles, but other than that, there's really nothing telling you what order to draw your circles in or whether your lines have to be completely straight or not or how far apart everything has to be. How you choose to represent your graph is up to you, although if you draw your graph weirdly, you might make the people trying to read it angry.
Informally, a planar graph is a graph that you can draw with none of the edges crossing each other. This seems like a kind of silly thing to be worried about, because it seems like you could just keep on drawing a graph until it works out. Well, for graphs with a lot of vertices and edges, it's not obvious, and it can be tricky even for really small graphs. For instance:
At a glance, it doesn't look like the drawing on the right is planar, but all we have to do is drag one of the vertices into the middle to get the drawing on the left and it turns out they're both the same graph, $K_4$, the complete graph of order 4.
That's where Euler's formula comes in really handy. It gives us a way of figuring out whether or not our graph is planar or not without having to fiddle around with placing edges properly and stuff. You already know how many vertices and edges you've got, so all you need to do is make sure you've got the right number of faces.
So it's probably pretty clear at this point that you can't draw every graph without the edges crossing. We can say something interesting about those graphs too, which just turns out to be another characterization of planar graphs, but oh well. But first, we have to introduce the concept of graph minors.
Suppose we have a graph $G=(V,E)$ and an edge $e=(u,v) \in E(G)$. If we contract the edge $e$, we essentially merge the two vertices into a new vertex, let's call it $w$, and every edge that had an endpoint at $u$ or $v$ now has $w$ as the corresponding endpoint. Then a graph $H$ is a graph minor of $G$ if we can delete and contract a bunch of edges in $G$ to get $H$ (or a graph that's isomorphic to $H$).
It turns out that every non-planar graph contains one of two graphs as a minor. The first is $K_5$, the complete graph of order 5:
The second is $K_{3,3}$, the complete bipartite graph on two sets of three vertices:
These two graphs are the smallest non-planar graphs, otherwise we'd be able to reduce them further to get another non-planar graph. Like I mentioned before, this is a characterization for planar graphs too, since a planar graph can't contain a $K_5$ or $K_{3,3}$ minor.
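If you'd rather not fiddle with drawings at all, a library will happily check planarity for you. Here's a quick sketch assuming you have networkx installed (it tests planarity directly rather than by hunting for minors):

```python
import networkx as nx

for name, G in [("K4", nx.complete_graph(4)),
                ("K5", nx.complete_graph(5)),
                ("K3,3", nx.complete_bipartite_graph(3, 3))]:
    is_planar, _ = nx.check_planarity(G)
    print(name, "planar" if is_planar else "not planar")
# K4 is planar; K5 and K3,3 are not
```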
I guess I'll end by saying that graphs are hella useful, especially in computer science. A lot of people complain about never using math like calc ever. If you're a developer, you'll run into graphs everywhere. It's pretty amazing how many structures and concepts can be represented by a bunch of circles and lines.
Posted in Anime | Tagged graph theory, haruhi, light novel, math, pi, π day
The (1st annual?) π day anime and mathematics lecture
Happy $\pi$ day. We'll begin with an obligatory video.
One of the reasons I enjoyed Summer Wars so much is because the main character's superpower is math. Well, okay, you say, he's really good at math, but so what? A lot of people complain about the implausibility of OZ, but those of us with a basic understanding of cryptography and number theory will have been drawn to Kenji's quick problem solving work with an eyebrow raised. So let's talk about why Kenji is a wizard.
Kenji doing some friggin mathematics
We'll start with modular arithmetic, which Kenji mentions to Natsuki on the train ride to Ueda. When we divide numbers, we often end up with remainders. Suppose we divide some integer $k$ by $N$ and we get a remainder of $r$. Then we say that $k$ and $r$ are equivalent $\bmod{N}$ and we denote that by $k = r \bmod{N}$. Because it's how division works, for any integer $k$, $r$ will be some number from $0$ to $N-1$. It turns out a lot of arithmetic operations work the same way in modular arithmetic: adding, subtracting, and multiplying numbers and then taking the modulus of the result will give you the same number as adding, subtracting, and multiplying the moduli of the numbers you started out with.
However, division doesn't work as we would expect it to. So we have to think about division (or the equivalent operation) differently. Instead of thinking of division as splitting a group of stuff into smaller groups, we'll think of it as multiplying by an inverse. What's an inverse? Well, we can try thinking of it in terms of addition. It's pretty intuitive that subtraction is the opposite of addition. If we have some integer $k$, then the additive inverse of $k$ is $-k$. When we add $k$ and $-k$, we get $0$, the additive identity. The identity is just the special number that we can add to anything and get that same thing back unchanged ($n+0 = n$). In the same way, if we multiply $k$ by its inverse, $k^{-1}$, then we'll get $1$, since $k \times 1$ is just $k$ again. What this means is that the inverse of $k \bmod{N}$ is just some other number $j$ from $0$ to $N-1$ such that $j\cdot k = 1 \bmod{N}$ and it's just multiplication again.
Now, the problem with this is that it's not guaranteed that there's always an inverse hanging around in $\bmod{N}$. In particular, if $k$ and $N$ share any divisors other than 1, then $k$ won't have an inverse $\bmod{N}$. This is interesting because it also tells us that if we consider integers mod a prime number $P$, then every integer from $1$ to $P-1$ has an inverse $\bmod{P}$, since $P$ doesn't share any divisors other than 1 with any of them. We call these things that have inverses units. So if we have a unit $k$, then $k^m$ is also a unit, for any integer $m$. We even have a funny function $\phi$ defined such that $\phi(n)$ is the number of units in $\bmod{n}$.
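A quick sketch of all that in Python (my own example with $N = 10$; `pow(k, -1, N)` needs Python 3.8 or later, and the helper name is made up):

```python
from math import gcd

def units_mod(n):
    """All numbers from 1 to n-1 that have a multiplicative inverse mod n."""
    return [k for k in range(1, n) if gcd(k, n) == 1]

N = 10
for k in range(1, N):
    try:
        inv = pow(k, -1, N)  # modular inverse (raises ValueError if it doesn't exist)
        print(f"{k}^-1 mod {N} = {inv}  (check: {k * inv % N})")
    except ValueError:
        print(f"{k} shares a divisor with {N}, so it has no inverse")

print("phi(10) =", len(units_mod(10)))  # 4 units: 1, 3, 7, 9
```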
Love Machine solicits help
So taking everything we've learned, we can set up a cryptosystem! The one we'll be looking at is called RSA, after the guys who invented it. We have Bob who wants to securely send a message $M$ to Alice. Alice chooses two prime numbers $p$ and $q$ and figures out $m = pq$. She also goes and figures out $\phi(m)$, which happens to be $(p-1)(q-1)$. Finally, she picks some integer $k$, a unit in $\bmod{\phi(m)}$. She lets everyone know $m$ and $k$, but she keeps $p$, $q$, and $\phi(m)$ secret.
So Bob wants to send $M$, which is just his message conveniently in number form. He makes $M$ into a number between $0$ and $m$, and if $M$ is too big, he can just break it up into chunks. Bob figures out $b = M^k \bmod{m}$ and sends $b$ over to Alice. Now, since Alice has $k$ and $\phi(m)$, she can also find $k^{-1}$ pretty easily. Once she has that, she can get the original message by figuring out $b^{k^{-1}} = (M^k)^{k^{-1}} = M \bmod{m}$, since $kk^{-1} = 1 \bmod \phi(m)$.
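Here's the whole exchange as a toy sketch, using comically small textbook primes rather than anything OZ-sized (the numbers and names are my own, purely for illustration):

```python
from math import gcd

# Alice's setup: toy primes (real RSA uses primes hundreds of digits long)
p, q = 61, 53
m = p * q                  # 3233, made public
phi = (p - 1) * (q - 1)    # 3120, kept secret
k = 17                     # public exponent; must be a unit mod phi
assert gcd(k, phi) == 1
k_inv = pow(k, -1, phi)    # Alice's private exponent, 2753

# Bob encrypts, Alice decrypts
M = 1234                         # Bob's message as a number between 0 and m
b = pow(M, k, m)                 # b = M^k mod m
recovered = pow(b, k_inv, m)     # b^(k^-1) mod m
print(b, recovered)              # recovered == 1234
```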
The interesting thing here is that all of the information is out there for someone to encrypt a message to send to Alice, but no one is able to decrypt it. Well, they're able to decrypt it if they know what $p$ and $q$ are, since once they've got that, they can get $\phi(m)$. But it turns out getting $p$ and $q$ from $m$ (which Alice just throws up on the interwebs) is really hard. And it really works for reals, because RSA is pretty widely deployed for things like keeping your credit card information safe while you send it through the tubes to Amazon.
A conveniently displayed ciphertext
Let's go back and think about units some more. Of course, there are only $N$ numbers in the integers $\bmod{N}$, so there's a point at which $k^m$ is just $1$ again and starts over. The smallest positive $m$ for which $k^m = 1 \bmod{N}$ is called the order of $k$. But why do we care about finding the order of $k$?
It turns out finding the order of elements is very, very similar to factoring an integer into primes and other related problems, like discrete logarithms. If we can find orders of elements, it won't be too hard to figure out how to factor a number. In this case, the eavesdropper wants to figure out what $p$ and $q$ are, so they'll want to factor $m$. And it turns out a lot of other public-key cryptosystems (like elliptic curves) are based on the difficulty of factoring.
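To make that connection concrete, here's a tiny brute-force sketch of mine (hopelessly slow for real key sizes) that factors 15 from the order of 7 mod 15; this is essentially the classical reduction that the quantum factoring algorithm mentioned below relies on:

```python
from math import gcd

def order(k, n):
    """Smallest positive m with k^m = 1 (mod n); k must be a unit mod n or this never terminates."""
    m, x = 1, k % n
    while x != 1:
        x = x * k % n
        m += 1
    return m

N, a = 15, 7
r = order(a, N)                        # r = 4
if r % 2 == 0:
    y = pow(a, r // 2, N)              # 7^2 mod 15 = 4
    print(gcd(y - 1, N), gcd(y + 1, N))  # 3 and 5, the factors of 15
```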
How hard could it be? Well, we could just check every possibility, which doesn't seem that bad for a number like 48, but once we get into numbers that are hundreds of digits long, that might start to suck. It turns out the fastest known algorithms for order finding take approximately $e^{O((\log N \log \log N)^{\frac{1}{2}})}$ steps. Current key lengths for RSA are at least 1024 bits, which would give us about $4.4 \times 10^{29}$ operations. Assuming three trillion operations per second, it'd take a PC about 4.7 billion years. Sure, you could just throw more powerful computers at it, but they'd just double the key size and suddenly, you'd need to do $10^{44}$ operations.
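If you want to reproduce those back-of-the-envelope numbers, here's a rough sketch (my own, not from the movie) that plugs 1024- and 2048-bit moduli into that $e^{\sqrt{\log N \log \log N}}$ estimate:

```python
import math

def classical_steps(bits):
    # very rough cost estimate: e^sqrt(ln N * ln ln N) for an N of the given bit length
    ln_n = bits * math.log(2)
    return math.exp(math.sqrt(ln_n * math.log(ln_n)))

for bits in (1024, 2048):
    steps = classical_steps(bits)
    years = steps / 3e12 / (3600 * 24 * 365)  # at three trillion operations per second
    print(f"{bits}-bit key: ~{steps:.1e} steps, ~{years:.1e} years")
# 1024-bit: ~4.4e29 steps, ~4.7e9 years; 2048-bit: ~1.2e44 steps
```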
It's a lot easier to write math than typeset it
Well, that's not entirely true. One of the breakthroughs in quantum computing was coming up with a fast algorithm for factoring. It turns out quantum order finding takes $O((\log N)^2 \log \log N \log \log \log N)$ steps, which, for a 1024-bit key is just over 60 operations. Doubling the key-size to 2048 bits only increases the number of operations by just over 20. Unfortunately (or fortunately, because we'd be pretty screwed if someone could easily break RSA right now), we haven't built any quantum computers that large yet, nor are we capable of doing so anytime soon.
tl;dr – Kenji is a quantum computer.
Posted in Anime, math | Tagged Anime, computational complexity, cryptography, math, number theory, saving the internet to save the world, summer wars, π day | CommonCrawl |
Gameplay, Attacking
For a separate multiplayer system for the Legend League, see Legend League Tournaments.
For Builder Base multiplayer mode, see Versus Battle.
Reasons For Raiding
There are many reasons why a player might raid you. One reason is to steal resources. Another is to gain trophies to either top up their trophy balance or get promoted to the next league. One final reason is to simply have something to do while waiting for an upgrade.
Raiding Mechanics
Test your skills against another player's village! Matchmaking matches you with another player based on your Trophies and Town Hall level. Because of this, you are likely to find targets at or near your Town Hall level.
Alternatively, you can enter a 'Revenge' battle by tapping the Revenge button in your Defense Log. This allows you to raid a person who has attacked you first. Beware of this when you attack higher level villages in matchmaking, as they will be able to 'Revenge' match you.
Prior to Battle
When the opposing player's village first appears, you get 30 seconds during which you can scout the enemy's defenses and plan your attack. Although you can deploy troops during this time, the battle will start immediately upon doing so (you do not get extra time by starting early). When viewing another player's village to raid, potential loot and Trophies that can be earned/lost are shown on the left side of the screen. Before the battle has started, if the village you are first paired up with is not to your liking you can press the 'Next' button to pay a small amount of gold (depending on your Town Hall level) and be shown another village to potentially raid.
The 'Next' button disappears once the battle timer has started, but if you haven't actually deployed troops or cast a spell before the timer has ended, you can tap 'End Battle' to return to your own village without penalty.
Once you have deployed a troop or cast a spell (even accidentally), the 'End Battle' button is replaced by the 'Surrender' button; pressing that and confirming your surrender will cause you to immediately lose the battle and lose Trophies. This is actually a strategy used by players wanting to drop their trophies to possibly farm resources or for other reasons. They will usually drop a hero, if they have one, and 'Surrender' immediately after.
Once you have earned at least 1 star the 'Surrender' button will change to become the 'End Battle' button, you can click it to end the battle early even before the 3 minutes are up. A good reason to use this is to save your Heroes before they lose too much health so you can use them again sooner than if you let their health reach 0.
Overall Damage
During a battle, the overall damage is calculated by taking the number of buildings destroyed and dividing that by the number of buildings on the village being attacked.
The overall damage is expressed as a percentage, rounded up to the nearest percent. For example, if a village has 75 buildings and a player destroys 28, the overall damage is 28/75 = 37.33% which is rounded up to 38%.
A star is earned for destroying 50% of buildings (see the section Victory and Defeat below), but the calculation described above effectively allows a player to earn the star by destroying just under half (i.e. more than 49%) of the buildings in some cases. More technically, if a village has $n$ buildings, where $n$ is an odd number greater than 50, destroying $\frac{n-1}{2}$ buildings will be sufficient to score 50%.
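A minimal sketch of that rounding rule (the function name is made up for illustration):

```python
import math

def destruction_percentage(destroyed, total_buildings):
    """Overall damage, rounded up to the nearest percent."""
    return math.ceil(destroyed / total_buildings * 100)

print(destruction_percentage(28, 75))  # 38
print(destruction_percentage(37, 75))  # 50 -- one star from 37 of 75 buildings
```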
Traps are not classified as buildings, so if they are triggered, they do not count towards the damage percentage. Walls are also not classified as buildings and do not count towards the damage percentage. Defensive Clan Castle troops, as well as the defending Barbarian King and Archer Queen do not count towards the damage percentage either, so they do not need to be defeated to achieve 100% destruction. However, the defending Grand Warden does count as a building and needs to be defeated for 100% destruction if the village has one.
Victory and Defeat
Trophies are awarded upon a multiplayer victory. Victory is determined by earning at least one star during a raid. There are three stars available to be earned in each battle:
One star is earned for destroying 50% of the buildings.
One star is earned for destroying the enemy Town Hall.
One star is earned for destroying 100% of the buildings. This will require you to earn the first and second star as well.
For each star that you earn, you receive one-third of the available Trophies. It is impossible to get more than one star without destroying the Town Hall. Failure to get any stars means a loss, causing you to lose trophies.
Gaining and Losing Trophies
There is often a lot of confusion surrounding Trophies, as it is often possible to lose a lot more trophies than you can win (although sometimes the opposite is true as well). The reason for this is simple... If you begin the match with more trophies than your opponent, it is presumed that your opponent is "weaker" than you (Town Hall or Experience levels are irrelevant for the purposes of this determination). If you defeat this "weaker" opponent you will receive fewer trophies than you would an "equal" opponent; losing will cost you a higher amount of trophies. The opposite is also true: If you have fewer trophies than your opponent, it is presumed that your opponent is "stronger" than you. Defeating this "stronger" opponent entitles you to more trophies than you would get by defeating an "equal" opponent, and likewise being defeated by a "stronger" opponent costs you fewer trophies.
In general the higher your Trophy count, the more difficult opponents you will encounter; both those you are matched with to attack as well as those attacking your village. Because of this, many higher level players keep an artificially low trophy count by intentionally losing battles; in this way they can both make their villages easier to defend (as they will on average be attacked by weaker opponents) as well as ensure themselves less difficult bases to attack for resources.
Before attacking, pay attention to how many Trophies you can win or lose; often this can help give you a quick indication as to how difficult the upcoming battle will be. If you see a large discrepancy in the number of trophies available to win vs. the amount available to lose, there is a large trophy difference between you and your opponent. If the number of trophies available to win is much higher than that available to lose, you are likely to encounter a difficult battle. If the number available to win is much lower than that available to lose, the battle may in fact be relatively easy. However, do not rely solely on this comparison, as trophy counts can be easily manipulated (as shown in the above paragraph).
Single Player Campaign
Main article: Single Player Campaign
Fight the goblins in the Single Player Campaign! Each level has a preset amount of loot that can only be earned once. No Trophies can be won or lost in the Single Player Campaign and it will not reduce shield durations. The loot in the maps also does not regenerate, though the buildings are all rebuilt and the traps reset each new time you look at the level.
Early in the campaign, the levels usually have no aerial defenses such as Air Defenses or Archer Towers, allowing easy completion with a single Balloon or Minion. As players progress through the campaign, the levels steadily become tougher, requiring either a higher-leveled army or a solid strategy. That being said, raiding the goblins can be quite lucrative once you progress through the hard levels. In fact, most of the later levels can reward you with over 500,000 each of Gold and Elixir, and the final level offers 2.5 million of both. Levels after "Sherbet Towers" also offer Dark Elixir, of which several thousand can be looted from each level, up to a whopping 25,000 in the final level.
It is interesting to note that while the difficulty of the Single Player Campaign increases quite rapidly as one progresses in level, the available loot rises considerably as well. Resources above 300,000 Gold and Elixir can be found after the level "Choose Wisely".
Practice Mode
Main article: Practice Mode
Practice Mode enables the player to practice their raiding skills and try out a variety of attacks, with preset armies for each level. Like the Single Player Campaign, trophies are not won or lost and no Shield is deducted for doing a Practice Mode attack. Loot is also present in Practice Mode, though it can only be earned once.
Gold and Elixir
Gold and Elixir can be stolen from four types of buildings: storages, mines/collectors, the Town Hall, and the Clan Castle (if it contains loot in its treasury).
Note that all loot, regardless of building type, is subject to the Town Hall level-based loot multiplier (discussed below). This multiplier is applied after all calculations, including the listed caps.
Storages: The percentage of Gold/Elixir that can be stolen from storages until TH6 is 20% and is capped at 200,000. At TH7 and up, the percentage that can be stolen drops by 2% at each TH level, to a minimum of 10% from TH11 onwards, and the cap increases by 50k at each TH level, to a maximum of 550,000 at TH13. The following chart shows how this works:
Gold/Elixir Storages—Percent Lootable by Town Hall Level
Town Hall Level | Percent Lootable | Cap | Storage Amount to Reach Cap
1 | 20% | 500* | 2,500
2 | 20% | 1,400* | 7,000
3 | 20% | 20,000* | 100,000
4 | 20% | 100,000* | 500,000
5-6 | 20% | 200,000** | 1,000,000
7 | 18% | 250,000 | 1,388,889
8 | 16% | 300,000 | 1,875,000
9 | 14% | 350,000 | 2,500,000
10 | 12% | 400,000 | 3,333,333
11 | 10% | 450,000 | 4,500,000
12 | 10% | 500,000 | 5,000,000
13 | 10% | 550,000 | 5,500,000
*This applies to Town Hall 5 but not Town Hall 6.
The available Gold and Elixir is split evenly between all the Storages and the Town Hall. For example, if there are 3 Gold Storages then the available Gold to be taken will be split four ways; one part is stored in the Town Hall and one part for each of the three storages, provided that none of these storages are filled.
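As a rough sketch of how that split works out, using the TH7 numbers from the table above (the function name and example amounts are just for illustration, and the loot multiplier and loot penalty discussed later are ignored):

```python
def storage_loot(stored, rate, cap, num_storages):
    """Gold/Elixir available from storages, split evenly across the storages and the Town Hall."""
    total = min(stored * rate, cap)
    # +1 because the Town Hall takes one share; assumes no storage is already full
    share = total / (num_storages + 1)
    return total, share

# a TH7 village (18% lootable, 250,000 cap) with 1,000,000 Gold and 3 Gold Storages
total, per_building = storage_loot(1_000_000, 0.18, 250_000, 3)
print(total, per_building)  # 180000.0 total, 45000.0 per storage/Town Hall
```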
Mines/Collectors: The percentage of Gold/Elixir that can be stolen from mines/collectors is 50% and is capped only by the storage capacity of the mine/collector (the Town Hall level-based loot multiplier still applies, of course).
Gold Mines/Elixir Collectors—Percent Lootable by Town Hall Level
Town Hall Level | Percent Lootable | Cap | Collector Capacity
1 | 50% | 500 | 1,000
2 | 50% | 2,500 | 5,000
3 | 50% | 30,000 | 60,000
4 | 50% | 100,000 | 200,000
10-13 | 50% | 875,000 | 1,750,000
Town Hall: There is a portion of all three resources that can be stolen inside the Town Hall. As the Town Hall acts as a storage building for all resources, the percentage of available resources is equal to the percentage of available resources from the regular storages. For example, if 20% of Gold can be stolen from the storages, 20% of the Gold held by the Town Hall can be stolen. However, note that its loot is only obtained upon its destruction; if it is only damaged partially, no loot will be yielded.
Clan Castle: The percentage of loot that can be stolen from your Clan Castle's Treasury is a flat 3%, and is capped only by the storage capacity of the Treasury itself, which is dependent on Town Hall level and Clan Perks. This 3% is unaffected by any loot penalty whatsoever.
Clan Castle Treasury—Percent Lootable by Town Hall Level (caps can only be reached in a level 10 or higher Clan)
Town Hall Level | Percent Lootable | Gold & Elixir Cap | Gold & Elixir Storage Amount to Reach Cap | Dark Elixir Cap | Dark Elixir Storage Amount to Reach Cap
3 | 3% | 18,000 | 600,000 | - | -
5 | 3% | 36,000 | 1,200,000 | - | -
7 | 3% | 72,000 | 2,400,000 | 360 | 12,000
9 | 3% | 108,000 | 3,600,000 | 540 | 18,000
10 | 3% | 126,000 | 4,200,000 | 630 | 21,000
The current maximum loot comes from raiding a TH13 with full collectors and 5.5M or more in storage. This means that the maximum calculated loot, for each resource, that can be stolen from 1 opponent is: 550,000 + (7 x 125,000) = 1,425,000. If you add in a full Treasury (from a village belonging to a level 10 or higher clan), this maximum can be as high as 1,425,000 + 180,000 = 1,605,000.
Dark Elixir
Dark Elixir can be stolen from four types of buildings: storages, drills, Town Hall and the Clan Castle (if it contains loot in its treasury).
Storages: The percentage of Dark Elixir that can be stolen from the storage until TH8 is 6% and is capped at 2,000. Starting at TH9, the percentage that can be stolen drops by 1% at each TH level (down to a minimum of 4% at TH10 and above) and the cap goes up by 500, up to a maximum of 4,500 at TH13. The following chart shows how this works:
Dark Elixir Storages—Percent Lootable by Town Hall Level
Town Hall Level | Percent Lootable | Cap | Storage Amount to Reach Cap
7-8 | 6% | 2,000 | 33,333
9 | 5% | 2,500 | 50,000
10 | 4% | 3,000 | 75,000
11 | 4% | 3,500 | 87,500
12 | 4% | 4,000 | 100,000
13 | 4% | 4,500 | 112,500
The available Dark Elixir is typically split between the Dark Elixir Storage and the Town Hall in a 4:1 ratio, which means that the Dark Elixir available from the Dark Elixir Storage itself is four times greater than that available from the Town Hall.
However, if the player has enough Dark Elixir to fill the Town Hall's own Dark Elixir storage, the ratio may be skewed such that a larger fraction of the Dark Elixir is available in the Dark Elixir Storage than in the Town Hall. In other words, at high Dark Elixir counts, the Town Hall will offer less Dark Elixir and the Dark Elixir Storage will offer more Dark Elixir. For example, a Town Hall 12 player with full Dark Elixir Storages (240k) will have 300 DE available in the Town Hall and 3,700 DE available in the Dark Elixir Storage, rather than 800 DE and 3,200 DE respectively by the above ratio.
Storages that are overfilled are treated as if they were full normally. For example, players with 250k Dark Elixir when their normal storage capacity is 200k will have the loot availability calculated as if they had 200k Dark Elixir. This usually does not play a role in loot availability, but can still have an effect, as seen below.
To determine how the available Dark Elixir is split between the Town Hall and the Dark Elixir Storage when the storage cap is reached:
Divide the amount of Dark Elixir in storages in a 4:1 ratio between the Dark Elixir Storage and the Town Hall. In other words, 20% of the DE goes into the TH and 80% into the storage. If the Town Hall's storage fills up, the excess will go into the Dark Elixir Storage. Two examples of how this is allocated are as follows:
For a TH10 with 80,000 stored DE, 20% goes into the TH (for 16,000 DE in the TH) and 80% goes into the Dark Elixir Storage (for 64,000 DE in the storage)
For a TH10 with 130,000 stored DE, 20% (or 26,000) would have gone into the TH, but the storage capacity of the Town Hall is only 20,000. Thus 20,000 would go into the TH and the remaining 110,000 would go into the storage.
Determine the proportion of Dark Elixir in the Town Hall as a fraction of the total Dark Elixir stored, and multiply by the storage cap. Determine also the proportion of Dark Elixir in the storage as a fraction of the total Dark Elixir stored, and multiply by the storage cap. Round both results to the nearest hundred to obtain the amount of Dark Elixir in the Town Hall and Dark Elixir storage respectively. To follow on the examples above:
The TH10 with 80,000 DE would have 20% of the DE in the TH and 80% of the DE in the storage. The storage cap at TH10 is 3,000, thus multiplying this cap by the fractions would yield 600 and 2,400 DE in the Town Hall and Dark Elixir Storage respectively.
The TH10 with 130,000 DE would have 15.4% of the DE in the TH and 84.6% of the DE in the storage. Multiplying these percentages by the cap yields (to one decimal place) 461.5 and 2,538.5, which is then rounded to give 500 and 2,500 DE in the Town Hall and Dark Elixir Storage respectively.
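The two-step split above can be written out compactly in code. The sketch below is an illustrative Python version of the procedure; the function name and parameters are hypothetical, the numeric inputs are taken from the worked examples above, and halves are rounded up to match the rounding behaviour discussed next.

```python
def split_available_dark_elixir(stored_de, th_capacity, loot_cap):
    """Split the lootable Dark Elixir between the Town Hall and the DE Storage.

    stored_de   -- total Dark Elixir the defender holds in storages
    th_capacity -- how much DE the Town Hall itself can hold (e.g. 20,000 at TH10)
    loot_cap    -- the storage loot cap for the defender's TH level (e.g. 3,000 at TH10)
    """
    # Step 1: 20% of the stored DE goes to the Town Hall and 80% to the DE Storage;
    # anything beyond the Town Hall's own capacity spills into the storage.
    in_th = min(0.2 * stored_de, th_capacity)
    in_storage = stored_de - in_th

    # Step 2: multiply the loot cap by each building's share of the total,
    # rounding each figure to the nearest hundred (halves round up).
    round100 = lambda x: int(x / 100 + 0.5) * 100
    th_loot = round100(loot_cap * in_th / stored_de)
    storage_loot = round100(loot_cap * in_storage / stored_de)
    return th_loot, storage_loot

print(split_available_dark_elixir(80_000, 20_000, 3_000))    # (600, 2400)
print(split_available_dark_elixir(130_000, 20_000, 3_000))   # (500, 2500)
print(split_available_dark_elixir(200_000, 20_000, 3_500))   # (400, 3200) -- the TH11 rounding quirk
```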
Typically, either there is no rounding at all, or one of the figures is rounded up while the other is rounded down. However, at very specific amounts of Dark Elixir, it is possible that both of these figures round up rather than only one, resulting in 100 more Dark Elixir available than normal.
The most common example of this rare occurrence is when a TH11 has exactly 200,000 Dark Elixir (usually from a full level 6 Dark Elixir Storage). In this particular case, with a storage cap of 3,500, and exactly 10% of the DE in the TH and 90% in the storage, the game would calculate that 350 and 3,150 DE should be available in the TH and storage respectively, but then rounds up to 400 and 3,200 DE respectively, resulting in 3,600 DE available in total, 100 more than the cap. Similar scenarios are possible at TH8, TH9 and TH13.
Drills: The percentage of Dark Elixir that can be stolen from drills is 75% and is capped only by the storage capacity of the drill.
Dark Elixir Drills — Percent Lootable by Town Hall Level

Town Hall Level | Percent Lootable | Cap | Drill Storage Amount to Reach Cap
7 | 75% | 405 | 540
10-13 | 75% | 5,400 | 7,200
The current maximum Dark Elixir loot comes from raiding a TH13 with full drills and 112,500 or more in storage. This means that the maximum calculated loot, for Dark Elixir, that can be stolen from 1 opponent is: 4,500 + (3 x 1,800) = 9,900. If you add in a full Clan Castle Treasury (again from a village belonging to a level 10 clan) the maximum can be as high as 9,900 + 900 = 10,800. If the storages offer 4,600 Dark Elixir as a result of the rounding discrepancy above, the maximum may be further increased to 10,900.
Loot Penalty
To discourage higher level players from attacking lower level players, a "loot penalty" system reduces the loot obtainable from opponents with lower level Town Halls. This penalty is applied after the above loot calculations have been carried out.
The loot penalty is applied to loot obtained from resource storages and resource collectors (including the Town Hall), but does not apply to the Clan Castle's Treasury. Loot penalty is also not applied in Clan Wars; all players that attack the same war base will earn the same potential amount of loot and war win bonus from that base, regardless of their Town Hall level.
The loot penalty is determined by considering the difference in level of the attacker and defender's Town Halls, and is shown in the table below:
Town Hall Levels — Loot Multiplier

Town Hall Level Difference | Percentage of Loot Available
Same or higher level | 100%
1 level lower | 80%
2 levels lower | 50%
4 or more levels lower | 5%
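As a minimal illustration, the multiplier lookup can be written as a small Python helper. The function name is hypothetical, only the level differences listed in the table above are covered, and the multiplier for exactly 3 levels lower is not given in this excerpt, so that case is flagged explicitly.

```python
# Loot-penalty multipliers from the table above (levels the defender is below the attacker).
PENALTY = {0: 1.00, 1: 0.80, 2: 0.50, 4: 0.05}

def penalised_loot(available_loot, attacker_th, defender_th):
    levels_below = max(0, attacker_th - defender_th)   # 0 also covers "same or higher level"
    if levels_below >= 4:
        multiplier = PENALTY[4]
    elif levels_below == 3:
        raise ValueError("multiplier for exactly 3 levels lower is not listed in this excerpt")
    else:
        multiplier = PENALTY[levels_below]
    return available_loot * multiplier

print(penalised_loot(1_425_000, 13, 11))   # defender two levels lower -> 712500.0
```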
As soon as the three-minute raid timer begins counting down, all Builders and Villagers run towards the Town Hall to hide.
Every time you get raided, you will get a notification saying Your village was raided by (Attacker's name)! (assuming you have notifications turned on). This was changed to (Attacker's name) is attacking your village! during the Christmas 2014 update, and later to Your village is being raided by (Attacker's name)!
Players from up to 200 Trophies above and below you are able to raid you, but these are not hard limits.
Following the Christmas 2014 update, it is possible to watch a live attack on your village, if you are able to log on while the attack is in progress.
Supercell increased the battle timer by 30 seconds in the 10/12/2015 update.
This update considerably increased the storage capacity of the Town Hall, and also allowed players to steal Dark Elixir from it.
This was then reverted to the original 3-minute timer in the 21/3/2016 update.
In the May 2016 update, Supercell increased the multiplayer searching timeout from 5 minutes to 30 minutes.
Bioactivity evaluations of leaf extract fractions from young barley grass and correlation with their phytochemical profiles
Mamata Panthi1 na1,
Romit Kumar Subba1,2 na1,
Bechan Raut1,
Dharma Prasad Khanal1 &
Niranjan Koirala ORCID: orcid.org/0000-0002-7777-11912
BMC Complementary Medicine and Therapies volume 20, Article number: 64 (2020)
The pressed juice of Barley Grass (BG) has become very popular among people for various assumed benefits along with many testimonies of people who have been healed from various ailments such as anemia, cancer, GI problems by consuming BG. The aim of our research was to validate the claims of its medicinal values such as chemo-protective action, high anti-oxidants, RBC membrane stabilization activity, and toxicity level.
Hexane, ethyl acetate and methanol extracts were quantitatively estimated for total phenolic content (TPC) and total flavonoid content (TFC). The same extracts were assessed for their antioxidative potential using the DPPH free radical scavenging assay, followed by the HRBC membrane stabilization method, the Brine Shrimp Lethality Assay (BSLA) and GC-MS analysis.
All the extracts showed high TPC and TFC, along with a strong correlation with the antioxidant activity of the extracts, suggesting that the phenolic and flavonoid contents of the extracts contribute to the antioxidant activity. The methanolic and ethyl acetate extracts of the plant also showed remarkable anti-inflammatory activity, with the methanolic extract having the lowest EC50. In the Brine Shrimp Lethality Assay, all extracts of BG were found to be bioactive, and the degree of lethality was concentration dependent. The GC-MS analysis of the methanolic extract of BG revealed 23 compounds which are reported to possess different biological activities.
The study reveals the strong antioxidant and RBC membrane stabilization activity of BG. The Brine Shrimp Lethality Assay found the extracts to be bioactive, suggesting them as promising candidates for plant-derived anti-tumor compounds. Further studies are needed to validate the data on cancer cell lines.
Oxidative stress is the disturbance in the balance between the production of reactive oxygen species, ROS (free radicals), and antioxidant defenses [1]. ROS might be involved as initiators and mediators in several diseases such as heart disease, endothelial dysfunction, atherosclerosis and other cardiovascular disorders, inflammation, brain degenerative impairments, diabetes and eye disease [2]. Humans are continuously exposed to free radicals produced by cigarette smoking, alcohol, radiation, or environmental toxins. A biological antioxidant has been defined as any substance that is present at low concentrations compared to an oxidizable substrate and significantly delays or prevents the oxidation of that substrate [3]. Various antioxidants have been found to possess properties such as anti-atherosclerotic, antitumor, anti-mutagenic and anti-carcinogenic activity, to name a few [4]. However, studies have reported that some of the most commonly used synthetic antioxidants, such as tert-butyl hydroxyanisole (BHA) and tert-butyl hydroxytoluene (BHT), are tumor promoters and can induce impairment in blood clotting [5]; therefore, research has been directed towards plant-derived natural antioxidants.
Inflammation is a complex process, frequently associated with pain, that involves occurrences such as increased vascular permeability, increased protein denaturation and membrane alteration. NSAIDs are widely used for their anti-inflammatory, analgesic and antipyretic activity and are among the most widely used drugs worldwide [6]. However, they are associated with an increased risk of adverse gastrointestinal, renal and cardiovascular effects [6]. Various natural compounds with promising in vitro and in vivo anti-inflammatory activities have been reported in the literature and can be used as a novel therapeutic approach for the treatment of inflammatory conditions [7].
The Brine Shrimp Lethality Bioassay (BSLB) can provide an indication of possible cytotoxic principles in plant extracts [8]. This assay has been extensively used in different studies, such as preliminary toxicity screening of plant extracts, detection of fungal toxins, plant extract toxicity, heavy metals, cyanobacteria toxins and pesticides, and cytotoxicity testing of dental materials [9]. Studies have found a very good relationship between this simple, inexpensive, bench-top assay and the antitumor potential of cytotoxic compounds [10]. Thus, the BSLB might be helpful as a preliminary screen in antitumor drug design and synthesis expeditions [10].
Barley Grass (BG) is the leaf portion of Hordeum vulgare L., also known as barley, a member of the Poaceae family. Young BG has been found to have a different nutritional content from that of the mature barley grain [11]. The variation in nutritional content of BG may depend on the origin of the plants, soil quality and harvest technique [12]. Barley Grass is rich in dietary minerals such as sodium, magnesium, iron, copper and phosphorus and vitamins such as thiamine, riboflavin, tocopherols and tocotrienols, biotin, folic acid and pantothenic acid [13]. These contents are found to be richer than those of some popular vegetables (spinach, tomato, lettuce), fruits (banana) and cow's milk [13].
In Nepal, the pressed juice of BG is very popular among residents as 'Jamara Ko Juice'. Various testimonies of people being healed of ailments such as anemia, cancer and GI problems by consuming BG can be found among the public. For drinking pressed juice, harvesting is usually performed on the 7th day. Barley Grass harvesting can be done when the leaves are 12 to 14 in. long to derive the maximum benefit from the grass [11]. Barley Grass is widely accepted as a source of antioxidants, and various compounds with antioxidant activity have been isolated from young barley [14]. Various human and animal studies have reported its beneficial effects, such as antiulcer, antioxidant, hypolipidemic, antidepressant, antidiabetic and laxative effects [15,16,17,18,19]. Based on traditional ethnomedicine and the existing literature, BG maximizes the chance of providing novel compounds with promising cytotoxic and antioxidant activities. The present study aimed to evaluate the antioxidant activity, RBC membrane stabilization activity and lethality of BG and to evaluate the total phenolic contents of BG.
Gallic acid (GA), ascorbic acid (AA), DPPH and quercetin were purchased from Hi-Media Lab (Mumbai, India). FC reagent and aluminum chloride (AlCl3) were purchased from Thermo Fisher Scientific India Pvt. Ltd. (Mumbai, India). Reference standard Diclofenac was obtained from Lomus Pharmaceuticals Pvt. Ltd. (Kathmandu, Nepal). All other chemicals were of standard analytical grade.
Plant materials
The barley seeds were procured from the local market and were sown in soil from local nursery with daily watering. The Barley Grass were harvested on 7th day of sowing at the month of July. The samples were authenticated by Ganga Datt Bhatt, Research Officer, National Herbarium and Plant Laboratories (NHPL) (Godawari, Lalitpur, Nepal) Voucher number:217. The voucher specimen of this material has been deposited in National Herbarium and Plant Laboratories (NHPL) (Godawari, Lalitpur, Nepal).
Preparation of the extracts
The harvested BG were washed well using distilled water and shade dried for 21 days before grinding to fine powder. Three hundred grams of fine powder was subjected to successive maceration starting from hexane to ethyl acetate to methanol, 500 ml each for 48 h at room temperature (27 ± 1 °C). The extracts were filtered using a Buckner funnel and Whatman No. 1 filter paper. These extracts were dried in a rotary evaporator under reduced pressure until dryness and stored at 4 °C, protected from light and humidity for further analysis.
Determination of Total phenolic content
The total phenolic content (TPC) of the extracts was estimated by Folin-Ciocalteu reagent (FCR) method [20] with slight modifications. Briefly, 1 ml of various extracts (1 mg/ml) was mixed with FCR (5 ml, 1:10 v/v DW) and aq. sodium carbonate (4 ml, 7%) solution. The mixture was then incubated for 30 min at 40 °C in a water bath before measuring the absorbance at 760 nm using Microprocessor UV-Vis spectrophotometer-2371 (Electronics India, Himachal Pradesh, India). The phenolic contents were calculated using a standard curve for gallic acid (GA) (10-200 μg/ml), and the result was expressed as mg GAE per gram dry weight of fraction (mg GAE/g). All measurements were performed in triplicates.
Determination of Total flavonoid content
The total flavonoid content (TFC) was determined by the AlCl3 colorimetric method [21]. An aliquot of 1 ml of each extract in methanol was added to a 10 ml volumetric flask containing 4 ml of distilled water. At zero time, 0.3 ml of 5% sodium nitrite was added to the flask. After 5 min, 3 ml of 10% AlCl3 was added to the flask. At 6 min, 2 ml of 1 M sodium hydroxide was added to the mixture. Immediately, the total volume of the mixture was made up to 10 ml by the addition of 2.4 ml distilled water and mixed thoroughly. The absorbance of the pink colored mixture was determined at 510 nm against a blank using a Microprocessor UV-Vis spectrophotometer-2371 (Electronics India, Himachal Pradesh, India). The flavonoid contents were calculated using a calibration curve prepared with quercetin standards (10 to 100 μg/ml), and the result was expressed as mg of quercetin equivalent/g of extract (mg QE/g of extract).
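Both the TPC and TFC assays convert absorbance readings into equivalents through a linear standard curve. The following Python sketch shows the general calculation for the gallic acid curve; the calibration readings are hypothetical placeholders (the measured values are reported in the supplementary tables), and the same approach applies to the quercetin curve used for TFC.

```python
import numpy as np

# Hypothetical gallic-acid calibration readings (concentration in ug/ml vs absorbance at 760 nm)
std_conc = np.array([10, 50, 100, 150, 200])
std_abs = np.array([0.08, 0.35, 0.68, 1.01, 1.33])

slope, intercept = np.polyfit(std_conc, std_abs, 1)   # linear calibration curve

def mg_gae_per_g(sample_abs, extract_conc_mg_per_ml=1.0):
    """Convert a sample absorbance into mg gallic acid equivalents per g of dry extract."""
    gae_ug_per_ml = (sample_abs - intercept) / slope   # ug GAE per ml of assayed solution
    # ug GAE per mg of extract is numerically equal to mg GAE per g of extract
    return gae_ug_per_ml / extract_conc_mg_per_ml

print(round(mg_gae_per_g(0.55), 1))
```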
Determination of anti-oxidant activity
The DPPH scavenging activity of the different fractions was evaluated according to the method of Brand-Williams et al. [22]. Briefly, 1 mL of 0.1 mM DPPH solution in methanol was mixed with 1 mL of each extract at varying concentrations (5, 10, 15, 20, 25 μg/ml). A corresponding blank sample was prepared, and ascorbic acid (AA) was used as the reference standard. A mixture of 1 mL of methanol and 1 mL of DPPH solution was used as the control. The mixture was shaken well and incubated for 30 min in the dark. The reaction was carried out in triplicate, and the decrease in absorbance was measured at 517 nm after incubation using a Microprocessor UV-Vis spectrophotometer-2371 (Electronics India, Himachal Pradesh, India). The scavenging activity was expressed as IC50 (μg/mL). The % scavenging was calculated using the formula:
$$ \%\text{Scavenging} = \frac{A_0 - A_1}{A_0} \times 100 $$
where A0 is the absorbance of the control solution and A1 is the absorbance of the extract/standard.
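For illustration, the scavenging formula and the IC50 read-out can be sketched in Python as follows. The absorbance values are hypothetical placeholders (the measured values are in Table S3), and the IC50 here is read from a simple linear fit of % scavenging against concentration.

```python
import numpy as np

def percent_scavenging(a_control, a_sample):
    return (a_control - a_sample) / a_control * 100

# Hypothetical absorbance readings at 517 nm for the five test concentrations
conc = np.array([5, 10, 15, 20, 25])            # ug/ml
a0 = 0.90                                        # control (DPPH + solvent)
a1 = np.array([0.82, 0.74, 0.66, 0.57, 0.50])    # extract + DPPH

scav = percent_scavenging(a0, a1)
slope, intercept = np.polyfit(conc, scav, 1)     # % scavenging vs concentration
ic50 = (50 - intercept) / slope                  # concentration giving 50% scavenging
print(scav.round(1), round(ic50, 1))
```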
Determination of RBC membrane stabilization activity
The RBC membrane stabilization activity of the three extracts of BG was evaluated using the in vitro human red blood cell stability method. The membrane stabilizing activity of the samples was assessed according to the method described by Shinde et al. [23] with slight modifications.
The assay mixture contained 1 ml phosphate buffer (pH 7.4, 0.15 M), 2 ml hypotonic saline (0.36%) and 0.5 ml HRBC suspension (10% v/v) with 0.5 ml of plant extract or the standard drug diclofenac sodium at various concentrations (10, 20, 40, 80, 100 μg/ml). The control sample consisted of 0.5 mL of RBCs mixed with hypotonic-buffered saline alone. The mixture was incubated at 37 °C for 30 min and centrifuged at 3000 RCF. The hemoglobin content in the suspension was estimated using a Microprocessor UV-Vis spectrophotometer-2371 (Electronics India, Himachal Pradesh, India).
$$ \%\text{Protection} = \left(1 - \frac{\text{OD of test}}{\text{OD of control}}\right) \times 100 $$
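A minimal Python sketch of the protection calculation and the EC50 read-out is shown below; the OD readings are hypothetical placeholders (the measured values are in Table S4).

```python
import numpy as np

def percent_protection(od_test, od_control):
    return (1 - od_test / od_control) * 100

# Hypothetical OD readings for the five test concentrations
conc = np.array([10, 20, 40, 80, 100])              # ug/ml
od_control = 0.80
od_test = np.array([0.70, 0.61, 0.48, 0.33, 0.28])

protection = percent_protection(od_test, od_control)
slope, intercept = np.polyfit(conc, protection, 1)
ec50 = (50 - intercept) / slope                      # concentration giving 50% protection
print(protection.round(1), round(ec50, 1))
```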
Determination of toxicity
The toxic activity of the plant was evaluated using the Brine shrimp lethality bioassay (BSLA) method [8], in which 6 graded doses (viz. 1600 μg/mL, 800 μg/mL, 400 μg/mL, 200 μg/mL, 100 μg/mL, and 50 μg/mL) were used. Brine shrimp (Artemia salina Leach) nauplii were used as test organisms. For hatching, eggs were kept in artificial sea salt solution with a constant oxygen supply for 48 h. The mature nauplii were then used in the experiment. DMSO was used as a solvent and also as a negative control. Vincristine sulfate was used as a reference standard. The numbers of survivors were counted after 24 h. Larvae were considered dead if they did not exhibit any internal or external movement during several seconds of observation. The larvae did not receive food. To ensure that the mortality observed in the bioassay could be attributed to bioactive compounds and not to starvation, we compared the dead larvae in each treatment to the dead larvae in the control.
The median lethal concentration (LC50) of the test samples were calculated using the Probit analysis method described by Finney [24], as the measure of toxicity of the plant extract.
$$ \text{Mortality}\,\% = \frac{\text{No. of dead larvae}}{\text{Total no. of larvae}} \times 100 $$
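The probit estimate of LC50 can be sketched as follows. The mortality counts are hypothetical placeholders (the observed counts are in Table S5); mortalities are converted to probits (z-score + 5), regressed against log10 concentration, and the LC50 is the concentration at which the fitted probit equals 5.

```python
import numpy as np
from scipy.stats import norm, linregress

# Hypothetical mortality counts for the six graded doses (10 nauplii per vial)
conc = np.array([50, 100, 200, 400, 800, 1600])   # ug/ml
dead = np.array([1, 2, 3, 5, 7, 9])
total = 10

mortality = dead / total                          # avoid 0% or 100%, which give infinite probits
probit = norm.ppf(mortality) + 5                  # classical probit transform
log_conc = np.log10(conc)

fit = linregress(log_conc, probit)
lc50 = 10 ** ((5 - fit.intercept) / fit.slope)    # probit of 5 corresponds to 50% mortality
print(round(lc50, 1))
```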
Gas chromatography-mass spectroscopy analysis
GC-MS analysis was performed at the Nepal Academy of Science & Technology (Khumaltar, Kathmandu, Nepal). For GC-MS analysis of the plant extract, a GC-MS QP2010 (Shimadzu, Kyoto, Japan) equipped with an RTx-5MS fused silica capillary column (30 m length × 0.25 mm diameter × 0.25 μm film thickness) was used. Helium (> 99.99% purity) at a linear velocity of 36.2 cm/sec was employed as the carrier gas. The system was programmed with a total flow rate of 3.9 ml/min, a column flow of 0.95 ml/min and a purge flow of 3.0 ml/min. The volume of injected sample was 1 μl. The injector was set in splitless mode at a temperature of 280 °C. The oven temperature started at 100 °C and increased to 250 °C at 15 °C/min with a holding time of 1 min, afterwards increased to 280 °C at 30 °C/min with a holding time of 1 min, and again increased from 280 °C to 300 °C at 15 °C/min with a holding time of 11 min.
The ion source temperature and interface temperature were set to 200 °C and 280 °C respectively with solvent cut time of 3.5 min. Total run time was 20 min with mass range scan of 40 to 500 m/z. Identification of compounds was performed by comparing their mass spectra with data from NIST08 mass spectral library.
Each sample analysis was performed in triplicate. All results presented are means (± SEM) of at least three independent experiments. Statistical analysis (ANOVA with a significance level set at p < 0.05 and a post-hoc Tukey procedure) was carried out with SPSS 16 for Windows. Correlations between the total phenolic contents, flavonoid contents and antioxidant capacities were determined using the Pearson correlation.
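As an illustration of the correlation analysis, the Pearson coefficients can be computed as shown below. The IC50 values are those reported in Table 2, while the intermediate TPC and TFC values for the ethyl acetate extract are hypothetical placeholders, since only the range is quoted in the text.

```python
from scipy.stats import pearsonr

# IC50 values (ug/ml) for the hexane, ethyl acetate and methanol extracts, from Table 2
ic50 = [659.97, 455.24, 104.90]
# TPC (mg GAE/g) and TFC (mg QE/g): only the extremes are quoted in the text,
# so the middle (ethyl acetate) values here are placeholders for illustration
tpc = [24.55, 50.0, 82.56]
tfc = [18.94, 30.0, 45.76]

r_tpc, p_tpc = pearsonr(tpc, ic50)
r_tfc, p_tfc = pearsonr(tfc, ic50)
print(f"TPC vs IC50: r = {r_tpc:.3f}   TFC vs IC50: r = {r_tfc:.3f}")  # both strongly negative
```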
Total phenolic content determination
The total phenolic contents of the three extracts determined by the FCR method were expressed as mg GAE/g dried extract (Fig. 1). The phenolic content of the extracts ranged from 24.55 to 82.56 mg GAE/g dried extract, representing an approximately three-fold variation (Table 1). The methanolic extract had significantly higher phenolic content than the ethyl acetate and hexane extracts.
TPC of various extracts of BG
Table 1 TFC and TPC of various extracts of BG
Total flavonoid content determination
The total flavonoid contents of the three extracts of barley grass are given in Fig. 2. The total flavonoid contents, reported as QE, ranged from 18.94 to 45.76 mg QE/g dried extract (Table 1). The methanolic extract had the significantly highest flavonoid content, followed by the ethyl acetate and hexane extracts.
TFC of various extracts of BG
Anti-oxidant activity determination
The antioxidant potential of all extracts was assessed by the DPPH free radical scavenging assay. Radical scavenging is one of the mechanisms of antioxidant activity. The results are expressed in terms of IC50 and shown in Table 2. A lower IC50 represents a higher scavenging ability. The IC50 of the methanolic extract (IC50 = 104.9 μg/ml) was found to be significantly lower than those of the ethyl acetate (455.24 μg/ml) and hexane (659.97 μg/ml) extracts. However, the activity of all extracts was found to be lower than that of the standard, AA (22.58 μg/ml) (Fig. 3).
Table 2 IC50 Values of different extracts and ascorbic acid
IC50 values of various extracts and ascorbic acid
Correlation between TPC, TFC and anti-oxidant activity of the extracts
There was a strong correlation between total flavonoid content (TFC) and DPPH radical scavenging activity (R = −0.936). Similarly, the correlation between total phenolic content (TPC) and DPPH radical scavenging activity (R = −0.795) was also strong.
RBC membrane stabilization activity determination
Membrane stabilizing activity was assayed to evaluate the inhibition of hypotonic solution induced lysis of the human erythrocyte membrane. The extracts were effective in inhibiting the hypotonicity induced hemolysis at different concentrations. This provides evidence for membrane stabilization as a possible mechanism of their anti-inflammatory effect. The EC50 values were found to be in the order Hexane > Ethyl acetate > Methanol > Diclofenac (Fig. 4; Table 3). Significant differences (p < 0.005) were found between the % protection values of the different extracts.
EC50 values of various extracts and standard (Diclofenac)
Table 3 EC50 values of different extracts and diclofenac
All the extracts were subjected to the Brine Shrimp lethality bioassay for possible toxic action. In this study, the methanol extract was found to be the most toxic to Brine Shrimp nauplii, with an LC50 of 266.49 μg/ml, whereas the anticancer drug vincristine sulphate showed an LC50 value of 1.707 μg/ml (Table 4). The order of cytotoxic potential of the test samples was as follows: Vincristine sulphate > Methanol > Hexane > Ethyl acetate.
Table 4 LC50 of the different extracts Brine shrimp lethality bioassay
The GC-MS analysis of phytoconstituents in methanolic extract of barley grass revealed the presence of twenty-three major phytoconstituents (Fig. 5; Table 5). The major phytocomponents reported are Indolizine (21.78%), Octadecyl trifluoroacetate (15.85%), Palmitic acid (8.15%),1-Hexadecyne (6.98%), 1H-Indole,5-methyl- (4.46%), 9,12,15-Octadecatrienoic acid (1.64%), Phytol (1.61%) and Squalene (0.82%) (Figure S1).
GCMS chromatogram of methanolic extracts of BG
Table 5 Composition of methanolic extract of BG
Phenolic compounds are a group of chemical compounds that are widely distributed in nature. Phenolic compounds are nutritionally important, and interest in these compounds is increasing because of their various bioactivities such as antioxidant, anti-aging, anti-inflammatory and anti-proliferative activities [25]. We found methanol to be significantly more efficient at extracting polyphenolic compounds from BG than ethyl acetate and hexane. These findings support the higher solubility of phenols in polar solvents, which provides higher concentrations of these compounds in extracts obtained using polar solvents [26]. Different phenolic compounds including flavones (e.g. major leaf antioxidants, such as saponarin, lutonarin, and 2-O-glucosylvitexin), leucoanthocyanidins, catechins, and coumarins have been found in young barley extracts [27]. The TPC of BG juice was significantly higher than those of the wheatgrass and rice juices reported by Wangcharoen et al. [28]. However, the phenolic content of BG can be affected by different factors such as light quality, cultivar and harvesting time [29, 30].
Flavonoids are some of the most common phenolics, widely distributed in plant tissues. Reviews on flavonoids have identified them as possible cancer-preventive agents [31]. Quercetin, a flavonoid, can be considered the prototype of a naturally occurring chemo-preventive agent [32]. In this study, the total flavonoid contents of the different organic crude plant extracts were determined as quercetin equivalents by a modified aluminum chloride colorimetric method [21]. The methanolic extract was found to have a significantly higher flavonoid content than the ethyl acetate and hexane extracts.
The antioxidant activity was evaluated by the capacity of the antioxidant compounds to reduce the DPPH radical, as indicated by the decrease in its absorbance at 517 nm until the reaction reached a plateau. Significant differences (p < 0.0383) were obtained between the antioxidant activities of the different extracts of BG. The methanolic extract of BG had the lowest IC50 value and thus the highest antioxidant activity, followed by ethyl acetate and hexane. The IC50 value of the methanolic extract was found to be 104.41 μg/ml, which is similar to the IC50 found by Nepal et al. for an 80% methanolic extract [33]. The differences in antioxidant activity between the various extracts could be due to differences in the total amounts of phenolics and flavonoids, as phenolics and flavonoids are reported to have antioxidant activity [34, 35]. Pearson correlation analysis was used to determine the relationship between these parameters. There was a strong correlation between TFC and DPPH radical scavenging activity (R = −0.936), and the correlation between TPC and DPPH radical scavenging activity (R = −0.795) was also found to be high, suggesting that the phenolics and flavonoids might have contributed to the antioxidant activity of BG. The correlation was negative because an increase in TPC and TFC caused an increase in antioxidant activity, which was reflected by a lower IC50 in the DPPH scavenging assay. Previous studies have also shown that the total phenolic contents of culinary plants were significantly correlated (p < 0.05) with their antioxidant activities [36].
In the RBC membrane stabilization activity test, all extracts were effective in inhibiting hypotonicity induced hemolysis at different concentrations. The methanolic extract had a lower EC50 than the ethyl acetate and hexane extracts. The RBC membrane stabilization activity test can be related to the anti-inflammatory activity of BG. To our knowledge, this is the first reported HRBC membrane stabilization study on BG. The GC-MS analysis of the methanolic extract reported several phytoconstituents with anti-inflammatory activity, such as Indolizine [37], 9,12,15-Octadecatrienoic acid [38], Phytol [39] and Squalene [40]. The presence of such compounds could be the reason for the activity of the extracts.
The GC-MS analysis of the methanolic extract of BG revealed 23 compounds. These compounds are reported to possess different activities. For example, indolizine has anti-inflammatory properties [37]. Phytol is a diterpene which is reported to have anti-inflammatory and cancer preventive properties [39]. Fatty acids like 13-docosenoic acid and 9,12,15-Octadecatrienoic acid are reported to be present in BG. They have cancer preventive, nematicide, anti-arthritic, anti-androgenic, anti-inflammatory and hypocholesterolemic properties [38]. Cyclotetracosane has anti-diabetic or alpha amylase activity [41]. Squalene possesses anti-bacterial, anti-oxidant, cancer preventive, anti-tumor and lipoxygenase inhibitory properties [40]. Hexadecen-1-ol, trans-9 possesses anti-oxidant and anti-tumor properties [42].
The degree of lethality shown by BG was found to be directly proportional to the concentration of the extracts, ranging from the lowest concentration (50 μg/ml) to the highest concentration (1600 μg/ml). This concentration-dependent increase in the percent mortality of Brine Shrimp nauplii produced by BG may indicate the presence of cytotoxic principles in these extracts.
The methanol extract had the lowest LC50 of 266.49 μg/ml, followed by hexane (290.72 μg/ml) and ethyl acetate (367.91 μg/ml). In the toxicity evaluation of plant extracts by the Brine shrimp lethality bioassay, LC50 values lower than 1000 μg/ml are considered bioactive [8]. Thus, all extracts of BG are found to be bioactive. The brine shrimp assay is significantly correlated with the in vitro growth inhibition of human solid tumor cell lines demonstrated by the National Cancer Institute (NCI, USA), which shows the value of this bioassay as a pre-screening tool for antitumor drug research [43]. Therefore, these extracts can be regarded as promising candidates for plant-derived anti-tumor compounds. A study on a barley grass supplement named Herb-All Barley Powder found the LD50 to be 448.42 ppm in a similar setting [44].
This study showed the importance of BG and its possible health benefits. Barley Grass could be considered a functional drink with antioxidant potential because of its higher phenolic and flavonoid contents. There was a strong correlation between the TFC, TPC and antioxidant activity of the extracts, which suggests that the flavonoids and phenolics contribute to the antioxidant activity of these extracts. Because of the presence of anti-inflammatory compounds and the significant RBC membrane stabilization activity, BG can also be regarded as a functional drink with anti-inflammatory potential. All extracts of BG showed significant bioactivity towards brine shrimp, an assay that correlates well with tumor cell lines, suggesting that these extracts are promising candidates for plant-derived anti-tumor compounds. Thus, further studies are needed to validate the data on cancer cell lines.
AlCl3: Aluminum chloride
BSLA: Brine shrimp lethality assay
DMSO: Dimethyl sulfoxide
DPPH: 1,1-diphenyl-2-picrylhydrazyl
FCR: Folin-Ciocalteu reagent
GAE: Gallic acid equivalents
GC-MS: Gas chromatography-mass spectroscopy
HRBC: Human red blood cell
QE: Quercetin equivalent
TFC: Total flavonoid content
TPC: Total phenolic content
Betteridge DJ. What is oxidative stress? Metabolism. 2000;49(2, Supplement 1):3–8.
Kohen R, Nyska A. Invited review: oxidation of biological systems: oxidative stress phenomena, Antioxidants, Redox Reactions, and Methods for Their Quantification. Toxicol Pathol. 2002;30(6):620–50.
Halliwell B, Gutteridge JMC. Free Radicals in Biology and Medicine. 3rd ed. Oxford: Oxford University Press; 1999.
Uttara B, et al. Oxidative stress and neurodegenerative diseases: a review of upstream and downstream antioxidant therapeutic options. Curr Neuropharmacol. 2009;7(1):65–74.
Kahl R, Kappus H. Toxicology of the synthetic antioxidants BHA and BHT in comparison with the natural antioxidant vitamin E. Z Lebensm Unters Forsch. 1993;196(4):329–38.
Bacchi S, et al. Clinical pharmacology of non-steroidal anti-inflammatory drugs: a review. Antiinflamm Antiallergy Agents Med Chem. 2012;11(1):52–64.
Maione F, et al. Medicinal plants with anti-inflammatory activities. Nat Prod Res. 2016;30(12):1343–52.
Meyer BN, et al. Brine shrimp: a convenient general bioassay for active plant constituents. Planta Med. 1982;45(1):31–4.
Carballo JL, et al. A comparison between two brine shrimp assays to detect in vitrocytotoxicity in marine natural products. BMC Biotechnol. 2002;2(1):17.
Nazir S, et al. Brine shrimp lethality assay 'an effective prescreen': microwave-assisted synthesis, BSL toxicity and 3DQSAR studies-based designing, docking and antitumor evaluation of potent chalcones. Pharm Biol. 2013;51(9):1091–103.
Hagiwara Y, Hagiwara H, UH. Physiologically active substances in young green barley leaf extract. Nippon Shokuhin Kagaku Kogaku Kaishi. 2001;48(10):712–25.
Droushiotis DN. The effect of variety and harvesting stage on forage production of barley in a low-rainfall environment. J Agric Sci. 2009;102(2):289–93.
Hagiwara Y, Cichoke A. Barley leaves extract for everlasting health. Green Foods Corp Japan Pharm Dev. 1998;7(10):15.
Osawa T, et al. A novel antioxidant isolated from young green barley leaves. J Agric Food Chem. 1992;40(7):1135–8.
Ohtake H, et al. Studies on the constituents of green juice from young barley leaves. Effect on dietarily induced hypercholesterolemia in rats. Yakugaku Zasshi. 1985;105(11):1052–7.
Yu YM, et al. Antioxidative and hypolipidemic effects of barley leaf essence in a rabbit model of atherosclerosis. Jpn J Pharmacol. 2002;89(2):142–8.
Yamaura K, et al. Antidepressant-like effects of young green barley leaf (Hordeum vulgare L.) in the mouse forced swimming test. Pharm Res. 2012;4(1):22–6.
Yu YM, et al. Effects of young barley leaf extract and antioxidative vitamins on LDL oxidation and free radical scavenging activities in type 2 diabetes. Diabetes Metab. 2002;28(2):107–14.
Ikeguchi M, et al. Effects of young barley leaf powder on gastrointestinal functions in rats and its efficacy-related physicochemical properties. Evid Based Complement Alternat Med. 2014;2014:974840.
Singleton VL, Rossi JA. Colorimetry of Total Phenolics with Phosphomolybdic-Phosphotungstic acid reagents. Am J Enol Vitic. 1965;16(3):144–58.
Zhishen J, Mengcheng T, Jianming W. The determination of flavonoid contents in mulberry and their scavenging effects on superoxide radicals. Food Chem. 1999;64(4):555–9.
Brand-Williams W, Cuvelier ME, Berset C. Use of a free radical method to evaluate antioxidant activity. LWT Food Sci Technol. 1995;28(1):25–30.
Shinde U, et al. Membrane stabilizing activity—a possible mechanism of action for the anti-inflammatory activity of Cedrus deodara wood oil. Fitoterapia. 1999;70(3):251–7.
Finney DJ, Tattersfield F. Probit analysis. Cambridge: Cambridge University Press; 1952.
Koirala N, et al. Metabolic engineering of Escherichia coli for the production of isoflavonoid-4'-O-methoxides and their biological activities. Biotechnol Appl Biochem. 2019;66(4):484–93.
Haminiuk CW, et al. Extraction and quantification of phenolic acids and flavonols from Eugenia pyriformis using different solvents. J Food Sci Technol. 2014;51(10):2862–6.
Czerwonka A, et al. Evaluation of anticancer activity of water and juice extracts of young Hordeum vulgare in human cancer cell lines HT-29 and A549. Ann Agric Environ Med. 2017;24(2):345–9.
Wangcharoen W, Phimphilai S. Chlorophyll and total phenolic contents, antioxidant activities and consumer acceptance test of processed grass drinks. J Food Sci Technol. 2016;53(12):4135–40.
Urbonavičiūtė A, et al. The effect of light quality on the antioxidative properties of green barely leaves. Sodininkystė ir Darzininkystė. 2009;28:153–61.
Park MJ, Seo WD, Kang Y-H. The antioxidant properties of four Korean barley cultivars at different harvest times and profiling of major metabolites. J Agric Sci. 2015;7(10):94.
Koirala N, Thuan NH, Ghimire GP, Thang DV, Sohng JK. Methylation of flavonoids: chemical structures, bioactivities, progress and perspectives for biotechnological production. Enzym Microb Technol. 2016;86:103–16.
Russo M, et al. The flavonoid quercetin in disease prevention and therapy: facts and fancies. Biochem Pharmacol. 2012;83(1):6–15.
Nepal P, et al. Comparative Antioxidant, Antimicrobial and Phytochemical Assesments of Leaves of Desmostachya bipinnata L. Stapf, Hordeum vulgare L. and Drepanostachyum falcatum (Nees) Keng f. Nepal J Biotechnol. 2018;6(1):1–10.
Shahidi F, Ambigaipalan P. Phenolics and polyphenolics in foods, beverages and spices: antioxidant activity and health effects–a review. J Funct Foods. 2015;18:820–97.
Pietta P-G. Flavonoids as antioxidants. J Nat Prod. 2000;63(7):1035–42.
Wangcharoen W, Morasuk W. Antioxidant capacity and phenolic content of some Thai culinary plants. Maejo Int J Sci Technol. 2007;1(2):100–6.
Shrivastava SK, et al. Design, synthesis, and biological evaluation of some novel indolizine derivatives as dual cyclooxygenase and lipoxygenase inhibitor for anti-inflammatory activity. Bioorg Med Chem. 2017;25(16):4424–32.
Erdinest N, et al. Anti-inflammatory effects of alpha linolenic acid on human corneal epithelial cells. Invest Ophthalmol Vis Sci. 2012;53(8):4396–406.
Gnanavel V, Saral AM. GC-MS analysis of petroleum ether and ethanol leaf extracts from Abrus precatorius Linn; 2013.
Agnel R, Mohan V. GC–MS analyses of bioactive compounds present in the whole plant of Andrographis echioides (l) nees. Eur J Biomed Pharm Sci. 2014;1(3):443–52.
Tundis R, et al. Studies on the potential antioxidant properties of Senecio stabianus Lacaita (Asteraceae) and its inhibitory activity against carbohydrate-hydrolysing enzymes. Nat Prod Res. 2012;26(5):393–404.
Huang F-C, et al. Substrate promiscuity of RdCCD1, a carotenoid cleavage oxygenase from Rosa damascena. Phytochemistry. 2009;70(4):457–64.
Anderson JE, et al. A blind comparison of simple bench-top bioassays and human tumour cell cytotoxicities as antitumor prescreens. Phytochem Anal. 1991;2(3):107–11.
Zaman MAH, et al. Antimitotic activity and cytotoxicity assessment of barley grass food supplement. Int J Humaniti Soc Sci. 2018;10(2):11–21.
The authors are thankful to the management of Manmohan Memorial Institute of Health Sciences, Kathmandu and Dr. Koirala Research Institute for Biotechnology and Biodiversity, Kathmandu, Nepal for rendering necessary facilities to complete this work.
Manmohan Memorial Institute of Health Sciences, Kathmandu, Nepal in associations with Dr. Koirala Research Institute for Biotechnology and Biodiversity, Kathmandu, Nepal funded this research by providing the facilities and chemicals to carry out this study.
Mamata Panthi and Romit Kumar Subba contributed equally to this work.
Department of Pharmacy, Manmohan Institute of Health Sciences, Tribhuvan University, Kathmandu, Nepal
Mamata Panthi, Romit Kumar Subba, Bechan Raut & Dharma Prasad Khanal
Department of Natural Products Research, Dr. Koirala Research Institute for Biotechnology and Biodiversity, Kathmandu, Nepal
Romit Kumar Subba & Niranjan Koirala
Mamata Panthi
Romit Kumar Subba
Bechan Raut
Dharma Prasad Khanal
Niranjan Koirala
MP and RKS made significant contribution to acquisition of data, analysis, drafting of the manuscript. BR and DPK has made substantial contribution to conception, design and interpretation of data. NK supervised the research works, participated in revising, editing and the manuscript submission. All authors read and approved the final manuscript.
Correspondence to Niranjan Koirala.
Additional file 1 : Figure S1.
Chromatogram of various compounds in methanolic extract of BG based on GC-MS profile of Fig. 6. Table S1. Absorbance values of extracts in TPC determination. Table S2. Absorbance values of extracts in TFC determination. Table S3. Absorbance values of extracts and ascorbic acid in anti-oxidant activity determination. Table S4. Absorbance values of extracts and diclofenac in anti-Inflammatory activity determination. Table S5. Dead brine shrimp counts of extracts in Brine shrimp lethality assay.
Panthi, M., Subba, R.K., Raut, B. et al. Bioactivity evaluations of leaf extract fractions from young barley grass and correlation with their phytochemical profiles. BMC Complement Med Ther 20, 64 (2020). https://doi.org/10.1186/s12906-020-2862-4
Total phenolic content
Total flavonoid content
Anti-oxidant activity
RBC membrane stabilization activity
Brine shrimp | CommonCrawl |
Gigantic electric-field-induced second harmonic generation from an organic conjugated polymer enhanced by a band-edge effect
Shumei Chen1,
King Fai Li2,
Guixin Li ORCID: orcid.org/0000-0001-9689-87052,
Kok Wai Cheah3 &
Shuang Zhang1
Light: Science & Applications volume 8, Article number: 17 (2019)
Electric-field-induced second harmonic generation (EFISH), a third-order nonlinear process, arises from the interaction between the electric field of an external bias and that of two incident photons. EFISH can be used to dynamically control the nonlinear optical response of materials and is therefore promising for active nonlinear devices. However, it has been challenging to achieve a strong modulation with EFISH in conventional nonlinear materials. Here, we report a large tunability of an EFISH signal from a subwavelength-thick polymer film sandwiched between a transparent electrode and a metallic mirror. By exploiting the band-edge-enhanced third-order nonlinear susceptibility of the organic conjugated polymer, we successfully demonstrate a gigantic EFISH effect with a modulation ratio up to 422% V−1 at a pumping wavelength of 840 nm. The band-edge-enhanced EFISH opens new avenues for modulating the intensity of SHG signals and for controlling nonlinear electro-optic interactions in nanophotonic devices.
In nonlinear optics, it is well-known that the frequency conversion processes depend on both the chemical composition of the material and the spatial symmetry of the optical crystal1,2. Symmetry is especially important for second-order nonlinear processes. For example, the second order susceptibility χ(2) is forbidden in centro-symmetric materials in the electric dipole approximation. Therefore, second harmonic generation (SHG) has been widely explored in natural crystals with broken inversion symmetry. Although the inversion symmetry of conventional optical crystals can be broken by introducing a stressor layer3, the modulation depth of the nonlinear optical susceptibility of the hybrid system is very limited. With artificial photonic structures, such as liquid crystals, photonic crystals, metamaterials and metasurfaces4,5,6, both local and global symmetries can be readily engineered to boost the SHG efficiency through quasi-phase matching7,8,9,10, plasmonic and magnetic resonances11,12,13,14,15, and continuous control of nonlinearity phase16.
The dynamic control of nonlinear optical signals may have important applications in optical modulation and switching. Various switching technologies based on Kerr or free-carrier nonlinearities in semiconductor materials have been developed using all-optical control schemes. Alternatively, the control of nonlinear optical signals can be realized using electro-optic interactions. For example, electric-field-induced SHG (EFISH), which was proposed in the early 1960s, provides an alternative route for designing nonlinear optical modulators [17]. In the EFISH process, an external static electric field can be mixed with the fundamental wave (FW) to produce SHG in nonlinear optical materials with large third-order susceptibilities. Because the symmetry of nonlinear optical materials has fewer restrictions in third-order processes compared to second-order processes, the EFISH process has been extensively exploited in various media, such as optical crystals [17, 18], electrolytic solutions [19], metal–oxide–semiconductors [20], organic devices [21], silicon waveguides [22], and photonic metamaterials [23–25]. It was demonstrated that the efficiency of the EFISH signal could be greatly improved by utilizing the backward phase matching technique in plasmonic metamaterials [25] and quasi-phase matching in silicon waveguides [22]. EFISH was recently realized by electrostatic doping in two-dimensional (2D) materials. Specifically, electrically tunable SHG was theoretically studied using plasmonic resonances in doped graphene nanoislands [26] and experimentally realized based on strong exciton charging effects in monolayers of WSe2 [27]. Electric-field-controlled SHG in 2D materials also has many limitations. For example, the SHG predicted in graphene nanoislands strongly relies on heavy doping of charge carriers and thus imposes critical requirements on the fabrication of graphene monolayers. In addition, the electric-field-enabled modulation depth of SHG from a WSe2 transistor was less than 3% V−1, which limits its potential applications as a nonlinear optical modulator.
Here, we demonstrate band-edge-enhanced EFISH from a subwavelength-thick organic conjugated polymer PFO (poly(9,9-di-n-dodecylfluorenyl-2,7-diyl)) film sandwiched between two conducting layers—indium–tin–oxide (ITO) and aluminum—which serve as electrodes. The PFO polymer is a p-type π-conjugated polymer, which is usually used as a blue-emitting material with a bandgap energy of ~2.95 eV. It was reported that the PFO thin film has a high third-order susceptibility χ(3) for the FW at near infrared wavelengths [28]. However, less attention has been paid to its nonlinear properties when the THG or SHG frequencies are close to the bandgap energy. In this work, we observe a large EFISH enhancement in PFO at a very low bias voltage when the energy of the FW is close to half of the peak absorption energy (3.18 eV) of PFO. The observed prominent EFISH effect in subwavelength-thick polymer devices, with a modulation depth of 422% V−1, may open new avenues for designing novel electro-optic modulators.
The EFISH device consists of a 100-nm-thick PFO thin film sandwiched between aluminum (Al, 100 nm) and ITO (50 nm) electrodes, as shown in Fig. 1. For PFO thicknesses less than 100 nm, there exists a high risk of a short circuit due to the roughness of the ITO layer. Thus, in this work, the PFO thickness deviates from the optimized value of 50 nm (see Supplementary materials for more discussions). Under normal incidence of the FW, SHG from amorphous PFO film is forbidden, as the second order susceptibility χ(2) is negligible in the homogeneous film. However, SHG can be generated in homogenous PFO film under the condition of broken inversion symmetry from an oblique incidence of the FW. The reflected SHG intensity from the ITO/PFO/Al sandwiched photonic device can be described by:
$$I_{2\omega} \propto \left[\left(\chi^{(3)}E_{DC} + \chi^{(2)}\right)E_{\omega}^{2}\right]^{2} = \left[\left(\chi^{(3)}E_{DC}\right)^{2} + 2\chi^{(2)}\chi^{(3)}E_{DC}\right]E_{\omega}^{4} + \left(\chi^{(2)}\right)^{2}E_{\omega}^{4}$$
where Eω is the electric field of the FW; EDC is the external electric field applied to the PFO thin film through the ITO and aluminum electrodes; and χ(2) and χ(3) are the effective second- and third-order susceptibilities of the PFO layer, respectively. χ(2) arises from the generation of SHG at the Al–PFO or the ITO–PFO interfaces. In the second line of Eq. (1), the first and third terms describe the EFISH and the common SHG process, respectively, while the second term represents the interference between the two, which relies on both the χ(2) and χ(3) coefficients of the system. This is in contrast to most of the previous works on active materials where χ(3) plays a dominant role17,18,19,20,21,22,23,24,25. Due to the coupling between the electric field of the FW and the DC field in the second term of Eq. (1), the intensity of EFISH can also be tuned by switching the sign of the applied DC electric field.
Fig. 1: Schematic of electric-field-induced SHG (EFISH) from the ITO/PFO/aluminum device.
For a fundamental wave (FW) with TM-polarization, obliquely incident onto the EFISH device, the intensity of the SHG waves can be modulated by applying a DC electric field. Under oblique incidence of the FW with TM-polarization (electric field parallel to x–z plane), EFISH comes from the coupling between the electric field of the incident light and that of an applied voltage using the third-order susceptibility of PFO. The electric field of the TE (electric field of light along y-axis)-polarized FW is perpendicular to that of the applied voltage, so EFISH is also forbidden
To experimentally explore the modulation of the EFISH signal from the PFO thin film, we fabricated a 100-nm-thick homogeneous PFO thin film on top of ITO-coated glass using the spin-coating method, followed by thermal evaporation of a 100-nm-thick aluminum electrode. The triple-layer EFISH device is encapsulated in a nitrogen environment to avoid degradation of the PFO thin film. Both the electrical and linear optical properties of the EFISH device are characterized. The electrical properties are shown in Fig. 2a, where the current density (I) of the device is plotted as a function of the applied voltage (V). The positive/negative voltages can be applied by connecting the ITO electrode to the anode/cathode of a DC power supply. The current–voltage (I–V) curve is slightly asymmetric when the applied voltage is switched from positive to negative values, and vice versa, which is attributed to the common diode effect of the ITO/PFO/aluminum configuration. The reflection spectrum of the EFISH device is measured at an incident angle of 45° using a transverse magnetic (TM)-polarized FW. As shown in Fig. 2b, due to the strong absorption of the PFO thin film at wavelengths below 400 nm, the reflection efficiency of the device drops to less than 10%. The spectrum is featureless with a reflectivity above 80% for wavelengths between 400 and 900 nm, owing to the high reflectivity of the 100 nm aluminum film. The calculated reflectances of the triple-layer device and the 100 nm aluminum film are obtained using the transfer matrix method [29] with the measured refractive indices of PFO, ITO, and aluminum. The calculated results agree well with the measured results.
Fig. 2: Electrical and optical properties of the ITO/PFO/aluminum device.
a Current density as a function of applied voltage. ITO acts as either an anode or a cathode for positive and negative values of the applied voltage. b Measured reflection spectra of the EFISH device. The reflection efficiency drops quickly when the wavelength of the incident light is less than 430 nm due to the absorption of PFO. The dashed-dotted line at a wavelength of 420 nm corresponds to the peak position of the SHG in the nonlinear optical measurements. The red dashed line and the blue dotted line are the calculated reflectances of the EFISH device and a single aluminum layer with a thickness of 100 nm
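For readers who want to reproduce the reflectance calculation mentioned above, the characteristic-matrix (transfer matrix) method can be sketched as follows. This is a simplified normal-incidence version, whereas the measurement and calculation above are for 45° TM incidence; the refractive indices below are illustrative values only (the paper uses measured optical constants), written in the n − ik convention, and the thick aluminum mirror is treated as a semi-infinite exit medium.

```python
import numpy as np

def stack_reflectance(wavelength_nm, n_inc, layers, n_exit):
    """Normal-incidence reflectance of a thin-film stack (characteristic-matrix method).

    layers: list of (complex refractive index, thickness in nm), ordered from the
    incidence side; complex indices use the n - ik convention for absorbing media.
    """
    m = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2 * np.pi * n * d / wavelength_nm
        m = m @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    b, c = m @ np.array([1.0, n_exit])
    r = (n_inc * b - c) / (n_inc * b + c)
    return abs(r) ** 2

# Illustrative indices at 600 nm: glass / ITO (50 nm) / PFO (100 nm) / thick aluminium mirror
R = stack_reflectance(600, 1.52, [(1.90 - 0.02j, 50), (1.60 - 0.001j, 100)], 1.0 - 6.0j)
print(round(R, 3))
```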
We first studied the polarization states of the SHG signal generated from the EFISH device. For a TM-polarized FW at a wavelength of 840 nm, the SHG signal with the same polarization is much stronger than that of TE polarization (Fig. 3a), with a polarization ratio up to ~158. In addition, the power-dependent SHG intensity at a wavelength of 420 nm has a slope value of 1.88 (Fig. 3b), which is close to the theoretical value of 2.0, indicating a second-order nonlinear optical process. Next, SHG from the ITO/PFO/aluminum device is characterized using a femtosecond laser with tunable wavelength output. The TM-polarized FW is obliquely incident onto the device after passing through a lens with a focal length of 150 mm. The central wavelength of 840 nm of the FW has a bandwidth of approximately 15 nm. SHG signals from the EFISH device without an applied DC voltage are first measured using an Andor spectrometer with a photomultiplier tube detector, as shown in Fig. 4a. For the FW at wavelengths from 810 to 900 nm, the wavelength-dependent intensity of the SHG, which should be proportional to the square of the modulus of the effective χ(2), is experimentally characterized. Fig. 4b shows that the SHG efficiency exhibits a sharp peak at the fundamental wavelength of 840 nm. The resonant behavior of the SHG intensity can be attributed to the enhancement of the effective χ(2) when twice the energy of the FW is close to the absorption band of PFO [28].
Fig. 3: Nonlinear optical properties of ITO/PFO/aluminum without an applied voltage.
The FW is obliquely incident onto the ITO/PFO/Aluminum system at an angle of 45°. The thicknesses of ITO/PFO/Aluminum are 50/100/100 nm, respectively. The FW has TM polarization with an electric field parallel to the incident plane (x–z). a Polarization characteristics of the SHG at a wavelength of 420 nm, and the SHG spectra with the same polarization (TM) and cross-polarization states (TE, electric field along the y-axis) compared to that of the FW. It is found that the H-polarized SHG signal is much stronger than that with TE polarization. b The SHG intensity as a function of the pumping power; the slope value of 1.88 indicates a second-order nonlinear optical process
Fig. 4: Nonlinear optical properties of ITO/PFO/aluminum with applied voltages.
a Configuration of the SHG measurement. The TM-polarized FW is obliquely incident onto the ITO/PFO/aluminum device at an angle of 45°. L1 and L2 are lenses; LP1 and LP2 are polarizers. b Characterization of the spectral response of H-polarized SHG from ITO/PFO/aluminum with and without applied voltages. c The SHG intensity as a function of the applied voltages is plotted for the SHG wavelength at 420 nm. d The SHG intensity as a function of the applied voltages is plotted for the SHG wavelength at 405 nm. In the case of positive and negative voltages, the ITO layer serves as the anode and cathode, respectively. The fitting equations based on Eq. ( 1) are y = 0.4348x2 − 1.1944x + 1.1512 for SHG at 420 nm and y = 0.1470x2 − 3.7867x + 3.0622 for SHG at 405 nm. The retrieved relative values of the effective χ(2) and χ(3) of the EFISH device calculated from the fitting equations are 1.0729 and 0.6594 for SHG at 420 nm and 0.5543 and 0.3835 for SHG at 405 nm, respectively
We next study electric-field-induced SHG from the ITO/PFO/aluminum device by applying a DC voltage (U) to the ITO and aluminum electrodes. As shown in Fig. 4a, the ITO layer can be used as either an anode (U > 0) or a cathode (U < 0). To avoid damage to the EFISH device, the spectral measurement of the EFISH signal is carried out for a DC voltage only up to U = 6 V (red dotted line with circles in Fig. 4b) for a TM-polarized FW at an incident angle of 45°. Compared to the case of U = 0 (black dotted line with triangles in Fig. 4b), one can see that the applied electric field can greatly boost the SHG efficiency. To better understand the mechanism and the efficiency of the EFISH process, we plot the electric-field-induced SHG intensity versus the applied voltage U at the fundamental wavelengths of 840 and 810 nm in Fig. 4c, d, respectively. At the resonant wavelength of 840 nm (Fig. 4c), the SHG always has the highest efficiency for the EFISH device regardless of the applied DC electric field. When U is swept from 0 to 6 V, the SHG intensity initially drops to a minimum value of I2ω = 0.489 (a.u.) at U = 1.5 V and then grows quickly to I2ω = 9.29 (a.u.) at U = 6 V. This corresponds to an SHG modulation depth of I2ω (U = 6)/[∆U·I2ω (U = 1.5)] ~ 422% V−1, with ∆U = 4.5 V in this case. This modulation depth is much higher than that in conventional EFISH devices or the electric-field-controlled SHG from WSe2 [26]. Additionally, the modulation depth of the electric-field-controlled SHG from plasmonic metamaterials [23, 24] and WSe2 [27] is less than 0.9% V−1 and 3% V−1, respectively. If a reversed voltage is applied to the device, EFISH shows very different behavior; the SHG intensity continues to increase when U is swept from 0 to −6 V, and the measured modulation depth of the SHG has a negative value of ~−335% V−1. In comparison, the electric-field-controlled modulation depth of SHG in WSe2 devices is negative for both positive and negative biases. A very similar optical response was observed from the EFISH measurement at a nonresonant wavelength of 810 nm (Fig. 4d). The measured EFISH dependence on the applied electric field can be perfectly explained by Eq. (1), and the relative values of the effective χ(2) and χ(3) of the EFISH device can be retrieved by fitting the measured EFISH curves using Eq. (1), as shown in Fig. 4c, d (see Supplementary materials for more details). The ratio between the effective χ(2) and χ(3) is found to be 1.6271 and 1.4451 V m−1 for FWs at 840 and 810 nm, respectively.
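To make the retrieval explicit, the sketch below evaluates only the numbers quoted above and in the Fig. 4 caption for SHG at 420 nm; following Eq. (1) with E_DC ∝ U, the relative effective susceptibilities follow from the square roots of the quadratic and constant fit coefficients, and the quoted modulation depth follows from the two measured intensities. The variable names are ours and the script is illustrative only.

```python
import numpy as np

# Quadratic fit I(U) = c2*U^2 + c1*U + c0 quoted in the Fig. 4 caption for SHG at 420 nm
c2, c1, c0 = 0.4348, -1.1944, 1.1512

# From Eq. (1), I(U) is proportional to (chi3*E_DC + chi2)^2 * E_w^4 with E_DC proportional to U,
# so the relative susceptibilities are the square roots of the quadratic and constant terms.
chi3_rel = np.sqrt(c2)   # ~0.659
chi2_rel = np.sqrt(c0)   # ~1.073
print(chi2_rel, chi3_rel, chi2_rel / chi3_rel)   # ratio ~1.63, as quoted for the 840 nm FW

# Modulation depth quoted in the text: I(6 V) / (dU * I(1.5 V)) with dU = 4.5 V
i_min, i_max, du = 0.489, 9.29, 4.5
print(i_max / (du * i_min))   # ~4.22, i.e. ~422% per volt
```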
We have shown that the EFISH device based on an organic conjugated polymer provides an excellent platform for electrically controlled SHG. The SHG efficiency with a frequency at the edge of the absorption band of PFO is greatly boosted due to the resonant nature of the χ(3) of the organic conjugated polymer. To the best of our knowledge, the EFISH modulation depth of ~422% V−1 is the highest value ever reported. The maximum EFISH efficiency of 1.92 × 10−5 was obtained at a wavelength of 420 nm when a 6.0 V external voltage was applied. In addition, the effective χ(2) of the EFISH device and the χ(3) of PFO were successfully retrieved, which sheds light on the mechanism of EFISH in the ITO/PFO/aluminum device. This work opens new avenues for designing electric field-controlled SHG based on organic conjugated polymers with a large modulation depth. It is expected that, through the integration of plasmonic metamaterials and metasurfaces into the current EFISH device, performance metrics such as the modulation depth and efficiency of the SHG will be further enhanced, indicating great potential in applications such as electro-optic modulators.
Sample fabrication
The blue-emission polymer PFO was used as the active material in this work. The PFO material was purchased from Sigma-Aldrich, with a molecular weight Mw ≤ 20,000. PFO was dissolved in toluene at a concentration of 16 mg mL−1. PFO films with a thickness of 100 nm were fabricated by spin-coating onto a 0.7-mm-thick ITO-coated glass substrate.
SHG measurement
SHG was measured using a femtosecond (fs) laser (repetition rate: 1 kHz, pulse duration: ~100 fs) output from an optical parametric oscillator at wavelengths from 810 to 900 nm. The power of the FW in Fig. 2, measured by a photodiode (Newport 818) connected to a lock-in amplifier, is a relative value. The average power of the FW in Fig. 4c, d is 10 mW. Laser light with a spot size of ~100 µm in diameter is obliquely incident onto the EFISH device after passing through a lens (N.A. = 0.05). The SHG signals from the EFISH device are collected by a second lens (N.A. = 0.05). After the pump laser is filtered out, the SHG is measured by a PI-Acton spectrometer with a photomultiplier tube detector.
Shen, Y. R. The Principles of Nonlinear Optics. (John Wiley & Sons, New York, 1984).
Boyd, R. W. Nonlinear Optics. 3rd edn. (Academic Press, Amsterdam, 2008).
Cazzanelli, M. et al. Second-harmonic generation in silicon waveguides strained by silicon nitride. Nat. Mater. 11, 148–154 (2012).
Kauranen, M. & Zayats, A. V. Nonlinear plasmonics. Nat. Photonics 6, 737–748 (2012).
Lapine, M., Shadrivov, I. V. & Kivshar, Y. S. Colloquium: nonlinear metamaterials. Rev. Mod. Phys. 86, 1093–1123 (2014).
Li, G. X., Zhang, S. & Zentgraf, T. Nonlinear photonic metasurfaces. Nat. Rev. Mater. 2, 17010 (2017).
Shelton, J. W. & Shen, Y. R. Phase-matched third-harmonic generation in cholesteric liquid crystals. Phys. Rev. Lett. 25, 23–26 (1970).
Fejer, M. M., Magel, G. A., Jundt, D. H. & Byer, R. L. Quasi-phase-matched second harmonic generation: tuning and tolerances. IEEE J. Quantum Electron 28, 2631–2654 (1992).
Zhu, S. N. et al. Experimental realization of second harmonic generation in a Fibonacci optical superlattice of LiTaO3. Phys. Rev. Lett. 78, 2752–2755 (1997).
Rose, A., Huang, D. & Smith, D. R. Controlling the second harmonic in a phase-matched negative-index metamaterial. Phys. Rev. Lett. 107, 063902 (2011).
Klein, M. W., Enkrich, C., Wegener, M. & Linden, S. Second-harmonic generation from magnetic metamaterials. Science 313, 502–504 (2006).
Chen, S. M. et al. Symmetry-selective third-harmonic generation from plasmonic metacrystals. Phys. Rev. Lett. 113, 033901 (2014).
Konishi, K. et al. Polarization-controlled circular second-harmonic generation from metal hole arrays with threefold rotational symmetry. Phys. Rev. Lett. 112, 135502 (2014).
Segal, N., Keren-Zur, S., Hendler, N. & Ellenbogen, T. Controlling light with metamaterial-based nonlinear photonic crystals. Nat. Photonics 9, 180–184 (2015).
O'Brien, K. et al. Predicting nonlinear properties of metamaterials from the linear response. Nat. Mater. 14, 379–383 (2015).
Li, G. X. et al. Continuous control of the nonlinearity phase for harmonic generations. Nat. Mater. 14, 607–612 (2015).
Terhune, R. W., Maker, P. D. & Savage, C. M. Optical harmonic generation in calcite. Phys. Rev. Lett. 8, 404–406 (1962).
Maker, P. D. & Terhune, R. W. Study of optical effects due to an induced polarization third order in the electric field strength. Phys. Rev. 137, 801–818 (1965).
Lee, C. H., Chang, R. K. & Bloembergen, N. Nonlinear electroreflectance in silicon and silver. Phys. Rev. Lett. 18, 167–170 (1967).
Lüpke, G. Characterization of semiconductor interfaces by second-harmonic generation. Surf. Sci. Rep. 35, 75–161 (1999).
Manaka, T. & Iwamoto, M. Optical second-harmonic generation measurement for probing organic device operation. Light Sci. Appl. 5, e16040 (2016).
Timurdogan, E., Poulton, C. V., Byrd, M. J. & Watts, M. R. Electric field-induced second-order nonlinear optical effects in silicon waveguides. Nat. Photonics 11, 200–206 (2017).
Cai, W. S., Vasudev, A. P. & Brongersma, M. L. Electrically controlled nonlinear generation of light with plasmonics. Science 333, 1720–1723 (2011).
Kang, L. et al. Electrifying photonic metamaterials for tunable nonlinear optics. Nat. Commun. 5, 4680 (2014).
Lan, S. F. et al. Backward phase-matching for nonlinear optical generation in negative-index materials. Nat. Mater. 14, 807–811 (2015).
Cox, J. D. & García de Abajo, F. J. Electrically tunable nonlinear plasmonics in graphene nanoislands. Nat. Commun. 5, 5725 (2014).
Seyler, K. L. et al. Electrical control of second-harmonic generation in a WSe2 monolayer transistor. Nat. Nanotechnol. 10, 407–411 (2015).
Yap, B. K., Xia, R. D., Campoy-Quiles, M., Stavrinou, P. N. & Bradley, D. D. C. Simultaneous optimization of charge-carrier mobility and optical gain in semiconducting polymer films. Nat. Mater. 7, 376–380 (2008).
Born, M. & Wolf, E. Principles of Optics. 7th edn. (Cambridge University Press, Cambridge, 1999).
This work was financially supported by the National Natural Science Foundation of China (11774145), the Guangdong Provincial Innovation and Entrepreneurship Project (2017ZT07C071), the Applied Science and Technology Project of Guangdong Science and Technology Department (2017B090918001), the Natural Science Foundation of Shenzhen Innovation Committee (JCYJ20170412153113701), the Marie Curie Individual Fellowship (H2020-MSCA-IF-703803-NonlinearMeta), and the European Research Council Consolidator Grant (TOPOLOGICAL).
School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, UK
Shumei Chen & Shuang Zhang
Department of Materials Science and Engineering, Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China
King Fai Li & Guixin Li
Department of Physics, Hong Kong Baptist University, Kowloon Tong, Hong Kong, China
Kok Wai Cheah
Correspondence to Guixin Li or Shuang Zhang.
The authors declare that they have no conflict of interest.
Supplementary Materials for EFISH
Chen, S., Li, K.F., Li, G. et al. Gigantic electric-field-induced second harmonic generation from an organic conjugated polymer enhanced by a band-edge effect. Light Sci Appl 8, 17 (2019). https://doi.org/10.1038/s41377-019-0128-z
Editorial Summary
Device offers significant laser beam control
The properties of a laser beam can be strongly controlled by varying the voltage applied to a device made of an organic polymer sandwiched between electrodes. Nonlinear materials are often used to change laser beam properties, such as its frequency, but controlling this change has been challenging with conventional nonlinear materials, like optical crystals. Shuang Zhang of the University of Birmingham, UK, Guixin Li from Southern University of Science and Technology, China, and colleagues fabricated a device made of a 100-nanometre (nm)-thick film of the organic polymer polydioctylfluorene sandwiched between a 50-nm-thick layer of indium tin oxide and a 100-nm-thick layer of aluminum. Varying the voltage applied to the outer electrodes controls the intensity of the outgoing beam at the second-harmonic frequency when the incoming beam falls on the device at a 45° angle. The study's findings could have implications for the future development of electro-optic modulators.
February 2022, 15(2): 409-425. doi: 10.3934/dcdss.2021082
On the random wave equation within the mean square context
Julia Calatayud 1, Juan Carlos Cortés 2,* and Marc Jornet 1
Departament de Matemàtiques, Universitat Jaume I, 12071 Castellón, Spain
Instituto Universitario de Matemática Multidisciplinar, Universitat Politècnica de València, Camino de Vera s/n, 46022, Valencia, Spain
* Corresponding author: Juan Carlos Cortés
Received February 2021 Revised May 2021 Published February 2022 Early access July 2021
This paper deals with the random wave equation on a bounded domain with Dirichlet boundary conditions. Randomness arises from the wave velocity, which is a positive random variable, and from the two initial conditions, which are regular stochastic processes. The aleatory nature of the inputs is mainly justified by data errors when modeling the motion of a vibrating string. Uncertainty is propagated from these inputs to the output, so that the solution becomes a smooth random field. We focus on the mean square contextualization of the problem. Existence and uniqueness of the exact series solution, based upon the classical method of separation of variables, are rigorously established. Exact series for the mean and the variance of the solution process are obtained, which converge at polynomial rate. Some numerical examples illustrate these facts.
Keywords: Random wave partial differential equation, mean square calculus, exact series solution, separation of variables, mean and variance.
Mathematics Subject Classification: Primary: 35C05, 35C10, 35R60.
Citation: Julia Calatayud, Juan Carlos Cortés, Marc Jornet. On the random wave equation within the mean square context. Discrete & Continuous Dynamical Systems - S, 2022, 15 (2) : 409-425. doi: 10.3934/dcdss.2021082
Figure 1. Expectation and variance of the solution $ u(x,t) $ to (1), for different space-time points and orders of truncation $ N $ of the series (2). This figure corresponds to Example 1.
Figure 2. Rate of convergence of $ \mathbb{E}[u_N(0.5,2)] $ and $ \mathbb{V}[u_N(0.5,2)] $ with $ N $, where $ u_N(x,t) $ is the truncation (11) of $ u(x,t) $ (2). This figure corresponds to Example 1.
November 2013, 33(11&12): 5015-5047. doi: 10.3934/dcds.2013.33.5015
Local Hadamard well--posedness and blow--up for reaction--diffusion equations with non--linear dynamical boundary conditions
Alessio Fiscella 1 and Enzo Vitillaro 2
Dipartimento di Matematica "F. Enriques", Università degli Studi di Milano, Via C. Saldini 50, 20133 Milano, Italy
Dipartimento di Matematica ed Informatica, Università di Perugia, Via Vanvitelli, 1, 06123 Perugia, Italy
Received September 2011 Published May 2013
The paper deals with local well--posedness, global existence and blow--up results for reaction--diffusion equations coupled with nonlinear dynamical boundary conditions. The typical problem studied is
\[
\begin{cases}
u_{t}-\Delta u=|u|^{p-2}u & \text{in } (0,\infty)\times\Omega,\\
u=0 & \text{on } [0,\infty)\times\Gamma_{0},\\
\frac{\partial u}{\partial\nu} = -|u_{t}|^{m-2}u_{t} & \text{on } [0,\infty)\times\Gamma_{1},\\
u(0,x)=u_{0}(x) & \text{in } \Omega,
\end{cases}
\]
where $\Omega$ is a bounded open regular domain of $\mathbb{R}^{n}$ ($n\geq 1$), $\partial\Omega=\Gamma_0\cup\Gamma_1$, $2\le p\le 1+2^*/2$, $m>1$ and $u_0\in H^1(\Omega)$, ${u_0}_{|\Gamma_0}=0$. After showing local well--posedness in the Hadamard sense we give global existence and blow--up results when $\Gamma_0$ has positive surface measure. Moreover we discuss the generalization of the above mentioned results to more general problems where the terms $|u|^{p-2}u$ and $|u_{t}|^{m-2}u_{t}$ are respectively replaced by $f\left(x,u\right)$ and $Q(t,x,u_t)$ under suitable assumptions on them.
Keywords: Heat equation, reaction diffusion equations, blow--up, Hadamard well--posedness., dynamical boundary conditions.
Mathematics Subject Classification: Primary: 35K20, 35B44; Secondary: 35K57, 35Q79, 35K5.
Citation: Alessio Fiscella, Enzo Vitillaro. Local Hadamard well--posedness and blow--up for reaction--diffusion equations with non--linear dynamical boundary conditions. Discrete & Continuous Dynamical Systems - A, 2013, 33 (11&12) : 5015-5047. doi: 10.3934/dcds.2013.33.5015
Agricultural and Food Economics
Improving diffusion in agriculture: an agent-based model to find the predictors for efficient early adopters
Angela Barbuto1,
Antonio Lopolito1 &
Fabio Gaetano Santeramo ORCID: orcid.org/0000-0002-9450-46181
Agricultural and Food Economics volume 7, Article number: 1 (2019)
Given that the adoption rate of a new product is influenced by the network characteristics of the early adopters, the aim of this paper is to find the network features of the early adopters associated with high adoption rates of a specific new practice: the use of biodegradable mulching films containing soluble bio-based substances derived from municipal solid wastes. We simulated the diffusion process by means of an agent-based model calibrated on real-world data. Closeness and clusterization emerged as the most important network characteristics for early adopters to be successful. The results achieved represent the basis for designing a tailored diffusion strategy to overcome the psychological and socio-economic barriers to this kind of innovation within an environmental and sustainability-oriented transition policy in a rural context.
The bio-waste valorization is becoming an increasingly urgent priority for governments and environmental and social organizations (Morone et al. 2015). In this respect, the attention is mainly focused on the organic fraction of municipal and agricultural waste, which can be used as raw material for biodegradable products (e.g., detergents, fuels, textile auxiliaries, plastics, fertilizers) (Motoneri et al. 2014, Scaringelli et al. 2016). Among these, a promising sustainable innovation is represented by biodegradable mulching films derived from soluble bio-based substances (SBOs) (Motoneri et al. 2014). This technology is at a development stage (technology readiness level, TRL, 4/5), and its future adoption by farmers can improve both the sustainability of agricultural practices and the waste management process. This introduces the issue of innovation diffusion, given that the achievement of the abovementioned benefits strictly depends on the spread of the novelty among a critical mass of users.
Here, we stress that the operation and functioning of social networks (i.e., the set and pattern of support, friendship, and communication relations connecting people) lie at the core of the diffusion process (Valente 1995). In fact, in its most straightforward definition, the diffusion of innovation can be intended as "a special type of communication, in that the messages are concerned with new ideas" (Rogers 2003: 5). Communication implies the mobilization of social ties among people to create and share information and reach a reciprocal understanding (Rogers 2003). This process is fundamental in overcoming innovation resistance, which is the agents' normal response to uncertainty and the costly readjustment activity imposed by the innovation (Ram 1987). To put it differently, the learning-from-others mechanism is likely to enhance the diffusion of innovation: this is true not only for innovations but also for farming strategies (Santeramo 2018).
More specifically, social networks can influence diffusion by providing (1) the medium for the circulation of information, which is crucial to make the agents aware of the novelty and its real costs and benefits; (2) a certain level of redundancy of the information, deriving from social reinforcement, which is useful in overcoming uncertainty; (3) a certain level of homophily among actors (defined as the degree of overlap of some agent attributes, such as education, socioeconomic status, and preferences), which favors common meanings, belief sharing, and mutual understanding (Rogers 2003).
As a consequence, the ability of agents to affect the adoption decisions of their neighbors (i.e., the others with whom they are connected) is closely related to their role and location in the network. Social network analysis (SNA) has developed various centrality measures to capture the network characteristics of the agents. These centrality measures can be considered by market operators and policymakers as key drivers for building effective promotional strategies. In this regard, the literature demonstrates that diffusion rates can vary greatly depending on the injection points (IPs) (i.e., the agents where the innovation is first injected, the "early adopters") used, according to their network centrality (Banerjee et al. 2013). This is particularly true in the case of diffusion of new sustainable practices in agriculture (Tey and Brindal 2012).
From this perspective, this work represents a preliminary analysis of the network features which best predict effective IPs (i.e., those able to reach the highest adoption rates) in the case of a specific sustainable novelty, SBO mulching films, in a rural context. The aim is to find those individual centrality measures that can be used as a rational criterion to select the best spreaders. The focus is on a community of farmers located in the north of Apulia (Italy) specialized in the production of vegetables. By means of an agent-based model (ABM), we simulated the effects of different IPs on the diffusion rates in an artificial community representing the one studied. In order to evaluate the robustness of our findings, we conducted a sensitivity analysis accounting for the possibility that the novelty disappoints a fraction of consumers. Moreover, we applied the SNA to obtain measures of the IPs' network characteristics. Finally, we adopted a multiple linear regression model to estimate the effect of the various centrality measures of the IPs on the final adoption rates.
The use of social network potentialities to design successful diffusion campaigns is a topic largely investigated by the literature on ABMs of innovation diffusion (Goldenberg et al. 2001, Moldovan and Goldenberg 2004, Alkemade and Castaldi 2005, Goldenberg et al. 2007, Delre et al. 2007a). In this kind of approach, unlike other kinds of models such as linear programming, dynamic models, and differential equations, the most elementary unit of modelization is the single agent rather than the social system as a whole. This allows researchers to explicitly model the agents' heterogeneity, their social interactions, and their decision-making processes. In fact, the distinguishing feature of the ABM is that it represents the macro-level dynamics that take place in a social system as the result of the behavior of every single agent belonging to the social system and of the interactions with its neighbors. For these reasons, ABMs are particularly suitable for showing the different performances of diffusion strategies based on the choice of the best connected actors (Valente and Davis 1999, Delre et al. 2007a, Goldenberg et al. 2009, Delre et al. 2010, Bohlmann et al. 2010, van Eck et al. 2011). Some of these models (Alkemade and Castaldi 2005, Goldenberg et al. 2000, Delre et al. 2007b; Bohlmann et al. 2010) explicitly include an adoption threshold to represent the different propensities of potential consumers of a new product to adopt (Kiesling 2012). Another type of model is devoted to simulating informational cascades (Banerjee 1992, Watts 2002). These models depict the graduation of the diffusion dynamic, where different classes of consumers gradually adopt the novelty in a progressive fashion, distinguishing early adopters, followers, and laggards. When the novelty spreads to the largest part of the network, it occurs as a cascade.
The paper is structured as follows: the "The diffusion model" section describes the model employed, its basic assumptions, its construction, and its internal dynamics; the "Simulation and results" section reports the main results; and the "Concluding remarks" section closes with some discussion and policy implications, including suggestions for further research.
The diffusion model
For the sake of clarity, in what follows, we shall describe the model according to the "guidelines for model development" outlined in Rand and Rust (2011), aimed at setting a rigorous procedure for AB modeling. The logical basis of the model is provided by the following assumptions, which cover all the relevant elements of the model inferences; moreover, the references after each statement provide the empirical basis for the assumptions (East et al. 2016): (A1) Each agent has a specific innovation resistance, represented by an individual adoption threshold (Nisbet and Collins 1978). (A2) For the adoption to occur, the agent's preference toward the innovation must overcome its innovation threshold. (A3) To form a preference, the agent must be aware of the innovation's existence and its advantages (Chen 1996; Daberkow and McBride 2003). (A4) The agent grasps relevant information to become aware of the preferences of its neighbors (the agents connected with it) (Molina-Morales and Martinez-Fernandez 2010; Narayan and Pritchett 1999; Van Rijn et al. 2012). (A5) The more homophilous the neighbor is, the more information the agent will grasp from it (Centola 2010, 2011). (A6) The higher the agent's education, the higher its capacity to grasp information from its neighborhood as a whole (Gellynck et al. 2014; Tepic et al. 2012).
The scope of the model is to reproduce the information-passing dynamics and the adoption decision process among a population of farmers. In its most straightforward outline, the model depicts a network of agents connected by bidirectional links. As a consequence, the environment of each agent is represented by its neighbors, that is, the agents with whom it is connected via in/out-links through which it receives/sends information and influence. As highlighted by East et al. (2016), a common shortcoming in network diffusion models derives from the use of theoretical network structures (e.g., random networks, regular lattices) typically exhibiting the same centrality for all the actors. To overcome this limitation and to reach a better representation of relational complexity, the network in this case is modeled on the basis of a real-world network, as explained below. The agents are divided into two classes, ordinaries and IPs; in each model run, we have one IP, the rest of the agents being ordinaries. They are characterized by different behaviors: the ordinaries represent farmers that have to decide whether or not to adopt, while the IP represents a farmer which has already adopted the novelty and uses it over the whole model time span since the initialization phase. Concerning the set of properties of the model's elements, each link has a property representing the level of homophily between the agents connected [h], and each agent has a preference toward the new technology [p], an adoption threshold [θ], a level of education [e], and an innovation status [novelty?]. These properties also represent the basic inputs of the model, while the model's fundamental output is the number of ordinaries that have adopted at each time step.
Model construction
The model is structured into two phases: the initialization and the iteration. At the initialization step, n agents and m links are created. At this phase, p is set at 0 for ordinaries and 1 for IPs, and θ and e are set at specific values (see below). The innovation status is set false for ordinaries and true for the IP. At each iteration time step, each ordinary (1) reconsiders its preference after having received information from its neighbors, (2) passes information to its neighbors, and (3) decides to adopt (i.e., novelty? is set true) if p ≥ θ. This sequence is repeated until the model time span reaches the value t.
Concerning step 1, it is worth noting that the sole factor influencing the preference of an ordinary agent is the preferences of its neighbors. Specifically, at each time t, each ordinary i calculates its preference pit as the sum of its preference in the previous period pit − 1 and the average of the preferences of its j neighbors pjt weighted with the homophily degree with its neighbors hji. This sum is then corrected multiplying it by the level of education of i ei. Formally, the calculation of pit is:
$$ p_{it}=p_{it-1}+\left(\sum_{j} \frac{p_{jt-1}\,h_{ji}}{n}\right)C \quad\mathrm{with}\quad C=\frac{e_i}{\max_e}. $$
It is worth noting that the time dimension in this process acts as an accumulator of the preference level allowing to unravel the interaction between network features and preference formation.
To augment the model realism, we also developed a disappointment extension, which introduces a common issue in the innovation adoption process, namely the probability that the novelty disappoints a fraction of users. To simulate this behavior, the disappointment version includes the possibility that the innovation disappoints the adopter. When this occurs, the model provides a possible third action for the ordinary agent: to reduce its preference toward the novelty by a variable percentage (between 25 and 75%). In this version of the model, the disappointment is modeled as a random event which affects a variable fraction of the population (between 0 and 25%).
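A minimal Python sketch of one iteration step, under the rules just described, is given below. The data structures (a neighbors adjacency map, per-agent dictionaries p, theta, e, adopted, and a link-level homophily map h) and the uniform draw used for the disappointment event are our own illustrative assumptions, not the authors' NetLogo code.

import random

def iterate(ordinaries, neighbors, p, theta, e, h, adopted, max_e,
            disappoint_prob=0.0):
    """One model tick: preference update, adoption decision, optional disappointment."""
    new_p = {}
    for i in ordinaries:
        nbrs = neighbors[i]
        if nbrs:
            # homophily-weighted mean of the neighbours' current preferences
            signal = sum(p[j] * h[frozenset((i, j))] for j in nbrs) / len(nbrs)
        else:
            signal = 0.0
        C = e[i] / max_e                          # education correction factor
        new_p[i] = p[i] + signal * C              # preference update (equation above)
    for i in ordinaries:
        p[i] = new_p[i]
        if disappoint_prob and adopted[i] and random.random() < disappoint_prob:
            p[i] *= random.uniform(0.25, 0.75)    # disappointment: preference cut by 25-75%
        if not adopted[i] and p[i] >= theta[i]:
            adopted[i] = True                     # adopt once preference reaches the threshold

The IP is excluded from ordinaries, so its preference stays fixed at 1 and it acts only as a source of influence.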
Model verification
Our model was implemented and run using the NetLogo 5.2 platform (Wilensky 1999). To import the network under investigation, we included in the model a routine adapted from the "Network Import Example" authored by Uri Wilensky and available on the Modeling Commons platform. The sequence of model execution is outlined in Fig. 1. It plots the flow chart of the model dynamics, that is, the internal agents' decision logic. The code flow was analyzed, and each piece of the code was tested to verify its correct functioning. In this phase, we also employed corner cases (extreme values of the inputs) to verify that the implemented model does not show aberrant behaviors (e.g., no adoption is realized without any IP).
The model flow chart adopted to generate the data in our simulation
Model calibration
The identification of the model parameters was based on a case from South Italy, located in the Province of Foggia. It relates to a group of specialized vegetable farmers (N = 80) which already use conventional or biodegradable mulching techniques and are therefore potentially interested in the innovative use of SBO-derived films (Scaringelli et al. 2016). The sample covers 2% of the population, a reasonable size to obtain enough real-world data for the calibration of the model parameters. This case study is suitable to this end, given that it offers a cross-section of the social and professional interactions of vegetable crop farmers in this territory. Since the technology under investigation is still at a development stage, no real adoption rate exists yet. This is exactly what the ABM addresses: to simulate the effect of various IPs on the final adoption rate. Thus, we used the case study with the sole purpose of calibrating the following model parameters: (i) the network structure, which is defined by the number of agents, the number of links connecting the agents, and the distribution of links among agents; (ii) the agent parameters e and θ; and (iii) the link parameter h. We obtained the information on the network structure by means of in-depth face-to-face interviews with two experts who live and work within the context of the case study and can be considered direct observers. These interviews were directed at recognizing the social and professional relations connecting the web by means of a participatory social network observation. To this end, the experts were asked to trace the who-knows-who relations and to identify the affiliations to local cooperatives. To obtain the other information needed to calibrate the remainder of the model parameters, we performed a questionnaire survey on the actors forming the network, asking for age, years of education, cropping patterns, willingness to adopt SBO mulching films, farm size, and number of employees. We also calculated the distance between the farmers based on their location. Some descriptive statistics are reported in Table 1.
Table 1 The descriptive statistics of the observations of our case study
The relational data were used to build the network of firms that forms the interaction arrangement of the agents. This network consists of a single component, not fragmented, with a medium level of density (21%). Within this network, two randomly chosen nodes have an average distance of 2.42, and the maximum distance observed is 6. On the whole, the network is very clustered (clustering coefficient 0.78). The demographic data were employed in the parameterization of the agents' properties. Specifically, we used the following parameters: n = 80, the number of respondent farmers; in each simulation, we split this number into 79 ordinaries and 1 IP. m = 1296, the number of links observed in the real farmers' network; moreover, in the construction of the simulated network, the links have not been evenly distributed among the agents: instead, the agents in the model reproduce exactly the relational profile of each farmer interviewed, with the same number of relations with the same neighbors. h varies in the range [0, 1]; rather than using a single dimension, we set this property on the basis of four socioeconomic attributes (farm size, number of employees, age, and distances between farmers), as in Blau et al. 1984, McPherson et al. 2001, and Centola et al. 2007. p is 0 for ordinaries and 1 for IPs; novelty? is false for ordinaries and true for IPs. θ reflects the willingness to adopt SBO mulching films declared by respondents and is set on a six-degree scale ranging between −1 (completely adverse to adoption) and 1 (completely favorable to adoption). e is various and reflects the years of education of each farmer. For the estimation of t (i.e., the model time span), we matched a 5-year time horizon (representing a rational span to allow the innovation to become mature) with 45 model periods, which is the time needed by most of the IPs to reach a stable adoption rate. As a consequence, in our model, a time step is equal to 40 days.
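The exact rule used to combine the four socioeconomic attributes into the link homophily h is not given above; a plausible sketch (our assumption, in the spirit of the cited homophily literature) is to rescale each attribute to [0, 1] and take one minus the mean dissimilarity between the two farmers:

def rescale(values):
    """Min-max rescale a sequence of attribute values to [0, 1] (zeros if constant)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def homophily(i, j, farm_size, employees, age, dist):
    """Link homophily in [0, 1]: 1 means identical socioeconomic profiles and co-located farms.
    farm_size, employees, age: per-farmer lists already passed through rescale();
    dist: pairwise geographic distance matrix rescaled to [0, 1]."""
    dissimilarities = [abs(farm_size[i] - farm_size[j]),
                       abs(employees[i] - employees[j]),
                       abs(age[i] - age[j]),
                       dist[i][j]]        # distance is already a pairwise dissimilarity
    return 1.0 - sum(dissimilarities) / len(dissimilarities)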
Simulation and results
The procedure adopted to find those centrality measures that can be used as a rational criterion to select effective IPs can be broken down into three main phases: (1) calculation of the centrality measures for each agent in the network, (2) simulation of the IP's effect in terms of final adoption rate using the ABM, and (3) estimation of the effect of the centrality measures on the simulated adoption rates employing multiple linear regression.
By means of the SNA, we calculated six network measures for each agent (Table 2).
Table 2 The network measures employed to characterize the IPs
Table 3 contains the descriptive analysis of the abovementioned network measures. All the measures used were standardized to the 0–1 range to facilitate the interpretation of the results. The table shows that each node (1) is connected, on average, to 21% of the rest of the network, (2) intercepts 2% of the shortest paths between the other nodes, (3) is rather close to the others, (4) has a medium level of ARD, (5) is characterized by a high local clustering coefficient, and (6) has a relatively low eigenvector centrality.
Table 3 The table synthesizes the descriptive statistics of the SNA network variables
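As an illustration of how phase 1 can be carried out, the following Python sketch computes node-level analogues of the six measures in Table 2 with the networkx library and rescales them to the 0-1 range. It is only a rough sketch under our own assumptions: the edge-list file name is a placeholder for the observed farmer network, harmonic centrality divided by (n - 1) is used as a stand-in for the average reciprocal distance, and min-max rescaling is one possible standardization; none of this is the authors' actual implementation.

import networkx as nx

# Hypothetical edge list of the 80-farmer network (who-knows-who and cooperative ties)
G = nx.read_edgelist("farmer_network.edgelist")  # assumed input file
n = G.number_of_nodes()

measures = {
    "degree": nx.degree_centrality(G),            # share of other nodes directly tied to i
    "betweenness": nx.betweenness_centrality(G),  # share of shortest paths passing through i
    "closeness": nx.closeness_centrality(G),      # inverse of the mean distance to the others
    "ard": {v: c / (n - 1) for v, c in nx.harmonic_centrality(G).items()},  # average reciprocal distance
    "lcc": nx.clustering(G),                      # local clustering coefficient
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
}

def rescale(scores):
    # Min-max rescaling of a dict of node scores to the 0-1 range
    lo, hi = min(scores.values()), max(scores.values())
    return {v: (x - lo) / (hi - lo) if hi > lo else 0.0 for v, x in scores.items()}

standardized = {name: rescale(scores) for name, scores in measures.items()}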
We used the NetLogo platform to code and run the ABM. Figure 2 depicts the interface of the implemented model. In the simulation, each of the 80 agents is used as the IP in turn; moreover, the robustness of the results is checked by simulating the diffusion process repeatedly with each IP (batches of 20 runs) and taking the means of the simulated data.
The NetLogo interface running the model described. The circles represent the agents (red circles are adopting agents)
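The batch design described above can be summarized by the following Python-style sketch. The diffusion rule used here is a deliberately crude stand-in (a random network with the observed density and a simple contagion probability), because the actual dynamics, with the p updates weighted by homophily h and education e, are implemented in NetLogo; only the structure of the experiment (each agent used as the IP in turn, 20 runs per IP, means of the outcomes) mirrors the text.

import random
import statistics
import networkx as nx

T_FINAL, N_RUNS, N_AGENTS = 45, 20, 80

# Toy network with the observed density (21%); the real model uses the empirical network
G = nx.erdos_renyi_graph(N_AGENTS, 0.21, seed=1)

def run_diffusion(ip, t_final, beta=0.05):
    # Toy contagion: at each step, a neighbor of an adopter adopts with probability beta
    adopters = {ip}
    for _ in range(t_final):
        adopters |= {v for u in adopters for v in G[u] if random.random() < beta}
    return len(adopters)

adoption_rate = {}
for ip in G.nodes:
    counts = [run_diffusion(ip, T_FINAL) for _ in range(N_RUNS)]
    adoption_rate[ip] = statistics.mean(counts) / N_AGENTS  # mean adoption rate over the batch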
Since the main output of the simulation is the number of adopters at the final time, we simulated the number and rate of adopters for each IP at t = 45 (Table 4) for both versions of the model (basic and disappointment).
Table 4 The table shows the number and rate of adopters at the final step of our simulation
We also performed a robustness check of the results, simulating the number of adopters and calculating the adoption rates also at t = 68 and t = 90. The variation is negligible, which means that at t = 45 the process has essentially run its course, with only a small part of the dynamics still at work. We used the adoption rates at t = 68 and t = 90 in the estimation of the centrality effects (see the "Phase 3" section below) with no change in the sign and significance of the coefficients and with almost the same values. We also verified that t = 45 is the best end point for the model simulation, given that the adoption rate at this time step displays a little more variability than at t = 68 and t = 90, magnifying the differences in the variable estimates.
The mean number of adopters is around 19 for the base model and 11 for the disappointment version. For both versions, the level of dispersion is very low; that is, apart from small variations, the IPs perform in much the same manner. The maximum rate of adoption is 25% for the basic model and 17% for the disappointment model.
At least for this case study, these results indicate that the individual characteristics of the IPs are not primarily responsible for the diffusion process. Instead, the density and clustering of the network account for most of the diffusion work.
In this phase, we estimated the effect of the centrality measures calculated in phase 1 on the adoption rates (AR) simulated in phase 2, by means of a simple linear regression model, specified as follows:
$$ \mathrm{AR}=\alpha +{\beta}_1\mathrm{Centrality}+{\beta}_2\mathrm{ARD}+{\beta}_3\mathrm{Eigenvector}+{\beta}_4\mathrm{Betweenness}+{\beta}_5\mathrm{LCC}+{\beta}_6\mathrm{Closeness}+\varepsilon $$
where AR is the adoption rate in the base and disappointment models (cf. Table 4), and the regressors are those specified in Table 3, with ARD standing for average reciprocal distance and LCC for local clustering coefficient.
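For readers who wish to reproduce this step, the estimation amounts to an ordinary least squares fit of the simulated adoption rates on the six standardized measures. The sketch below uses the Python statsmodels package; the file name and column names are placeholders for the phase 1 and phase 2 outputs, not the authors' actual files.

import pandas as pd
import statsmodels.api as sm

# Hypothetical data set: one row per injection point, with the simulated adoption
# rate (phase 2) and the standardized centrality measures (phase 1)
df = pd.read_csv("ip_results.csv")

X = sm.add_constant(df[["degree", "ard", "eigenvector", "betweenness", "lcc", "closeness"]])
ols = sm.OLS(df["adoption_rate"], X).fit()
print(ols.summary())  # the slope estimates correspond to beta_1 ... beta_6 above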
Table 5 reports the results of the regression. The first observation is that the positive influence of a dense network on innovation diffusion is confirmed by the high intercept in the base model. The analysis of this model also highlights degree and closeness centralities as the best indicators of effective IPs. Both measures have a positive effect on the adoption rate and are highly significant. Moreover, their impact is remarkable: an increase of 1% in degree and closeness centrality produces an increase of 30% and 47%, respectively, in the adoption rate. On the contrary, betweenness and eigenvector centralities exhibit a negative effect. This is a surprising result, given that betweenness measures how many times an agent acts as a bridge and is considered an important indicator of the agent's brokerage role, while eigenvector centrality measures how central the neighbors of the agent are. High levels of these centralities are supposed to translate into great influence on the network, with a potentially crucial role in passing information and in innovation spread dynamics (Borgatti et al. 2013; Hanneman and Riddle 2005). Finally, the average reciprocal distance and, in particular, the clustering coefficient, although they are important network measures capable of providing social reinforcement in the diffusion of new behaviors (Centola 2011), turned out not to be statistically significant in this case.
Table 5 The table synthesizes the results of the econometric analysis
These partly counterintuitive results are probably due to the specific structure of the real network analyzed. Most of the literature, indeed, analyzed the effectiveness of centrality measures on completely theoretical networks (i.e., regular lattice, random, small world), but the morphology of the network is clearly responsible for the whole diffusion dynamic and for the actual power of individual positions. In the network under investigation, the agents with the highest betweenness act as bridges between two major components of the network. These two components differ in size and structure, the greater (60 nodes) being 3.75 times the size of the smaller (16 nodes). Moreover, the former has a rather random, highly centralized structure with many long ties, while the latter is a very clustered network with a highly regular structure. In this context, most of the diffusion dynamics happens in the greater component, making the role of central and close IPs in its core relatively more important than the role of brokers toward the minor component. The supremacy of the network structure over the diffusion dynamics is also confirmed by the results from the disappointment model: when harsher conditions for innovation adoption emerge with the possibility of disappointment, the positive influence of IP centrality on adoption rates ceases, while the negative effect of betweenness and eigenvector centrality remains, though reduced.
Since the model used for the estimation is a multiple regression, the findings related to each single centrality measure should not be interpreted as separate and self-contained but as parts of a comprehensive frame leading to a multifaceted yet clear profile of the ideal spreader: central and close to the rest of the network, but not "between" most of the actors, with no or limited bridges toward other very central actors. In other words, in this specific structure, the diffusion of innovation is accelerated when the spreader can exert direct influence through immediate links on a large part of the network and is hindered when this influence is mediated by others.
This paper aimed at performing a preliminary analysis of the network features characterizing IPs capable of reaching high adoption rates in the case of the adoption of SBO mulching films in a rural context. To this end, we combined techniques from SNA, ABM, and linear regression to calculate the centrality measures of the spreaders, simulate the adoption rates of the innovation, and estimate the effect of centralities on the adoption dynamics. These results should be viewed as a first step in identifying the network measures featuring good IPs. We found the most straightforward centrality measures (i.e., degree centrality and closeness) to be the most relevant in spreading diffusion. This is not a trivial result, given that more "insightful" measures (i.e., betweenness and eigenvector) turned out to exert an opposite effect on the adoption rates. This contributes to addressing the initial problem of finding those individual centrality measures suitable to identify operative criteria to select the best spreaders. Rather than a single specific criterion, we identified an actual profile of the best spreader.
We found that, for the diffusion of sustainable innovation in agriculture, and at least in contexts and relational structures similar to the one studied, what matters most is how connected the IP is (how high its immediate influence is) and how relatively close it is to the rest of the network. Another major point is the emerging supremacy of the network structure over the diffusion dynamics: it makes the difference in the effectiveness of IP positions, creating positive or negative chances for innovation diffusion. This is very important for policy and marketing decisions, especially in rural areas, which are characterized by a highly specific social structure that should be carefully considered in designing a successful spreading campaign. The implications of our paper are relevant in terms of rural planning and development. To the extent that innovations are potential drivers of competitiveness, policy interventions should be planned in order to achieve not only efficacy but also efficiency, exploiting the leverage effect of social capital (Nardone et al. 2010): targeting the best early adopters, capable of enhancing the learning-from-others mechanisms (Santeramo 2018), would help in lowering the costs of policy interventions and increasing the impacts of rural development measures.
Finally, the fact that no agent reached more than a 25% adoption rate highlights the need to identify the optimal number of injection points (rather than a single injection point) to maximize the potential diffusion of the technology, also considering that it is costly for advisors to spread the technology among end users. The kind of model used in this paper might help in the design of balanced groups of IPs. A caveat of this study is represented by the limited case study with specific features (density and clustering). The effects of individual characteristics on the final rate of adoption should be interpreted on this basis. A more comprehensive analysis should include the investigation of different network structures (e.g., high vs. low density, regular vs. randomized structure, high vs. low average degree). Another area of analysis is the measurement of the impact of the number of exposures per agent to word of mouth. The expected result of such a deeper investigation is the achievement of valuable hints for marketing and policy-making actors in light of bio-waste valorization.
TRL is the acronym of "technology readiness levels", a system developed by the National Aeronautics and Space Administration (NASA) to assess the maturity level of a specific technology; see https://www.nasa.gov/directorates/heo/scan/engineering/technology/txt_accordion1.html
Specifically, each ordinary recalculates its p as the sum of its p in the previous period and the average of the p of its neighbors weighted by the homophily degree (h). This sum is then corrected by multiplying it by the level of education (e).
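A minimal Python sketch of this update rule, with variable names and data structures of our own choosing (p, h and e stored as dictionaries keyed by agent or by agent pair), might look as follows.

def update_p(agent, neighbors, p, h, e):
    # New propensity of an ordinary agent: previous p plus the homophily-weighted
    # average of the neighbors' p, the sum then being multiplied by the education level e
    weighted_mean = sum(h[(agent, nb)] * p[nb] for nb in neighbors) / len(neighbors)
    return (p[agent] + weighted_mean) * e[agent]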
This kind of action is the only one that applies to IPs.
The Network Import Example is downloadable at http://modelingcommons.org/browse/one_model/2214#model_tabs_browse_procedures
This method is based on the involvement of actors directly implicated in the network investigated, by means of workshops or deep interviews to co-produce a representation of that network (Edwards et al. 2010).
Alkemade F, Castaldi C (2005) Strategies for the diffusion of innovations on social networks. Comput Econ 25(1–2):3–23
Banerjee A, Chandrasekhar A, Duflo E, Jackson M (2013) Diffusion of microfinance. Science 341:363–310
Blau PM, Beeker C, Fitzpatrick KM (1984) Intersecting social affiliations and intermarriage. Social Forces 62:585–606
Bohlmann JD, Calantone RJ, Zhao M (2010) The effects of market network heterogeneity on innovation diffusion: an agent-based modeling approach. J Prod Innov Manag 27(5):741–760
Borgatti SB, Everett MG, Johnson JC (2013) Analyzing social networks. Sage Publications, London
Centola D (2011) An experimental study of Homophily in the adoption of health behavior. Science 334(6060):1269–1272
Centola D, Gonzalez-Avella JC, Eguiluz VM, San Miguel M (2007) Homophily cultural drift and the co-evolution of cultural groups. J Confl Resolut 51(6):905–929
Chen MJ (1996) Competitor analysis and interfirm rivalry: toward a theoretical integration. Acad Manag Rev 21(1):100–134
Daberkow SG, McBride WD (2003) Farm and operator characteristics affecting the awareness and adoption of precision agriculture technologies in the US. Precis Agric 4(2):163–177
Delre SA, Jager W, Bijmolt TH, Janssen MA (2007a) Targeting and timing promotional activities: an agent-based model for the takeoff of new products. J Bus Res 60:826–835
Delre SA, Jager W, Bijmolt TH, Janssen MA (2010) Will it spread or not? The effects of social influences and network topology on innovation diffusion. J Prod Innov Manag 27:267–282
Delre SA, Jager W, Janssen MA (2007b) Diffusion dynamics in small-world networks with heterogeneous consumers. Computational and Mathematical Organization Theory 13(2):185–202
East R, Uncles MD, Uncles MD, Romaniuk J, Romaniuk J, Lomax W (2016) Improving agent-based models of diffusion. Eur J Mark 50(3/4):639–646
Goldenberg J, Han S, Lehmann DR, Hong JW (2009) The role of hubs in the adoption process. J Mark 73(2):1–13
Goldenberg J, Libai B, Muller E (2001) Talk of the network: a complex systems look at the underlying process of word-of-mouth. Mark Lett 12:211–223
Goldenberg J, Libai B, Solomon S, Jan N, Stauffer D (2000) Marketing percolation. Physica A: statistical mechanics and its applications 284(1):335–347
Goldenberg J, Libai B, Moldovan S, Muller E (2007) The NPV of bad news. Int J Res Mark 24:186–200
Hanneman RA, Riddle M (2005) Introduction to Social Network Methods, Riverside CA. University of California
Kiesling E, Günther M, Stummer C, Wakolbinger LM (2012) Agent-based simulation of innovation diffusion: a review. CEJOR 20(2):183–230
McPherson M, Smith-Lovin L, Cook JM (2001) Birds of a feather: homophily in social networks. Annu Rev Sociol:415–444
Moldovan S, Goldenberg J (2004) Cellular automata modeling of resistance to innovations: effects and solutions. Technol Forecast Soc Chang 71:425–442
Montoneri E., Scaringelli M.A., Mainero D., Prosperi M. (2014) Rifiuti urbani e agricoli come fonte di combustibili e prodotti chimici: selezione di processi e prodotti meritevoli di valutazione tecnica e sostenibilità per fini commerciali in Potenzialità di sviluppo e sostenibilità socio-economica e ambientale del settore delle bioraffinerie in provincia di Foggia by Lopolito A., Prosperi M., Nardone G., Eds., Franco Angeli, Milan, Italy, 61–72
Morone P, Tartiu VE, Falcone P (2015) Assessing the potential of biowaste for bioplastics production through social network analysis. J Clean Prod 90:43–54
Narayan D, Pritchett L (1999) Cents and sociability: household income and social capital in Rural Tanzania. Econ Dev Cult Chang 47(4):871–897
Nardone G, Sisto R, Lopolito A (2010) Social capital in the LEADER initiative: a methodological approach. J Rural Stud 26(1):63–72
Newman MEJ (2003) The structure and function of complex networks. SIAM Rev 45:167–256
Nisbet RI, Collins JM (1978) Barriers and resistance to innovation. Australian Journal of Teacher Education 3(1)
Ram S (1987) A model of innovation resistance. NA-Advances in Consumer Research, Volume 14
Rand W, Rust RT (2011) Agent-based modeling in marketing: guidelines for rigor. Int J Res Mark 28(3):181–193
Rogers EM (2003) Diffusion of innovations. Free Press, New York
Santeramo FG (2018) I learn, you learn, we gain. Experience in crop insurance markets. Applied Economic Perspectives and Policy. In press
Scaringelli MA, Giannoccaro G, Prosperi M, Lopolito A (2016) Adoption of biodegradable mulching films in agriculture: is there a negative prejudice towards materials derived from organic wastes? Ital J Agron 11:90–97
Tepic M, Trienekens JH, Hoste R, Omta SWF (2012) The influence of networking and absorptive capacity on the innovativeness of farmers in the Dutch pork sector. Int Food Agribusiness Manag Rev 15:1–34
Tey YS, Brindal M (2012) Factors influencing the adoption of precision agricultural technologies: a review for policy implications. Precis Agriculture 13:713–730
Valente T (1995) Network models of the diffusion of innovations. Hampton Press, Cresskill
Valente TW, Davis RL (1999) Accelerating the diffusion of innovations using opinion leaders. The Annals of the American Academy of Political and Social Science 566:55–67
Van Eck PS, Jager W, Leeflang PS (2011) Opinion leaders' role in innovation diffusion: a simulation study. J Prod Innov Manag 28:187–203
Van Rijn F, Bulte E, Adekunle A (2012) Social capital and agricultural innovation in sub-Saharan Africa. Agric Syst 108:112–122
Wasserman S, Faust K (1994) Social network analysis: methods and applications. University Press, Cambridge
Watts DJ (2002) A simple model of global cascades on random networks. Proc Natl Acad Sci 99:5766–5771
Wilensky U. (1999) NetLogo. http://ccl.northwestern.edu/netlogo/. Center for Connected Learning and Computer-Based Modeling Northwestern University Evanston
We gratefully acknowledge the comments provided by G. Giannoccaro on a previous version of the present research.
We have not received funding for this research.
Data are available from the authors upon request.
Department of the Sciences of Agriculture, Food and Environment, University of Foggia, Via Napoli n.25, 71122, Foggia, Italy
Angela Barbuto, Antonio Lopolito & Fabio Gaetano Santeramo
Angela Barbuto
Antonio Lopolito
Fabio Gaetano Santeramo
AL has coordinated the work and is responsible for the "Introduction," "The diffusion model," and "Simulation and results" sections. FS is responsible for the "Concluding remarks" section. AB has provided excellent research support. All authors read and approved the final manuscript.
Correspondence to Fabio Gaetano Santeramo.
Barbuto, A., Lopolito, A. & Santeramo, F.G. Improving diffusion in agriculture: an agent-based model to find the predictors for efficient early adopters. Agric Econ 7, 1 (2019). https://doi.org/10.1186/s40100-019-0121-0 | CommonCrawl |
General description of surfaces with zero Gaussian curvature
Suppose the function $g(s,t)$ satisfies the partial differential equation $g_{ss} g_{tt} - g_{st}^2=0$. This may be interpreted as saying that the graph surface has zero Gaussian curvature. I am looking for a general local solution of this equation. Are there any results for it?
dg.differential-geometry differential-equations
asked Jan 3 '15 at 8:18
A surface of Gaussian curvature zero is locally isometric to the plane, and is said to be developable. A complete surface of Gaussian curvature zero in Euclidean three space is a cylinder (where a cylinder means the surface generated by the lines parallel to a given axis passing through a fixed curve in the subspace perpendicular to the axis; the plane is a cylinder in this sense when the curve is itself a line). This is due to Pogorelov, although it is usually attributed to Hartman-Nirenberg (they attribute it to Pogorelov). It can be found proved as Theorem 1 in W. Massey's Surfaces of Gaussian curvature zero in Euclidean $3$-space or in part II of Hartman and Nirenberg's On spherical image maps whose Jacobians do not change sign (Hartman and Nirenberg prove a more general result, for hypersurfaces in Euclidean space of arbitrary dimension). The papers of Massey and Hartman-Nirenberg contain more detailed results applicable to the incomplete case. Corollaries $3$ and $3^{\prime}$ of Hartman-Nirenberg show that every point of a $C^{2}$ surface of Gauss curvature zero has a neighborhood which admits a parameterization of the form $x = a(u)v + b(u)$ where $(u, v)$ varies over some simply-connected plane domain and $a$ and $b$ take values in $\mathbb{R}^{3}$. (see also chapter IX of the English translation Extrinsic Geometry of Convex Surfaces of a book of Pogorelov.)
Dan Fox
Thanks. Well, I know this paper. Actually I'm looking for a local solution. I can reduce this equation to the first-order equation $g_s = h(g_t)$ and then, using the technique for solving first-order PDEs, write the general solution in parametric form. But it is rather huge and cumbersome... – user47116 Jan 3 '15 at 13:00
@user12355: I added some remarks to the answer to address the local case. – Dan Fox Jan 3 '15 at 13:19
Thanks. The parametrization is very useful for me. But actually the functions $a(u)$ and $b(u)$ cannot be arbitrary. Right? – user47116 Jan 16 '15 at 9:31
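As a concrete sanity check of the developable description given above, one can verify symbolically that any graph which is constant along a fixed direction (a cylinder-type graph, $B(x,y)=f(ax+by)$, one convenient special family rather than the general solution) satisfies the equation from the question. The following SymPy sketch prints 0.

import sympy as sp

x, y, a, b = sp.symbols('x y a b')
f = sp.Function('f')

# A cylinder-type graph: constant along the direction (b, -a)
B = f(a*x + b*y)

hessian_det = sp.simplify(B.diff(x, 2) * B.diff(y, 2) - B.diff(x, y)**2)
print(hessian_det)  # 0, i.e. B_xx * B_yy - B_xy**2 vanishes identically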
I think this article might be helpful (see section 3, http://arxiv.org/pdf/1402.4751v2.pdf), also see this one (http://arxiv.org/abs/1205.7018).
Let me explain very briefly what is happening there.
Let us use the following notations: $g(x,y)=B(x,y)$ and $B$ satisfies $B_{11}B_{22}-B_{12}^{2}=0$.
Take some suitable space curve $\gamma(t) = (f_{1}(t),f_{2}(t),f_{3}(t)) : I\to \mathbb{R}^{3}$ and let us require that $B(f_{1}(t),f_{2}(t))=f_{3}(t)$ (this is the boundary condition for your function $B(x,y)$ --- we are assuming that there is a domain $\Omega \subset \mathbb{R}^{2}$ where the function $B$ is defined and $B$ has prescribed boundary data on $\partial \Omega$. By the way, in this case $(f_{1}(t),f_{2}(t))$ parametrizes $\partial \Omega$ and $f_{3}$ is your boundary data for $B$).
Then this already gives you one equation after differentiating $B(f_{1}(t),f_{2}(t))=f_{3}(t)$ in variable $t$: $$ B_{1}f'_{1}+B_{2}f'_{2}=f'_{3} $$ where $f'_{j}=\frac{df_{j}}{dt}$.
This information is of little use unless you exploit the fact that $B$ satisfies the homogeneous Monge–Ampère equation (i.e., the fact that it has zero Gaussian curvature). It means (thanks to Pogorelov) that you can draw a family of line segments close to the boundary of $\Omega$ which start from the curve $(f_{1}(t),f_{2}(t))$ and go inside $\Omega$. Moreover, the function $B$ is linear along these segments and the gradient of $B$ is constant along these segments. (A typical picture of this family of segments was shown in a figure in the original answer, omitted here.)
This picture near the boundary of $\Omega$ is accurate if things are not degenerate (i.e., the torsion of $\gamma$ does not vanish on any subinterval of $I$); then the domains where $B$ is linear cannot touch $\partial\Omega$ on a thick interval. However, they can touch $\partial \Omega$ at a finite number of points (or a countable number of points if the torsion of $\gamma$ changes sign infinitely many times).
In other words, this means that the gradient of $B$ in $\Omega$ can be parametrized by a single parameter $s$, i.e., $\nabla B = (t_{1}(s),t_{2}(s))$ where $s \in I$.
So our equation $B_{1}f'_{1}+B_{2}f'_{2}=f'_{3}$ can be rewritten as follows $$ t_{1}(s)f'_{1}(s)+t_{2}(s)f'_{2}(s)=f'_{3}(s). $$ Of course this information is not enough to find $(t_{1}(s),t_{2}(s))$. But there is one more equation which you can also obtain, namely: $$ t_{1}'(s)\cos(\alpha(s))+t'_{2}(s)\sin(\alpha(s))=0, $$ where $(\cos(\alpha(s)),\sin(\alpha(s)))$ is the direction of the line segment starting at point $(f_{1}(s),f_{2}(s))$ i.e., unit vector, starting at point $(f_{1}(s),f_{2}(s))$ and going inside $\Omega$ along the line segment, along which $B$ is linear.
These two equations $$ t_{1}(s)f'_{1}(s)+t_{2}(s)f'_{2}(s)=f'_{3}(s);\\ t_{1}'(s)\cos(\alpha(s))+t'_{2}(s)\sin(\alpha(s))=0, $$ allow you to find $(t_{1}(s),t_{2}(s))$ up to a constant $C$ which you still have to choose later in order to glue these local pieces and to get some global picture for $B$. Thus you find $B$ $$ B(x,y)=f_{3}(s)+t_{1}(s)(x-f_{1}(s))+t_{2}(s)(y-f_{2}(s)) \quad(*), $$ where $(x,y)$ belongs to the line segment starting at point $(f_{1}(s),f_{2}(s))$.
For example if $\gamma(t)=(t,g(t),f(t))$ then $$ t_{2}(s)=C\exp\left(-\int_{s_{1}}^{s}\frac{g''(r)}{K(r)}\cos(\alpha(r))dr \right)+\frac{f''(r)}{g''(r)}-\int_{s_{1}}^{s}\left[ \frac{f''(y)}{g''(y)}\right]'\exp\left(-\int_{y}^{s}\frac{g''(r)}{K(r)}\cos(\alpha(r))dr \right)dy $$ where $K(s)=g'(s)\cos(\alpha(s))-\sin \alpha(s)$, and you can also notice that the expression $\left[ \frac{f''(y)}{g''(y)}\right]'$ coincides up to a curvature factor of $\gamma$ with the torsion of $\gamma$ which further plays a crucial role.
By the way, the equation $t_{1}'(s)\cos(\alpha(s))+t'_{2}(s)\sin(\alpha(s))=0$ can also be obtained by differentiating (*) with respect to $x$ and treating $s$ as a function $s(x,y)$.
Now there are a lot of questions left:
How do you find this family of segments (equivalently, how do you find the directions $(\cos(\alpha(s)),\sin(\alpha(s)))$)?
How do you glue a global picture?
Under what conditions is this rough argument justified?
Some partial answers are given in the articles that I mentioned above.
Paata Ivanishvili
Why do spaceships heat up when entering earth but not when exiting?
Recently I read up on spacecraft entering Earth's atmosphere using a heat shield. However, when exiting Earth's atmosphere, a spacecraft does not heat up, so it does not need a heat shield at that point. Why is this so? I know that when entering Earth's atmosphere, a spacecraft heats up due to various effects like gravity, drag and friction acting upon it. This is the reason why a spacecraft entering Earth's atmosphere needs a heat shield. Why wouldn't an exiting spacecraft experience this too? Any help would be appreciated.
temperature acceleration drag rocket-science space-mission
QuIcKmAtHs
When taking off, the engine exhaust gets quite hot. – Thorbjørn Ravn Andersen Dec 31 '17 at 11:11
I recommend a great physics simulator called Kerbal Space Program which does a great job of simplifying a lot of the concepts behind orbital mechanics. Re-entry conditions, for example, become very clear after just a few failed attempts. – Adam Naylor Dec 31 '17 at 11:33
Spacecraft do heat up during launch. That's why rockets have payload fairings, which function in part as a heat shield. That's also one of the key challenges during launch: getting past maximum dynamic pressure, or max Q for short. (Not to be confused with the band Max Q, for which the membership requirements are an amateur level of musical talent and a professional chance of passing through max Q). – David Hammen Dec 31 '17 at 14:51
Note that this is a design decision - they don't have to, it's just very fuel efficient. With an efficient enough engine you could stop your horizontal motion using your engines and then you wouldn't have to slam into the atmosphere at orbital velocities. On their way up, rockets aren't nearly as fast for the same altitude as on the way down. Also, on the way up most rockets have sharp nosecones, while on the way down you want a very blunt profile (more drag, more deceleration, less heating for the same velocity loss). – Luaan Jan 1 '18 at 0:13
@AdamNaylor interesting, I have downloaded and tried out KSP. Really good recommendation. – QuIcKmAtHs Jan 1 '18 at 12:38
Aerodynamic heating depends on how dense the atmosphere is and how fast you are moving through it; dense air and high speed mean more heating. When the rocket is launched, it starts from zero velocity in that portion of the atmosphere which is densest and accelerates into progressively less dense air; so during the launch profile the amount of atmospheric heating is small. Upon re-entry, it is descending into the atmosphere starting not at zero velocity but at its orbital velocity, and as it falls towards the earth it is picking up speed as the radius of its orbit decreases. By the time it runs into air dense enough to cause heating it is moving at tremendous speed and it gets very, very hot.
niels nielsen
Comments are not for extended discussion; this conversation has been moved to chat. – rob♦ Jan 2 '18 at 1:36
And for what it's worth, this video: youtube.com/watch?v=7cvYIHIgH-s actually shows that you do indeed get some (just not dramatic) heating on atmospheric exit, and not just entry. And this is a highly, highly sub-orbital flight; actually, technically it doesn't even "exit" the atmosphere at all, as it does not surpass 100 km altitude (the conventional boundary to indicate where the atmosphere "ends" for spaceflight purposes). But that odd gooey stuff that shows up is actually smeared melted plastic of the camera housing, due to heat built up by passage through the atmosphere. – The_Sympathizer Jan 2 '18 at 7:47
(relevant bit begins @ approx. 18 secs in. This launch was done with a purely amateur rocket, not even a private corporation like SpaceX! Props to the fine people for building and firing this system. PS. Top altitude is 36.9 km. Thus it's over 1/3 of the way up the atmosphere, to the boundary of space.) – The_Sympathizer Jan 2 '18 at 7:48
no no, you are right - the air gets compressed into a shock wave which gets very hot, and the rocket next to it gets heated by the shock; in the case of an ablative heat shield, the superhot air melts off the shield material and scours it away. Sloppy terminology on my part. – niels nielsen Jan 2 '18 at 8:34
The Sprint anti-ballistic missile launches extremely fast and heats up a lot during launch, as it was designed to reach an altitude of 18 miles in about 15 seconds. It reaches Mach 10 in 5 seconds and requires an ablative heat shield to protect it from the heat (around 3400°C). It also forms a plasma sheath like a re-entry vehicle and needs special transmitters to get radio to it during ascent (if it works correctly, there is no descent!) – Inductiveload Jan 3 '18 at 12:04
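A rough way to put numbers on the answer above: stagnation-point heating scales roughly with air density times the cube of speed, and density falls off roughly exponentially with altitude. The Python sketch below uses that proxy with illustrative altitude/speed pairs for an ascent and an entry trajectory; the scale height and the sample points are our own assumptions, not data for any particular vehicle.

import math

RHO0, H_SCALE = 1.225, 7500.0  # sea-level density (kg/m^3) and an approximate scale height (m)

def heating_proxy(altitude_m, speed_ms):
    # Proxy for stagnation-point heating, proportional to rho * v**3 (arbitrary units)
    rho = RHO0 * math.exp(-altitude_m / H_SCALE)
    return rho * speed_ms ** 3

# Illustrative (altitude, speed) pairs: slow while low on ascent,
# near orbital speed when an entry trajectory reaches the same altitudes
ascent = [(1000, 150), (10000, 500), (30000, 1200), (60000, 2500)]
entry = [(60000, 7600), (50000, 7000), (40000, 5000), (30000, 2500)]

for (h_a, v_a), (h_e, v_e) in zip(ascent, entry):
    print(f"ascent {h_a:>6} m at {v_a:>5} m/s -> {heating_proxy(h_a, v_a):12.0f}")
    print(f"entry  {h_e:>6} m at {v_e:>5} m/s -> {heating_proxy(h_e, v_e):12.0f}")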
Recently I read up on spacecraft entering Earth's atmosphere using a heat shield. However, when exiting Earth's atmosphere, a spacecraft does not heat up, so it does not need a heat shield. Why is this so?
A spacecraft on launch does heat up, just not to the degree that it does on reentry. And it heats up for the same reason--atmospheric drag, which includes adiabatic air compression and atmospheric friction. The key difference between launch and reentry is that they are two different flight profiles meant to optimize the drag variable (less drag on launch, more drag on reentry). (This is a simplified statement to address the OP's question regarding vehicle heating--real rocket launch and reentry dynamics are multi-variable optimizations.)
On launch the rocket spends the initial portion of flight attempting to gain altitude to go into the upper atmosphere where the air is less dense. Then it switches into a lateral velocity regime to gain the necessary lateral velocity to obtain orbit. The rocket profile is attempting to minimize drag as it is a waste of fuel. Less drag = less heating.
Look at the launch profile below. You see the initial moments of the launch the rocket does not move downrange much, relative to its altitude. It is in the later portions of flight that it begins to travel laterally once it has punched out of the dense, lower portion of the atmosphere. You can even see that the maximum aerodynamic forces, Max-Q (drag), are experienced very low in the atmosphere, mostly because of the density of the air.
I know that when entering Earth's atmosphere, a spacecraft heats up due to various effects like gravity, drag and friction acting upon it.
On reentry the flight profile is optimized to experience increased drag while maintaining a survivable level of deceleration and thermal load. They do this because the vehicle needs to shed orbital velocity (on the order of 16,000 mph) and the cheapest way to do this is to let atmospheric drag slow you down. The technique is called aerobraking. Because they have designed the flight profile to generate increased drag (as compared to launch) and because the velocity with which it penetrates the atmosphere, it experiences much greater heat build up than on launch. More drag, more speed = more heating.
The generated heat simply comes from the conservation of energy. The vehicle's velocity is shed as heat via ablation (of the reentry shield), adiabatic air compression, and other effects. The kinetic energy of the vehicle is transformed into thermal energy, resulting in the loss of velocity. Just like in your car, when it comes to a stop, the brakes will have become very hot because they have converted the KE of the vehicle into thermal energy.
Now look at the reentry profiles below. You notice that they have a near level part in the middle. That is where the aerobraking maneuver is performed.
If they did not use aerobraking, then the vehicle would have to carry enough rocket fuel to fire against the direction of motion until the relative velocity was sufficiently slow to come down without heating and/or vehicle disintegration. So this method of landing without aerobraking is possible (it's how we land on airless moons), but extremely inefficient.
James
This has the technical details and profile graphs that the question really needs. – Stilez Jan 1 '18 at 11:44
@rickboender you are forgetting the curvature of the earth. Recall, too shallow an entry and the thing actually skips back out into space because the planet curves away before it can catch the aero-braking atmosphere. The trick is to catch it at the right point in the curve. The velocity does not change much because the slow-down burn is small and just changes the orbit path. The craft is still at an orbital velocity. – Trevor_G Jan 1 '18 at 22:43
@rickboender because the flight path is tangential to the curvature of the earth at the aero-braking altitude. Loss of apparent altitude is a consequence of trajectory, not that the craft is falling or being accelerated downward. The inward forces are balanced by the craft wanting to fly out... it's a complicated bit of calculus. – Trevor_G Jan 2 '18 at 16:32
@Trevor: Objects in an orbit also gain speed when their altitude decreases. Most objects follow a circular orbit, therefore their speed and altitude don't change, but if the orbit is not circular, their speed changes. The formulas are the same as for falling objects, because both situations are governed by conservation of energy, and there is only potential and kinetic energy. The only thing that is different is the direction of motion; objects in orbit will always follow their orbit. – Orbit Jan 2 '18 at 17:36
Re "The key difference between launch and reentry is that they are two different flight profiles ...": This part is correct. Re "... meant to optimize the drag variable (minimum drag on launch, maximum drag on reentry)": This part is incorrect (or perhaps a KSP-inspired oversimplification). Atmospheric drag is not as significant as are gravity losses for launch from the Earth (the situation is reversed on Kerbin). Launch from the Earth's surface to LEO is a complex multivariable optimization problem with constraints, in which atmospheric drag losses are but one part of the overall picture. ... – David Hammen Jan 3 '18 at 12:33
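To attach numbers to the conservation-of-energy argument in the answer above (our own back-of-the-envelope figures, not the answerer's): the kinetic energy per kilogram at low-orbit speed is far more than a metal airframe could store as heat, which is why almost all of it has to be dumped into the surrounding air.

v_orbit = 7800.0                   # m/s, roughly low Earth orbit speed
ke_per_kg = 0.5 * v_orbit ** 2     # J/kg of kinetic energy to shed
c_aluminium = 900.0                # J/(kg*K), approximate specific heat of aluminium

temp_rise_if_stored = ke_per_kg / c_aluminium
print(f"kinetic energy: {ke_per_kg / 1e6:.1f} MJ per kg")                          # about 30 MJ/kg
print(f"temperature rise if stored in the airframe: {temp_rise_if_stored:.0f} K")  # tens of thousands of K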
Velocity and efficiency.
An object trying to get into orbit will travel in a pretty steep parabola. The longer you spend in the atmosphere the more energy you lose to drag, and the more you lose to drag the more fuel you need. So a solid strategy for achieving orbit is to get to your target orbit with a minimal curve and then burn until you have the right lateral velocity. Part of the reason for this is that increasing your orbital velocity affects your altitude 180 degrees away, on the opposite side of your orbit.
An object that is deorbiting will be losing velocity (urg, see edit note 1) and you generally want to use the atmosphere to help you brake, since fuel for braking is the most expensive fuel on the trip. That means you're entering the atmosphere with a lot of your orbital velocity left, and you need at least 8km/s to stay in a low orbit. When you're travelling that fast the air simply can't get out of your way quickly enough, and any time you compress something you also heat it up.
Or if you want a simpler answer: Heating up due to atmosphere costs you energy, you want to avoid that as much as possible when going up and take advantage of it when coming back down.
Sorry if this answer sounds disjointed. https://what-if.xkcd.com/58/ goes into a lot more detail than I can here, and with considerably better authority than I have on the subject. You might also want to have a read through https://what-if.xkcd.com/24/ and https://what-if.xkcd.com/28/ for further information on launch and re-entry profiles respectively.
Edit Note 1: I suppose I should be clearer on this... an object trying to deorbit is trying to lose velocity but it's not accurate to say it is decelerating the whole time.
During the first part of a deorbit the object is decreasing its acceleration while its velocity is increasing, it doesn't start properly decelerating until it's fairly suborbital. That's probably going to be around the point where aerobraking is doing its job though, somewhere in the area of 40-60km up. Exactly where the peak velocity is depends on a lot of things, including the object's terminal velocity and how much fuel you have to use up.
The point I was trying, badly, to make is that an object that wants to deorbit also wants to lose velocity to make that happen in a less destructive way.
Kaithar
Nice use of XKCD references. – jamescampbell Dec 31 '17 at 11:53
@jamescampbell Randall is always my first stop for questions about taking things to the extreme :) – Kaithar Dec 31 '17 at 12:18
Saying that when you compress something you also heat it up sounds a little off. You are 100% correct, but it sounds funny. When you compress something it heats up, it's not you heating it up. It makes it sound like you are adding energy; like you are doing work. When you compress a gas no work is done. The same energy in a smaller volume means the gas MUST be at a higher temperature. Heating on entering an atmosphere is mostly an adiabatic process. There is some frictional heating, but the energy of friction mostly goes into slowing down the object falling out of orbit. – Noah Spurrier Jan 2 '18 at 20:54
"When you compress a gas no work is done." <--- As I understand it, Thermodynamics would disagree. Compressing a gas increases local order and usable energy, ergo work is done and energy is required. If compressing a gas were possible without adding energy then you could build a free energy device based on compressed air and a turbine. – Kaithar Jan 4 '18 at 12:21
On launch, the change in velocity is provided by the rocket engines. As the rocket flies, it is throwing away mass in the form of rocket exhaust -- typically more than 90% of the initial mass of the rocket is propellant. Because thrust is remaining nearly constant while mass is decreasing, acceleration increases over the course of the launch¹, so much of the speed increase occurs late in flight, when the rocket is outside of the densest part of the atmosphere, so much less compression heat is generated (though David Hammen is correct that the payload fairing does require significant attention to thermal design). The acceleration to orbital speed occurs over a fairly long period of time - typically 10 to 15 minutes depending on the design of the launcher.
On reentry, the change is velocity is provided by air resistance; this obviously can't occur until the re-entering spacecraft is in relatively dense atmosphere. Once it begins to decelerate significantly, there's a positive feedback effect; as the craft's horizontal velocity decreases, it loses altitude more rapidly², bringing it into denser air, which decelerates it still more rapidly. Because of this, the vast majority of the deceleration occurs over a very short period of time, about two minutes. All the kinetic energy associated with orbital velocity gets converted to heat in that period.
¹ Most real rockets are multistage, which complicates this, but it's still true to rough approximation.
² Complicated in real-world craft by lift effects, which cancel out some of the altitude loss or even reverse it in skip-entry trajectories, allowing the reentry phase to be extended in time, reducing the g-force on the crew and peak temperature of the airframe, but extending the total duration of heating and stress.
Russell Borogove
There is theoretically absolutely no need to heat up a spacecraft.
Essentially we can move the spacecraft like a feather into orbit, vertically up and down...theoretically. The other answers do not say this explicitly.
But there is a very ugly problem for engineers: the Tsiolkovsky rocket equation, $\Delta v = v_e \ln \frac{m_0}{m_f}$, combined with the very deep gravity well of the Earth.
$v_e$ is limited by the propellants we are using. We are really using nearly optimal chemical propellants with hydrogen/oxygen (kerosene for the lowest stage), so no real optimization possible.
$\ln \frac{m_0}{m_f}$ is also optimized as far as possible: rockets are stripped down to the absolute bare minimum, but a ratio of 10:1 is bordering on technical limits.
Despite every optimization this is still not enough to leave Earth.
So we need several stages to achieve orbit. We can finally get off the Earth, but... how do we get back? We would need fuel to slow us down again, but we have hardly any fuel to spare.
So the engineers decided to use atmospheric entry to slow down the spaceship with a heat shield. A softer method is aerobraking, reducing the speed with several passes through the atmosphere. If we had a torchship not bound by these rocket limitations, that would be a really nice thing, because we wouldn't need the dangerous and, in principle, unnecessary reentry heating.
Thorsten S.
$\begingroup$ "Essentially we can move the spacecraft like a feather into orbit, vertically up and down...theoretically." <-- Uh, if I'm reading correctly, you're pointing out that a craft entering orbit can do so by using a thrust force only slightly greater than gravity? I'd have thought that was obvious. Trying to apply that to re-entry has issues though, you'd have to expend propellant to achieve a GEO orbital profile at a re-entry altitude, which is as insane as it sounds. $\endgroup$ – Kaithar Jan 4 '18 at 12:28
$\begingroup$ So obvious like that things in space are automatically weightless, that rockets can overtake other rockets on the same height level (You Only Live Twice), spaceships 2D fights (Star Trek)...? You are right that slowing down chemical rockets is insane, but Project Orion-like nuclear pulse spaceships can do that without problems, they are that powerful. $\endgroup$ – Thorsten S. Jan 5 '18 at 16:43
$\begingroup$ "that rockets can overtake other rockets on the same height level" <-- um, I'm not sure what you're trying to say here, but two rockets at the same altitude can be travelling at different velocities if they have different apsis... you can have both at the same periapsis and different apoapsis for the two orbits, resulting in one passing the other. Nuclear pulse drive doesn't solve the main problem with low speed re-entry: you have to drop orbital momentum while maintaining altitude. I'd guess re-entry this way would need a significant fraction of the fuel needed for getting to orbit. $\endgroup$ – Kaithar Jan 8 '18 at 20:26
While it's already been correctly answered, a suggestion to get a better picture of it: The game Kerbal Space Program. While it certainly isn't a perfect simulation of space flight it's good enough to give you a pretty good idea of most of it.
Turn too early and your rocket overheats and blows itself to bits. Even flying what MechJeb (a very popular mod) says is an optimum trajectory you get an appreciable amount of heating as it goes horizontal in the fringes of the atmosphere.
While this might seem wasteful, some experimenting with launching the same rocket over and over with different parameters shows that the heating costs you less fuel than climbing higher first does. The smooth front of the rocket is a big factor here: if you're trying to fly some abomination that doesn't present a smooth face to the airstream, you need to go farther out before you turn. (Unfolding is only effective at the level of individual parts; combine that with needing a large wheelbase to make a reasonably stable rover on low-g worlds, and you can end up with rovers you can't fit in a fairing.)
Loren Pechtel
Rather useful, I must admit. – QuIcKmAtHs Jan 1 '18 at 12:39
Spacecraft do indeed heat up as they leave the atmosphere. They suffer aerodynamic heating just like everything else. However, there is a major difference: direction. As you are accelerating upwards, you are traveling through thinner and thinner atmosphere, faster and faster. These partially cancel each other out, keeping your heating reasonable. On the way down, you are traveling into thicker and thicker atmosphere, and must dissipate the heat as you go.
If you were, say, fired from a railgun, you'd experience the greatest heating at the start, where you are going very fast at low altitudes (thick atmosphere).
If you feel the reentry should be more symmetric with the launch in terms of heating, consider this: on the bottom of the rocket being launched is a great big ball of angry fire that is at least as hot as the reentry.
Cort Ammon
When an object orbiting the Earth enters the descending path of re-entry it has huge speed, hence huge kinetic energy, and it also has potential energy of approximately m·g·h. Because 100 km is a small fraction of the roughly 6400 km radius of the Earth, we can approximate the potential energy by this simple expression. For an orbit of 100 km altitude this speed is approximately 8 km/s.
So the spaceship's energy is $E = \frac{1}{2}mV^{2} + mgh$, where $m$ is the orbital mass and $h \approx 100$ km.
And almost all of this energy, minus the small speeds of the order of 0.1 km/s at which the parachutes are deployed, must be dissipated by friction with the Earth's atmosphere! To make matters worse, the density of the atmosphere is not significant until a very thin layer of air that starts at about 50 km altitude and gradually thickens toward sea level. This huge friction on the heat shield of the spaceship over a very short period of time creates extreme heat and very high temperatures!
However, during lift-off and climb, the rocket and spaceship are initially traveling through the dense strata of air very slowly, and as the speed increases the air thins out correspondingly, hence the friction is kept to tolerable levels!
kamran
I don't think that anybody has yet mentioned the great importance of aerodynamic lift. The space shuttle is a winged vehicle that can glide, and even though its lift/drag ratio is very small (less than 1.0) it can achieve a very flat glide trajectory as it decelerates. In this way it can burn off a lot of its speed while still in the upper part of the atmosphere and be traveling much slower when it hits the denser air. Reentry without lift is called ballistic. It creates very much larger g-forces and heating rates.
Philip Roe
The spacecraft taking off is already outside the stratosphere when it reaches the speed that re-entry spacecraft possess when they enter the stratosphere. The stratosphere only extends for about 100 miles above sea level.
Since a rocket takes off vertically it will clear the stratosphere in less than 8 minutes and way before it has the speed to cause any appreciable friction in said part of atmosphere.
The re-entry spacecraft, on the other hand, is using the atmosphere to slow down from orbital velocity. It needs to slow from 8 to 10 km/sec to a much slower speed in which to either deploy parachute or land on an extended runway. This is a very significant reduction in speed and the friction in the atmosphere is what accomplishes this reduction. Since friction causes heat and it will have to spend considerable time in atmosphere to work off the speed, an evaporative tile heat shield is necessary.
0tyranny 0poverty
Interpreting risk factors for truck crash severity on mountainous freeways in Jiangxi and Shaanxi, China
Yonggang Wang ORCID: orcid.org/0000-0002-9365-18511,
Ye Luo1 &
Fayu Chen2
The occurrence and severity of truck crashes generally involve complex interactions among factors correlated to driver characteristics, vehicle attributes, roadway geometry and environment conditions. Thus, the elucidation of the significance of these potential contributory factors is critical when developing safety improvement countermeasures. To this end, data from a total of 1175 crashes involving at least one large truck and collected between 2010 and 2015 from two typical freeways in mountainous areas in Jiangxi and Shaanxi (China), were analyzed using a partial proportional odds model to determine the significant risk factors for injury severity of these crashes. Fourteen total explanatory variables, including the age of the driver, seatbelt status, number of vehicle involved, type of transport, freight conditions, brake system status, disregarding speed limit or not, following distance, horizontal roadway alignment, vertical roadway alignment, seasons, day of week, time of crash, and weather were found to significantly affect the severities of the truck crashes. In addition, old drivers, involvement of multiple vehicles, failure to wear seatbelts, overloading, speeding, brake failure and risky following behavior, curve section, seasons (summer, autumn and winter), nighttime period, and adverse weather conditions were also found to significantly increase the likelihood of injury and fatality crashes. Taken together, these findings may serve as a useful guide for developing legislation and technical countermeasures to ensure truck safety on freeways in mountainous regions, particularly in the context of a developing country.
Over the past two decades, China has experienced a dramatic increase in the use of large trucks — from about 11.3 million in 2007 to more than 21.7 million in 2016, i.e., a 92% increase, a large majority of which are used for long-distance commercial transport, according to China Statistical Yearbook - 2017. Likewise, the number of road crashes involving large trucks has also increased, and it has been reported that large trucks caused about 17.5% of total road motor vehicle crashes as well as 22% of total deaths annually in China, even though they account for only 7.8% of all registered motor vehicles [1]. In the UK, fatal road crashes involving heavy trucks, per 100 million vehicle-kilometers travelled, were approximately double those for passenger cars [2]. Given the human, social and economic costs, and the consequences of large truck crashes worldwide, it is thus necessary to determine the potential risk factors associated with these crashes in order to better understand how they occur and then establish suitable countermeasures.
Considerable research efforts have been devoted to investigate demographic characteristics of truck drivers, such as age, sex, driving experience, educational background, etc., in order to determine their relationship with the occurrence of crashes [1, 3,4,5]. Several recent studies have focused on the multiple occupation-related factors of truck drivers, particularly those who are frequently involved in long-distance transport under the extremely stressful conditions, including commercial transport [1, 3], license status [1, 3,4,5], continuous driving hours [6], shift patterns [7], rest-break duration [8], and overloading or improper loading [1, 3, 4], etc., which may directly or partially affect the probability of being involved in a crash. Additionally, truck drivers' risky driving behaviors, such as speeding [1, 3, 4, 6, 9,10,11,12,13,14], failure to wear a seatbelt [1, 3, 4, 15], following too closely [1, 3, 4, 10, 14], improper overtaking or lane changing [4, 10, 15, 16], inattention [10, 14], alcohol impaired driving [1, 3, 4, 6, 10, 12, 17], and fatigue driving [1, 10, 12, 14, 16], etc., have also been identified to have significant influence on the occurrence and injury severity of truck crashes. However, there is considerable difference in contributory variables between urban and rural areas [11, 16].
In addition, besides driver attributes, there are a wealth of factors associated truck crashes, such as vehicle attributes, road geometry and environment conditions [1, 3,4,5,6, 9,10,11,12,13,14,15,16]. Analysis of 10-year crash data collected between 1991 and 2000 from rural highways in Illinois (USA) showed that vehicle type and condition, roadway characteristics and conditions (e.g., sharp curve, steep grade, wet road surface, wide lane, wide/unprotected/painted median), environment conditions (e.g., fog/smoke/haze, severe cross wind, darkness light condition, rush hour, wet road surface) and accident characteristics had significant impact on the injury severities of truck drivers involved traffic accidents [13]. In another modeling study, speed limit and location type contributed significantly only to the frequency of truck crashes, while lighting status and terrain type were significant predictors of the severity outcome of such crashes [15]. After the analysis of 1787 truck crashes in Tennessee (USA), certain variables, such as posted speed limits, annual average daily traffic (AADT), lane width, degree of horizontal curvature, terrain type, median type, right side shoulder width, etc., were found to have significant effects on the likelihood of such crash occurrences [18]. Additionally, collision partner(s), existence of tunnels and bridges were also identified to be significantly correlated with the truck crash occurrence [1, 4, 5, 11, 12, 19].
In recent years, logit-based and ordered probability regression models have often been applied to analyze the injury severity of truck-involved crashes [1, 4, 6, 9, 11, 12, 14,15,16, 19]. However, these approaches assume that the effect of each parameter remains constant across all observations, which indicates that they cannot capture the potential correlation between truck crash severity outcomes and unobserved effects related to driver, vehicle, roadway, and environment conditions at the time of the crash. Therefore, using the reported truck crash data from two typical freeways in mountainous regions in Jiangxi and Shaanxi (China) over a recent 6-year period, the primary purpose of this study is to quantify the potential risk factors using a partial proportional odds (PPO) model with a logit function, which does not impose the parallel-lines assumption and allows the coefficients of one or more variables to differ across severity equations while the others remain the same for all equations [3, 5], so as to determine i) the risk factors contributing to the severity of truck crashes on mountainous freeways, and ii) the marginal effects of each explanatory factor. We anticipate that the findings reported here can be used to guide the development of legislation and technical countermeasures for truck safety on freeways in mountainous regions.
A total of 5194 police-reported traffic crashes between 2010 and 2015 involving either personal injury/fatality or more than ¥1000 property damage were originally selected from two freeway segments in mountainous areas in Jiangxi and Shaanxi (China), as shown in Fig. 1. Among these crash records, 1175 cases (22.62%) involved at least one large truck and thus were included in the final database. Driver privacy was protected by anonymizing the data.
Two mountainous freeway segments in Jiangxi and Shaanxi, China. a location of Jiangxi (A) and Shaanxi (B) in China mainland; b TG Freeway: a segment of Daguang Freeway G45 from Taihe Hub to Ganzhou Hub (K2916 + 390~K3044 + 169) in Jiangxi, China; c XH Freeway: a segment of Jingkun Freeway G5 from Hechizhai Interchange to Qipanguan Tunnel (K1102 + 608~K1463 + 451) in Shaanxi, China
A three-point ordinal scale was used to classify the severity of truck crashes: 1 = PDO (property damage only): no less than ¥1000 of damage to road facilities and vehicles, with at most negligible personal injuries; 2 = injury: at least one person injured and requiring medical treatment after the crash, but no person killed; 3 = fatality: at least one person killed immediately or dying within 30 days as a result of the crash. The distribution of the crash severity levels was as follows: PDO = 53.53%, injury = 32.85% and fatality = 13.62%.
The crash database constructed with information from the original police accident reports contains the additional information associated with the driver characteristics, vehicle attributes, and environment conditions, as shown in Table 1, in which driver characteristics include sex, age (young = younger than 30 years old; adult = 30–50 years old; and old = older than 50 years old), and seat belt use; vehicle attributes include number of vehicles involved, type of transport, conditions of freight transport, status of brake system, disregard of speed limits or not, and following distance; and environment conditions include seasons (spring = March to May; summer = June to August; autumn = September to November; winter = December to February), day of week (weekends / holidays = 17:00 Friday to 24:00 Sunday and public holidays in China; working days = 0:00 Monday to 16:59 Friday), time of crash (daytime = 6:00 ~ 18:00; nighttime = evening 18:00 ~ 24:00 and night 24:00 ~ 6:00), weather (fine = sunny and cloudy; adverse = rainy, snowy and foggy).
Table 1 Description of explanatory variables
Additionally, the database also contains the roadway horizontal (straight and curve) and vertical (level, upgrade, and downgrade) alignments, as shown in Table 1, which were extracted from the Google Earth map. If the crash occurred while the vehicle was travelling uphill or downhill along a grade segment, the contributory factor correlated with the roadway vertical alignment was coded as upgrade or downgrade, respectively; otherwise, it was coded as level.
As stated in the literature, the dependent variable (truck crash severity) used in this study represents an ordered outcome (i.e., PDO, injury and fatality), and therefore an ordered-response model is suitable for the analysis of such ordinal data.
Let j be the crash severity level (1 = PDO; 2 = injury; 3 = fatality). The probability of crash i having a severity level greater than j can then be specified through a conventional ordered logit or proportional odds (PO) model on a set of n independent explanatory variables [5] as follows:
$$ P(Y_i > j) = \frac{\exp(X_i^{\prime}\beta - \alpha_j)}{1 + \exp(X_i^{\prime}\beta - \alpha_j)}, \quad j = 1, 2, 3 $$
where Xi is an n × 1 vector of explanatory variables for crash observation i, β is an n × 1 vector of parameter estimates, and αj is the cut-off point for the jth threshold in the model.
Clearly, the unknown parameter vector β is assumed to remain constant across all severity levels for each variable, which is called the parallel regression or proportional odds assumption. Under this assumption, each variable in the model may only uniformly increase or decrease the probability of higher crash severities, but the assumption is often violated in real applications.
Since the parallel-lines assumption may be violated by only one or a few independent variables, a generalized ordered logit model, also known as the gamma parameterization of the PPO model with a logit function, can be formulated [3, 5] as follows:
$$ P(Y_i > j) = \frac{\exp(X_{1i}^{\prime}\beta_1 + X_{2i}^{\prime}\beta_2 - \alpha_j)}{1 + \exp(X_{1i}^{\prime}\beta_1 + X_{2i}^{\prime}\beta_2 - \alpha_j)}, \quad j = 1, 2, 3 $$
where the coefficient vector β1, associated with the explanatory variables X1i, is constant across all equations, while the coefficient vector β2, associated with the remaining variables X2i (m × 1, m ≤ n) that violate the proportional odds assumption, differs across severity levels j. The parameters β1, β2 and αj can be estimated via a user-written program (gologit2) in the Stata 14 software [3, 20] (StataCorp LLC., College Station, USA).
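To make the structure of the model concrete, the following minimal sketch computes the three category probabilities implied by the PPO specification above. The coefficient values, cut-offs, and the choice of one parallel and one non-parallel predictor are purely illustrative and are not the estimates reported in Table 2; for the non-parallel predictor, the sketch works directly with one coefficient per cut-point (i.e., the "panel" coefficients β1 and β1 + β2).

```python
import numpy as np

def ppo_probabilities(x1, x2, beta1, panel_coefs, alphas):
    """Category probabilities for a 3-level partial proportional odds (logit) model.

    x1, beta1   : predictors/coefficients satisfying the parallel-lines assumption
    x2          : predictors allowed to violate the assumption
    panel_coefs : one coefficient vector for x2 per cut-point
                  (first panel: PDO vs. injury+fatality; second: PDO+injury vs. fatality)
    alphas      : the two cut-off points
    Returns (P_PDO, P_injury, P_fatality).
    """
    # Cumulative probabilities P(Y > j), j = 1, 2, with a logit link
    exceed = []
    for coefs, alpha in zip(panel_coefs, alphas):
        eta = x1 @ beta1 + x2 @ coefs - alpha
        exceed.append(1.0 / (1.0 + np.exp(-eta)))
    p_pdo = 1.0 - exceed[0]
    p_injury = exceed[0] - exceed[1]
    p_fatality = exceed[1]
    return p_pdo, p_injury, p_fatality

# Hypothetical example: one parallel predictor and one non-parallel predictor
x1 = np.array([1.0])                      # e.g., a working-day indicator
x2 = np.array([1.0])                      # e.g., a seatbelt non-use indicator
beta1 = np.array([-0.5])                  # same coefficient at both cut-points
panel_coefs = [np.array([1.7]),           # coefficient at the first cut-point
               np.array([0.5])]           # coefficient at the second cut-point
alphas = [0.2, 1.5]                       # illustrative cut-offs
print(ppo_probabilities(x1, x2, beta1, panel_coefs, alphas))
```

Setting the two panel coefficient vectors equal recovers the ordinary PO model of the previous equation, and the three returned probabilities sum to one by construction.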
The PPO model was implemented to analyze the effects of the explanatory variables listed in Table 1 using the user-written Stata package gologit2, with p < 0.05 considered the level for statistical significance of the explanatory variables. Specifically, the parallel-lines assumption for each variable was tested with Wald tests to examine whether its coefficients differ across equations in the PPO model. Ultimately, the best-fit model is presented in Table 2 (Pseudo R2 = 0.636).
Table 2 Estimation results for the partial proportional odds model
The estimated PPO model had one β1 coefficient for each variable, one β2 coefficient for each variable violating the parallel-lines assumption, and two α coefficients reflecting the cut-off points. Fourteen explanatory variables in total, including the age of the driver, seatbelt status, number of vehicles involved, type of transport, conditions of freight, brake system status, disregarding the speed limit or not, following distance, horizontal roadway alignment, vertical roadway alignment, season, day of week, time of crash, and weather, were found to be significantly associated with truck crash severity. Additionally, 11 variables were found to violate the proportional odds assumption. The average marginal effects of each explanatory variable in the estimated PPO model, obtained via the delta method at the 95% confidence level, are presented in Table 3.
Table 3 Average pseudo-elasticities for the partial proportional odds model
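For a binary explanatory variable, the average marginal effects reported as percentages in Table 3 (often called pseudo-elasticities for indicator variables) can be understood as the sample-average change in the predicted probability of each severity level when that variable is switched from 0 to 1, with all other variables held at their observed values; the delta method then supplies the confidence interval around this average. With hypothetical notation, one common way to write this is

$$ \widehat{\mathrm{AME}}_k(x_m) = \frac{1}{N}\sum_{i=1}^{N}\left[\hat{P}\left(Y_i = k \mid x_{im} = 1\right) - \hat{P}\left(Y_i = k \mid x_{im} = 0\right)\right], \qquad k \in \{\mathrm{PDO}, \mathrm{injury}, \mathrm{fatality}\}, $$

where N is the number of crashes in the sample.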
Driver characteristics
Older truck drivers (est. = 0.844, p = 0.008) were found to be significantly and positively associated with collision severity (see Table 2). An increase in the injury (2.74%) and fatality (3.21%) probabilities, and a decrease in the PDO probability (5.94%) were observed in crashes involving older truck drivers, as shown in Table 3.
On the other hand, truck drivers' failure to wear a seatbelt had a significant and positive effect on collision severity, although this variable violated the proportional odds assumption. The β1 and β2 coefficients for truck drivers who failed to wear a seatbelt were 1.664 (p < 0.001) and − 1.177 (p = 0.003), respectively. Accordingly, its first-panel coefficient (i.e., PDO vs. injury + fatality) was 1.664, and the second-panel coefficient (i.e., PDO + injury vs. fatality) was 0.487. Thus, it can be concluded that truck drivers who fail to wear a seatbelt are likely to be involved in more injury crashes. Moreover, according to the marginal effects, the injury probability increased by 9.87%, while the PDO probability decreased by 11.72%, for truck crashes involving drivers not wearing seatbelts.
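As a worked check of how the two coefficient panels quoted above are formed in the gamma parameterization, the first-panel coefficient (PDO vs. injury + fatality) equals β1 and the second-panel coefficient (PDO + injury vs. fatality) equals β1 + β2:

$$ \beta^{(1)} = \beta_1 = 1.664, \qquad \beta^{(2)} = \beta_1 + \beta_2 = 1.664 + (-1.177) = 0.487. $$

The much larger first-panel coefficient means the variable mainly raises the odds of crossing the PDO/injury threshold, which is why it is interpreted as shifting crashes toward the injury category.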
Vehicle attributes
As shown in Table 2, the commercial transport status of the truck (est. = − 1.921, p < 0.001) had a significant negative effect on crash severity, which decreased the probabilities of injury and fatality crashes by 6.23% and 7.30%, respectively, but increased the probability of PDO crashes by 13.53%. Meanwhile, truck crashes involving multiple vehicles violated the proportional odds assumption. Since the first-panel coefficient (0.988) was larger than the second one (0.077), it was inferred that truck crashes involving multiple vehicles were likely to result in more injuries. In Table 3, a large increase in the injury probability (6.66%) and a comparable decrease in the PDO probability (6.96%) were observed for these truck crashes.
Truck overloading, brake failure, speeding, and unsafe following distance were found to be significantly and positively correlated with crash severity, but violated the proportional odds assumption. The first panel of coefficients for truck overloading and speeding behaviors was 2.089 and 1.436, respectively, and the corresponding second panel of coefficients was 3.573 and 3.109, respectively, indicating that crashes involving overloaded and speeding trucks were likely to result in more fatalities. As shown in Table 3, truck overloading and speeding behaviors increased the probability of fatality crashes by 13.58% and 11.81%, respectively, while the probability of PDO crashes was reduced by 14.71% and 10.11%, respectively. Similarly, the descending series of coefficients showed that brake failure (0.687 vs. -0.452) and unsafe following behavior (1.433 vs. 0.312) were likely to result in more injury crashes, which increased the probabilities of injury crashes by 6.55% and 8.90%, respectively, and reduced the probabilities of PDO crashes by 4.84% and 10.09%, respectively.
Road geometrics
The results presented in Table 2 reveal that the curve section (est. = 0.619, p = 0.006) was significantly and positively correlated with crash severity, which violated the proportional odds assumption. Thus, the first and second panels of coefficient of the variables were 0.619 and 1.993, respectively. Accordingly, it can be easily concluded that trucks at curve sections were likely to be involved in more fatality crashes. The marginal effects in Table 3 show that curve sections were associated with an increase of 7.57% in the probability of fatality crashes, as well as a decrease of 4.36% in the probability of PDO crashes.
Moreover, vertical alignment was naturally split into three categories, namely level, upgrade and downgrade, with level used as the reference category. As expected, the modeling result reveals that there was a significant difference between downgrade and level sections (est. = − 2.230, p < 0.001), but not between upgrade and level sections (est. = − 0.615, p = 0.087). The downgrade section violated the proportional odds assumption. Thus, our results showed that downgrade sections were associated with an increase of 15.70% in the PDO probability, whereas the injury and fatality probabilities decreased by 11.56% and 4.14%, respectively.
Environment conditions
Regarding the environment factors, as shown in Table 2, the summer season was a fixed parameter that had a significant and positive effect on truck crash severity (est. = 1.453, p < 0.001), increasing the likelihood of injury and fatality crashes by 4.71% and 5.52%, respectively, and decreasing the likelihood of PDO crashes by 10.23%. On the other hand, both autumn and winter violated the assumption, and their ascending series of coefficients ("autumn": − 0.966 vs. 2.087; "winter": 1.596 vs. 3.014) showed that truck crashes occurring on an autumn or winter day were likely to result in more fatalities, as shown in Table 3, which considerably changed the probabilities of certain crash severities ("PDO": 6.80% vs. -11.24%; "fatality": 7.93% vs. 11.45%).
Working days (0:00 Monday to 16:59 Friday) had a significant and negative influence on truck crash severity (est. = − 0.457, p = 0.022) and did not violate the proportional odds assumption; this variable reduced the chance of injury and fatality crashes by 1.48% and 1.74%, respectively, and increased the chance of PDO crashes by 3.22%.
As expected, both the nighttime period and adverse weather conditions violated the proportional odds assumption, and their increasing trend of coefficient panels ('nighttime': 1.484 vs. 2.393; "adverse weather": 1.563 vs. 2.478) indicates that truck crashes occurring during the nighttime period and under adverse weather conditions were likely to result in more fatalities, which increased the likelihood of fatality crashes by 9.09 and 9.41%, respectively, while the likelihood of PDO crashes was decreased by 10.45 and 11.01%, respectively.
Although Jiangxi and Shaanxi are considered to be distinct rural provinces, their traffic crashes have similar causes. Of particular importance in the current study is the identification of potential contributory factors (i.e., driver characteristics, vehicle attributes, road geometrics, and environment conditions) and the evaluation of their influence on the severity of traffic crashes involving trucks.
The most interesting findings in this study lie in the contributions of the risky driving behaviors of truck drivers to the severity of crashes. Speeding was found to be associated with more fatal crash outcomes, in accordance with previous findings [1, 3, 5, 12,13,14, 19]. Excessive speed clearly reduces the ability of truck drivers to react to unexpected road hazards ahead of time, especially under adverse weather conditions or during the period from midnight to dawn, which increases the chance of crashes. These findings may encourage stricter regulations that impose higher penalties on drivers who disobey speed limits.
Also, not keeping a safe distance from other vehicles was found to be significantly related to higher injury and fatality probabilities. This result is consistent with a previous finding in Taiwan [4]. However, it is inconsistent with those of other similar studies conducted in Jiangxi and Shaanxi, China [1, 3], suggesting that failure to keep a safe distance may be a problem specific to these two freeway segments in mountainous areas rather than a universal issue of the regional roadway networks. Notably, failing to wear a seatbelt was related to a higher injury probability, in line with previous findings [3, 10, 13, 19]. However, although seatbelt use significantly reduces the fatality probability in truck crashes, drivers of heavy trucks are often not willing to wear seatbelts [21]. This result also highlights the need for improving seatbelt use among truck occupants through education programs and the installation of seatbelt warning systems that remind both front-seat and rear-seat passengers to wear their seatbelts. In addition, laws requiring both front-seat and rear-seat occupants to use seat belts should be strictly enforced, and those who do not follow this law should receive harsh punishment.
In terms of the effects of driver characteristics, older truck drivers had a significantly increased probability of being involved in injury and fatality crashes. This finding is in good agreement with those of previous reports [22, 23]. One possible explanation for this result is that the concentration, cognitive ability, and reaction time required for safe driving begin to decline as truck drivers age, making them more likely to be involved in crashes. However, sex was not found to influence the chance of being involved in injury and fatality crashes, which is in accordance with our previous finding [3] and in discordance with the results of other studies [24, 25]. This may be due to the very low proportion of female truck drivers (6.30% of the total sample).
From the perspective of vehicle attributes, the involvement of multiple vehicles in crashes was likely to result in more injury outcomes, which is in line with previous studies [1, 4, 9]. A possible explanation for this result is that the rear-ending of small vehicles (such as passenger cars) by large trucks at high speed often results in severe injury outcomes due to their differences in structural integrity and size. This result also emphasizes the need for managing the daily transport of heavy, oversize and overweight trucks. The modeling results also showed that commercial transport trucks are statistically associated with lower injury and fatality probabilities, which is not in line with previous findings [1, 3, 4]. A possible explanation for this is that there is not enough variability in the original data due to the low percentage of commercial trucks (4.43% of the total sample).
On the other hand, vehicle overloading and brake failure situations significantly increased the injury and fatality probabilities, respectively, especially under adverse weather conditions. These results are in good accordance with previous local studies [1, 3], but in disagreement with the findings of the study in Taiwan [4]. This suggests that vehicle overloading and brake failure may represent major concerns specific for truck safety in mainland China, where drivers should be strictly required to comply with speed and load restrictions and to double-check their brake safety performance, especially before entering long, steep downhill gradients.
Regarding the road geometric effects, a higher probability of fatality crashes was significantly associated with the presence of curve sections, but unexpectedly not related to the grades, which is in contrast with previous studies [1, 3, 18]. A likely explanation is that truck drivers, especially experienced ones, realized the danger of driving on downgrade sections and drove more carefully at slower speeds. This finding also provides useful information for preventing severe crashes by avoiding small-radius curves when designing new freeways in mountainous regions. For the existing vertical grades along freeways in mountainous areas, traffic signs and markings are necessary to warn passing drivers of a steep downhill drive ahead.
With respect to the influence of environment conditions, our results also confirmed findings by previous studies indicating that the probability of more serious outcomes of truck crashes increases during summer and winter days and under adverse weather conditions. This is also consistent with numerous previous studies showing the strong relationship between season and weather conditions and crash severity [1, 3, 4, 10, 12, 16, 19, 23, 26]. Rainy weather typically occurs during the summer season, while foggy and snowy weather usually occurs in the cold months of late autumn and winter within the mountainous areas of China. Thus, the reduced visibility and the reduced coefficient of friction between the tire and the road increase the probability of severe crashes. In the summer months, on the other hand, there is a larger number of motor vehicles on the surveyed freeways in the mountainous regions, which increases the exposure of passenger cars and other small-sized vehicles in the mixed traffic flow. Accordingly, the local freeway management department should provisionally lower speed limits or close freeway entrances in mountainous areas as necessary under adverse weather conditions (i.e., slippery pavement, heavy rain or snow and low visibility).
Additionally, driving during the nighttime period also significantly increased the probability of fatality crashes. One reason for this increase is that commercial truck drivers often drive for long hours and at night. As a result, they are generally more susceptible to fatigue and sleepiness, which exhausts them more quickly and makes them significantly more prone to be involved in severe crashes, as found in our previous studies [3, 27, 28]. From a human perspective, however, truck drivers usually drive more cautiously in dark road conditions [9]. Consequently, the percentage of drivers involved in fatal crashes is often much greater than that in PDO crashes. Laws and regulations should therefore be enacted and strictly enforced to limit fortnightly driving hours and mandate minimum rest hours for truck drivers during the nighttime period, especially those engaged in long-distance commercial transportation, and offenders should be seriously punished.
The value of this research is that it examines the influence of potential risk factors on truck crash severity, as well as the marginal effects of each explanatory factor, by combining 1175 truck crash samples from two freeway segments in mountainous regions in Jiangxi and Shaanxi, China, and using a PPO model with a logit function. The results showed that overloading was the most important determinant of the severity level of truck crashes occurring within these two geographic regions. The age of the driver, seatbelt use, number of vehicles involved in the crash, type of transport, freight conditions, brake system status, disregarding the speed limit or not, following distance, horizontal roadway alignment, vertical roadway alignment, season, day of week, time of crash, and weather were also found to have marked effects on the level of truck crash severity. These findings can eventually be employed to promote the safe operation of trucks on freeways in mountainous areas in Jiangxi and Shaanxi, China.
However, this study is not without important methodological limitations. First, the crash samples were selected from only two freeway segments in Jiangxi and Shaanxi, China, and may not be representative of the overall traffic safety situation of freeways in mountainous areas in the entire country. Second, the original data may contain some incomplete, and possibly incorrect, information or may even be missing some information due to unreported crashes or injuries and errors incurred in manual data entry. Third, the characteristics of the original crash data were not fully exploited, and additional data processing approaches would facilitate more comprehensive and in-depth research. It is worth noting that the psychological state of the driver, driving habits, and smart techniques should be integrated into the data analysis and countermeasure suggestions in future research [29,30,31].
Numerous previous studies have reported that truck drivers worldwide are exposed to similarly heavy workload conditions and considerable risk of crashing, especially those engaged in night-shift work and long-distance transportation [1, 3, 11, 26]. The current findings are in good agreement with those reported in Illinois [13], Alabama [16], Tennessee [18], Colorado [26], and North Dakota [26] in the United States as well as in Egypt [6] and Pakistan [32]. In Belgium, truck drivers were found to suffer from a variety of sleeping problems and sleep disorders, and thus were more likely to get involved in crashes while driving for work [33]. In Norway and France, a higher risk of accidents involving truck drivers was reported to be significantly associated with psychoactive drug use [17, 34]. A study of 300 male truck drivers in the Rhône region of France showed that failing to wear an adequate seatbelt was one of the major factors contributing to their injury severities in traffic crashes [35]. A simulation of the A15 corridor in the Netherlands showed that large-scale truck platooning had an obvious impact on traffic flow efficiency and safety [36]. Since a large proportion of freeways in European countries such as Germany, Italy, Switzerland, and Luxembourg runs through mountainous areas, the proposed PPO model can be used to examine the relationship between driver, vehicle, roadway and environment variables and the injury severity of crashes involving large trucks in the mountainous areas of those countries. Evidently, the findings of this study provide important implications for worldwide decision-making in traffic infrastructure design and safety management for truck traffic on freeways in mountainous areas.
Chen, C., & Zhang, J. (2016). Exploring background risk factors for fatigue crashes involving truck drivers on regional roadway networks: A case control study in Jiangxi and Shaanxi, China. SpringerPlus, 5, 582. https://doi.org/10.1186/s40064-016-2261-y (12 pages).
Department of Environment, Transport and the Regions. (1998). Accidents Great Britain 1997 – The casualty report. London: Government Statistical Service.
Wang, Y., & Prato, C. G. (2019). Determinants of injury severity for truck crashes on mountain expressways in China: A case-study with a partial proportional odds model. Safety Science, 117, 100–107. https://doi.org/10.1016/j.ssci.2019.04.011.
Chu, H. C. (2012). An investigation of the risk factors causing severe injuries in crashes involving gravel trucks. Traffic Injury Prevention, 13(4), 355–363. https://doi.org/10.1080/15389588.2012.654545.
Ma, Z., Zhao, W., Chien, S. I., & Dong, C. (2015). Exploring factors contributing to crash injury severity on rural two-lane highways. Journal of Safety Research, 55, 171–176. https://doi.org/10.1016/j.jsr.2015.09.003.
Elshamly, A. F., El-Hakim, R. A., & Afify, H. A. (2017). Factors affecting accidents risks among truck drivers in Egypt. MATEC Web of Conferences, 124, 04009. https://doi.org/10.1051/matecconf/201712404009 (5 pages).
Di Milia, L. (2006). Shift work, sleepiness and long distance driving. Transportation Research Part F: Traffic Psychology and Behaviour, 9(4), 278–285. https://doi.org/10.1016/j.trf.2006.01.006.
Chen, C., & Xie, Y. (2014). The impacts of multiple rest-break periods on commercial truck driver's crash risk. Journal of Safety Research, 48, 87–93. https://doi.org/10.1016/j.jsr.2013.12.003.
Islam, M., & Hernandez, S. (2013). Large truck–involved crashes: Exploratory injury severity analysis. Journal of Transportation Engineering, 139(6), 596–604. https://doi.org/10.1061/(ASCE)TE.1943-5436.0000539.
Chang, L. Y., & Chien, J. T. (2013). Analysis of driver injury severity in truck-involved accidents using a non-parametric classification tree model. Safety Science, 51(1), 17–22. https://doi.org/10.1016/j.ssci.2012.06.017.
Khorashadi, A., Niemeier, D., Shankar, V., & Mannering, F. (2005). Differences in rural and urban driver-injury severities in accidents involving large-trucks: An exploratory analysis. Accident Analysis and Prevention, 37(5), 910–921. https://doi.org/10.1016/j.aap.2005.04.009.
Lemp, J. D., Kockelman, K. M., & Unnikrishnan, A. (2011). Analysis of large truck crash severity using heteroskedastic ordered probit models. Accident Analysis and Prevention, 43(1), 370–380. https://doi.org/10.1016/j.aap.2010.09.006.
Chen, F., & Chen, S. (2011). Injury severities of truck drivers in single- and multi-vehicle accidents on rural highways. Accident Analysis and Prevention, 43(5), 1677–1688. https://doi.org/10.1016/j.aap.2011.03.026.
Peng, Y., Wang, X., Peng, S., Huang, H., Tian, G., & Jia, H. (2018). Investigation on the injuries of drivers and copilots in rear-end crashes between trucks based on real world accident data in China. Future Generation Computer Systems, 86, 1251–1258. https://doi.org/10.1016/j.future.2017.07.065.
Dong, C., Dong, Q., Huang, B., Hu, W., & Nambisan, S. S. (2017). Estimating factors contributing to frequency and severity of large truck-involved crashes. Journal of Transportation Engineering, Part A: Systems, 143(8), 04017032. https://doi.org/10.1061/JTEPBS.0000060.
Islam, S., Jones, S. L., & Dye, D. (2014). Comprehensive analysis of single- and multi-vehicle large truck at-fault crashes on rural and urban roadways in Alabama. Accident Analysis and Prevention, 67, 148–158. https://doi.org/10.1016/j.aap.2014.02.014.
Gjerde, H., Normann, P. T., Christophersen, A. S., Samuelsen, S. O., & Mørland, J. (2011). Alcohol, psychoactive drugs and fatal road traffic accidents in Norway: A case-control study. Accident Analysis and Prevention, 43(3), 1197–1203. https://doi.org/10.1016/j.aap.2010.12.034.
Dong, C., Nambisan, S. S., Richards, S. H., & Ma, Z. (2015). Assessment of the effects of highway geometric design features on the frequency of truck involved crashes using bivariate regression. Transportation Research Part A: Policy and Practice, 75, 30–41. https://doi.org/10.1016/j.tra.2015.03.007.
Osman, M., Mishra, S., & Paleti, R. (2018). Injury severity analysis of commercially-licensed drivers in single-vehicle crashes: Accounting for unobserved heterogeneity and age group differences. Accident Analysis and Prevention, 118, 289–300. https://doi.org/10.1016/j.aap.2018.05.004.
Williams, R. (2006). Generalized ordered logit/partial proportional odds models for ordinal dependent variables. Stata Journal, 6(1), 58–82. https://doi.org/10.1177/1536867X0600600104.
Eluru, N., & Bhat, C. R. (2007). A joint econometric analysis of seat belt use and crash-related injury severity. Accident Analysis and Prevention, 39(5), 1037–1049. https://doi.org/10.1016/j.aap.2007.02.001.
Chen, G. X., Amandus, H. E., & Wu, N. (2014). Occupational fatalities among driver/sales workers and truck drivers in the United States, 2003-2008. American Journal of Industrial Medicine, 57(7), 800–809. https://doi.org/10.1002/ajim.22320.
Islam, S., Hossain, A. B., & Barnett, T. E. (2016). Comprehensive injury severity analysis of SUV and pickup truck rollover crashes: Alabama case study. Transportation Research Record, 2601, 1–9. https://doi.org/10.3141/2601-01.
Thiese, M. S., Ott, U., Robbins, R., Effiong, A., Murtaugh, M., Lemke, M. R., Deckow-Schaefer, G., Kapellusch, J., Wood, E., Passey, D., Hartenbaum, N., Garg, A., & Hegmann, K. T. (2015). Factors associated with truck crashes in a large cross section of commercial motor vehicle drivers. Journal of Occupational and Environmental Medicine, 57(10), 1098–1106. https://doi.org/10.1097/JOM.0000000000000503.
Sassi, S., Hakko, H., Raty, E., & Riipinen, P. (2018). Light motor vehicle collisions with heavy vehicles - psychosocial and health related risk factors of drivers being at-fault for collisions. Forensic Science International, 291, 245–252. https://doi.org/10.1016/j.forsciint.2018.08.037.
Zheng, Z., Lu, P., & Lantz, B. (2018). Commercial truck crash injury severity analysis using gradient boosting data mining model. Journal of Safety Research, 65, 115–124. https://doi.org/10.1016/j.jsr.2018.03.002.
Wang, Y., Xin, M., Bai, H., & Zhao, Y. (2017). Can variations in visual behavior measures be good predictors of driver sleepiness? A real driving test study. Traffic Injury Prevention, 18(2), 132–138. https://doi.org/10.1080/15389588.2016.1203425.
Wang, Y., Li, L., & Prato, C. G. (2019). The relation between working conditions, aberrant driving behaviour and crash propensity among taxi drivers in China. Accident Analysis and Prevention, 126, 17–24. https://doi.org/10.1016/j.aap.2018.03.028.
Cardamone, A. S., Eboli, L., Forciniti, C., & Mazzulla, G. (2017). How usual behaviour can affect perceived drivers' psychological state while driving. Transport, 32(1), 13–22. https://doi.org/10.3846/16484142.2015.1059885.
Razi-Ardakani, H., Mahmoudzadeh, A., & Kermanshah, M. (2018). A nested logit analysis of the influence of distraction on types of vehicle crashes. European Transport Research Review, 10, 44. https://doi.org/10.1186/s12544-018-0316-6 (14 pages).
Ma, C., Hao, W., Xiang, W., & Yan, W. (2018). The impact of aggressive driving behavior on driver-injury severity at highway-rail grade crossings accidents. Journal of Advanced Transportation, 2018, 9841498. https://doi.org/10.1155/2018/9841498 (10 pages).
Hussain, G., Batool, I., Kanwal, N., & Abid, M. (2019). The moderating effects of work safety climate on socio-cognitive factors and the risky driving behavior of truck drivers in Pakistan. Transportation Research Part F: Traffic Psychology and Behaviour, 62, 700–715. https://doi.org/10.1016/j.trf.2019.02.017.
Braeckman, L., Verpraet, R., Van Risseghem, M., Pevernagie, D., & De Bacquer, D. (2011). Prevalence and correlates of poor sleep quality and daytime sleepiness in Belgian truck drivers. Chronobiology International, 28(2), 126–134. https://doi.org/10.3109/07420528.2010.540363.
Labat, L., Fontaine, B., Delzenne, C., Doublet, A., Marek, M. C., Tellier, D., Tonneau, M., Lhermitte, M., & Frimat, P. (2008). Prevalence of psychoactive substances in truck drivers in the Nord-Pas-de-Calais region (France). Forensic Science International, 174(2–3), 90–94. https://doi.org/10.1016/j.forsciint.2007.03.004.
Charbotel, B., Martin, J. L., Gadegbeku, B., & Chiron, M. (2003). Severity factors for truck drivers' injuries. American Journal of Epidemiology, 158(8), 753–759. https://doi.org/10.1093/aje/kwg200.
Yang, D., Kuijpers, A., Dane, G., & Van der Sande, T. (2019). Impacts of large-scale truck platooning on Dutch highways. Transportation Research Procedia, 37, 425–432. https://doi.org/10.1016/j.trpro.2018.12.212.
The authors acknowledge the Department of Transport of Jiangxi Province, Jiangxi Research Institute of Communications and Shaanxi Provincial Highway Bureau for providing crash data and cooperation of site visits.
This research is partially supported by the Key Programs of Department of Transport of Shaanxi, China (15-42R).
The datasets generated and analyzed in the current study are not publicly available due to privacy reasons, but are available from the corresponding author upon reasonable request.
School of Highway, Chang'an University, P.O.Box 487, Middle Section of South 2 Ring Rd., Xi'an, 710064, Shaanxi, China
Yonggang Wang
& Ye Luo
School of Computer and Electronic Information, Guangxi University, 100 East Daxue Rd., Nanning, 530004, Guangxi, China
Fayu Chen
YW designed the study, interpreted results and drafted the manuscript. YL collected the crash data and performed data analysis. FC helped collect the data and contributed to the interpretation of results. All authors have read and given final approval of the version to be published.
Correspondence to Yonggang Wang.
Wang, Y., Luo, Y. & Chen, F. Interpreting risk factors for truck crash severity on mountainous freeways in Jiangxi and Shaanxi, China. Eur. Transp. Res. Rev. 11, 26 (2019) doi:10.1186/s12544-019-0366-4
Mountainous freeways
Partial proportional odds model
Resolving puzzles of the phase-transformation-based mechanism of the strong deep-focus earthquake
Valery I. Levitas, ORCID: orcid.org/0000-0001-8556-4419
Nature Communications volume 13, Article number: 6291 (2022)
Deep-focus earthquakes that occur at 350–660 km are assumed to be caused by the olivine → spinel phase transformation (PT). However, there are many existing puzzles: (a) What are the mechanisms for the jump from geological 10⁻¹⁷–10⁻¹⁵ s⁻¹ to seismic 10–10³ s⁻¹ strain rates? Is it possible without PT? (b) How does metastable olivine, which does not completely transform to spinel for over a million years, suddenly transform during seconds? (c) How to connect shear-dominated seismic signals with volume-change-dominated PT strain? Here, we introduce a combination of several novel concepts that resolve the above puzzles quantitatively. We treat the transformation in olivine as plastic strain-induced (instead of pressure/stress-induced) and find an analytical 3D solution for coupled deformation-transformation-heating in a shear band. This solution predicts conditions for severe (singular) transformation-induced plasticity (TRIP) and a self-blown-up deformation-transformation-heating process due to positive thermomechanochemical feedback between TRIP and strain-induced transformation. This process leads to a temperature in the band exceeding the unstable stationary temperature, above which the self-blown-up shear-heating process in the shear band occurs after the PT is finished. Our findings change the main concepts in studying the initiation of the deep-focus earthquakes and PTs during plastic flow in geophysics in general.
Deep-focus earthquakes are very old puzzles in geophysics. While shallow earthquakes occur due to brittle fracture, materials at 350–660 km are under pressures of 12–23 GPa and temperatures of 900–2000 K and are above the brittle-ductile transition1. That is why the main hypothesis is that the earthquakes are caused by instability due to phase transformation (PT) from the subducted metastable α-olivine (MgxFe1−x)2SiO4 to denser β-spinel or γ-spinel2,3,4,5,6,7,8,9,10,11 (Fig. 1a); for the San Carlos olivine x = 0.9. Self-organized ellipsoidal transformed regions (anticracks) filled with nanograined product phase with very low shear resistance and orthogonal to the largest normal stress were considered. A set of anticracks aligned along the maximum shear stress reduces shear resistance and causes a shear band. In refs. 12, 13, the acoustic emission approach was pioneered to detect "seismic" events during several PTs, which was interpreted in favor of the PT and shear instability hypotheses of earthquake initiation. The modern acoustic emission approach combined with microstructural analyses is presented in refs. 10, 14, 15. However, we will show that these semi-qualitative approaches cannot resolve the existing puzzles. In particular, the mechanisms for jumping from geological 10⁻¹⁷–10⁻¹⁵ s⁻¹ to seismic 10–10³ s⁻¹ strain rates (see ref. 4) are not understood, and it is not clear whether they are possible without PT. Next, abrupt olivine-spinel PT in seconds, while it does not occur for over a million years, needs to be quantitatively rationalized. Deviatoric strain-dominated seismic signals caused by volume-change-dominated transformation strain1,9 should also follow from some equations.
Fig. 1: Schematics of triggering deep-focus earthquake by transformation-deformation-heating bands during phase transformation (PT) from the subducted metastable olivine to spinel.
a Results of modeling of subduction of the Pacific plate including metastable olivine wedge beneath Japan with the temperature contour line. Magenta lines denote 1% (upper line) and 99% (lower line) of PT from olivine to β-spinel; blue lines denote 1% (upper line) and 99% (lower line) of PT from γ-spinel to bridgmanite+magnesiowüstite. Black lines designate transformation-deformation-heating bands. Earthquakes occur at the olivine wedge boundary (adapted with modifications from ref. 11 with permission from Elsevier Publ.). b Schematics of a transformation-deformation-heating band within a rigid space. Part of a band before PT (red) and after PT and isotropic transformation strain (green) is shown. c To satisfy the continuity of displacements across the shear-band boundary and rigid space outside the band, additional transformation-induced plasticity (TRIP) develops, leading to deformation of the green rectangular AtBtGtHt to ABGH that coincides with A0B0G0H0 and to large plastic shear. d 2D view (along axis 3) of c.
In this work, we suggested mechanisms of localized thermoplastic flow and PT that consist of several interrelated steps shown in Fig. 2. We introduce a combination of several novel concepts that allow us to resolve the above puzzles quantitatively. We treat the olivine-spinel PT as plastic strain-induced (instead of pressure/stress-induced), which was not done for any PT in geophysics. This leads to completely different kinetics, for which the transformation rate is proportional to the strain rate, explaining very high transformation rate for very high-strain rates. We find an analytical 3D solution for TRIP and coupled PT-TRIP-heating processes in a shear band. This solution predicts conditions for severe (singular) TRIP and self-blown-up deformation-PT-heating process due to positive thermomechanochemical feedback between TRIP and strain-induced transformation, leading to completing the PT in a few seconds. Severe TRIP shear explains shear-dominated seismic signals. In nature, this process leads to temperature in a band exceeding the unstable stationary temperature, above which the self-blown-up shear-heating process in the shear band continues after completing the PT. Without PT and TRIP, significant temperature and strain rate increase is impossible. Due to the much smaller shear band thickness in the laboratory, there is no heating, and plastic flow after the PT is very limited. Our results change the main concepts in studying the deep-focus earthquakes and PTs during plastic flow in geophysics in general.
Fig. 2: Mechanisms of localized thermoplastic flow and phase transformation (PT) leading to high strain and PT rates and high temperatures in a transformation-deformation and shear bands.
Temperature and shear rate before each stage is shown on the top. Initial strain localization occurs due to transition to dislocation plasticity along properly oriented weak [001](010) slip systems and corresponding orientational softening, as well as along the path with a small content of other strong phases like diopside. Localized plastic flow leads to the generation of strong stress concentrators (dislocation pileups, disclinations, shear nanobands), causing strain-induced PT. Due to crystal lattice instability, fast nucleation at strong stress concentrators occurs during 10 ps but without growth, leading to a weaker nanograined spinel and strain-controlled kinetics proportional to the strain rate instead of time. Volume reduction during PT in a shear band causes severe transformation-induced plasticity (TRIP), which in turn causes strain-induced PT leading to further TRIP and PT, and so on. This positive thermomechanochemical feedback leads to self-blown-up deformation-transformation-heating up to high temperature, exceeding unstable stationary temperature in a shear band, and high-strain rate. After completing PT, in nature but not in the thin laboratory-scale band, further heating and increased strain rate in a band occur due to shear flow. Similar processes are expected in multiple transformation-deformation bands and then just deformation bands that find ways through weak obstacles and may percolate or just increase the total shear-band volume and amplify generated seismic waves. Propagating transformation-deformation and just plastic shear bands generate strong stress concentrators at their tips producing a microscale counterpart of a dislocation pileup, which causes both fast PT and plasticity and further propagation of a shear band, i.e., repeats the above processes at a larger scale.
Utilizing high-pressure mechanochemistry
It is clear that to obtain such jumps in plastic flow and PT rates in some rare cases, a theory should contain singularity that strongly depends on some external conditions. To resolve the problem, we will utilize the main concept of high-pressure mechanochemistry16,17,18,19. Our first point is that in all previous geophysical papers2,3,4,5,6,7,8,9,10,20, pressure- and stress-induced PTs were considered a mechanism for initiating the shear instability. These PTs start at crystal defects naturally existing in material and for stresses below the yield strength. These defects (e.g., various dislocation structures or grain boundaries) produce stress concentrators and serve as nucleation sites for a PT. Since the number of such defects is limited, one has to increase pressure to activate defects with smaller stress concentrations. In contrast, plastic strain-induced PTs occur by nucleation at defects produced during plastic flow. The largest concentration of all stress components can be produced at the tip of the dislocation pileups, proportional to the number of dislocations N in a pileup. Since N = 10 − 100, local stresses could be huge and exceed the lattice instability limit, leading to the nucleation of spinel within sub-nanoseconds, which is negligible compared to the 1 − 10 s time scale considered here. Indeed, a typical time for the loss of lattice stability and reaching a new stable phase for different PTs obtained with molecular dynamics simulation is <10 ps21,22,23,24. Due to a strong reduction of stresses away from the defect tip, growth is very limited. Thus, the next plastic strain increment leading to new defects and new nuclei at their tips is required to continue PT. That is why (and because of barrierless nucleation, which does not require thermal fluctuations) time is not a governing parameter in a kinetic equation, and plastic strain plays a role of a time-like parameter16,17,18,19,25 (Eq. 4). Arrested growth also explains nanograin structure after strain-induced PTs in various systems25,26,27,28, including olivine → spinel4,6,10,29. The important point is that the deviatoric (nonhydrostatic) stresses in the nanoregion near the defect tip are not bounded by the engineering yield strength but rather by the ideal strength in shear for a defect-free lattice which may be higher by a factor of 10–100. Local stresses of such magnitude may result in the nucleation of the high-pressure phase at an applied pressure that is not only significantly lower than that under hydrostatic loading but also below the phase-equilibrium pressure. For example, plastic strain-induced PT from graphite to hexagonal and cubic diamonds at room temperature was obtained at 0.4 and 0.7 GPa, 50 and 100 times below than under hydrostatic loading, respectively, and well below the phase-equilibrium pressure of 2.45 GPa26 (see other examples for PTs in Zr, Si, and BN25,27,30,31). In addition, such highly-deviatoric stress states with large stress magnitudes cannot be realized in bulk. Such unique stresses may lead to PTs into stable or metastable phases that were not or could not be attained in bulk under hydrostatic or quasi-hydrostatic conditions25,27,32,33. It was concluded in refs. 16,17,18,19 that plastic strain-induced transformations require completely different thermodynamic, kinetic, and experimental treatments than pressure- and stress-induced transformations.
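As a rough, generic estimate (classical pileup theory in the spirit of Eshelby–Frank–Nabarro, not a result specific to olivine or to this work), the shear stress just ahead of a pileup of N like-signed dislocations is amplified roughly N-fold relative to the applied resolved shear stress:

$$ \tau_{\mathrm{tip}} \approx N\,\tau_{\mathrm{appl}}, $$

so for N = 10–100 and applied stresses of a few hundred MPa, local stresses can indeed approach the ideal lattice strength and trigger the barrierless nucleation described above.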
Thus, our quantitative mechanisms of very fast localized thermoplastic flow and PT consist of several interrelated steps shown in Fig. 2 and contain several conceptually important points:
Proof that plastic flow alone cannot lead to localized heating in a mm-scale band, which is why PT is required.
Substitution of stress-induced PT with plastic strain-induced PT, which was not previously used in geophysics and leads to a completely different kinetic description. The transformation rate is proportional to the strain rate, which explains the very high transformation rate for very high strain rates.
Transition to dislocation flow with strong stress concentrators is required to substitute stress-induced PT with barrierless and fast plastic strain-induced PT.
Strain-induced PT in a shear band generates severe (singular) TRIP shear and heating, which in turn produces strain-induced PT and so on, resulting in the self-blown-up PT-TRIP-heating process due to positive thermomechanochemical feedback. This process leads to completing the PT on the few second time scale. Severe TRIP shear explains shear-dominated seismic signals.
The self-blown-up PT-TRIP leads to heating above the unstable stationary temperature Ts = 1400 − 1800 K, after which further heating in a shear band occurs due to traditional thermoplastic flow. Achieving T = 1800 K is sufficient to reach \(\dot{\varepsilon}(T) = 10 - 10^{3}\,\mathrm{s}^{-1}\) and generate strong seismic waves.
These processes repeat themselves at larger scale.
Lack of any of these processes due to not meeting the required conditions (e.g., proper orientation or path with a small content of stronger phases) may lead to inability to reach very fast localized PT and plastic flow and cause an earthquake, which explains why the strong earthquakes are relatively rare events. Similarly, lack of seismic activity below 660 km, where endothermic and slow disproportionation reaction from γ-spinel to bridgmanite+oxide (magnesiowüstite) occurs, can be explained.
Relatively small shear strain in laboratory experiment29 (γ = 43 vs. γ = 106 in nature) is because the temperature cannot grow due to an extremely thin band; processes in the third column in Fig. 2 are absent, and TRIP occurs only. Our Eq. 1 below relates the change in strain rate with respect to the initial one before localization. That is why the final strain rate is distributed with depth similar to the initial strain rate before localization. This is consistent with the correlation between seismicity in the transition zone and strain rate before localization34.
Mechanisms and conditions of localized thermoplastic flow and heating in Mg1.8Fe0.2SiO4 olivine
According to34, seismicity in the transition zone correlates with the rate of plastic flow, which is in the range of 10⁻¹⁷–10⁻¹⁵ s⁻¹. Orthorhombic olivine has only three independent slip systems, i.e., fewer than the five required for the accommodation of arbitrary homogeneous deformation. That is why other mechanisms like grain-boundary migration through disclination motion35, amorphization36, dislocation climb, diffusive creep, and other isotropic mechanisms with a linear flow rule37,38 supplement dislocation plasticity and control the strain rate. Less than 40% of olivine aggregate strain at high temperatures may be accommodated by dislocation activity. However, when one of the slip systems is aligned along or close to the maximum shear stress, faster shear-dominated deformation is possible, controlled solely by dislocations. In particular, the [001](010) slip system has a critical shear stress of 0.15 MPa, at least three times lower than that of all other systems (at 405 km depth, T = 1757 K, p = 13.3 GPa, equivalent plastic strain rate \(\dot{\varepsilon} = 10^{-15}\,\mathrm{s}^{-1}\))38. Thus, if some group of grains is oriented with the [001](010) slip system along the maximum shear stress, dislocation glide may occur compatible with shear strain localization due to orientational softening. Despite the variety of deformation mechanisms, plastic flow in olivine is formally described by
$$\dot{\varepsilon} = H\sigma^{n}\exp(-Q_r/T) \;\;\rightarrow\;\; M = \dot{\varepsilon}(T)/\dot{\varepsilon}(T_0) = \exp\left[-Q_r\left(T^{-1} - T_0^{-1}\right)\right], \qquad (1)$$
where Qr = Q/R, Q is the activation energy, R is the gas constant, and σ is the differential stress, which is approximately the same within and outside of the shear band due to continuity of shear and normal stresses along the band boundary. Since for olivine n = 3.538,39, reduction in slip resistance by a factor of 3 leads for the same stress to increase in the strain rate by a factor of 47. Also, in Earth, olivine is mixed with other phases, e.g., diopside, which has much higher critical shear stresses, 7.31-64.7 MPa and n = 6.4 − 11.4 at the same conditions38, and which may constitute 30% of the olivine-diopside mixture. Thus, shear localization should start in the region with small diopside content, bypassing diopside inclusions, which may also increase strain rate by additional two–three orders of magnitude. In total, when both proper alignment of olivine grains and small diopside content are combined, the local strain rate may increase at least by 104 times without a change in temperature and reach 10−13 − 10−11 s−1. At such a strain rate, shear localization may be promoted by plastic heating in a band with the width h exceeding 10–103 m39, but a characteristic time of this localization, 10–104 years, is way too long to resolve puzzles mentioned in abstract, and too broad to reproduce a few-mm thick slip zone in the Punchbowl Fault4,6. Also, such a slow heating increases chances for slow and nonlocalized olivine-spinel PT, which eliminates the possibility of fast and localized PT and TRIP described in the next section.
To estimate the softening due to the substitution of olivine by weaker nanograined spinel in a band, we will use data from ref. 40. The initial yield strength in compression σy of the transformed nanograined γ-spinel at \(\dot{\varepsilon} \simeq 10^{-5}\,\mathrm{s}^{-1}\) is 4.7 times lower than that of olivine. The estimated strain rate in Earth in this nanograined γ-spinel is 10⁻¹³ s⁻¹. This shows, in contrast to refs. 4, 6, that weak nanograined spinel cannot come even close to providing the seismic strain rate of 10–10³ s⁻¹. Note that the strength completely recovers within 5 h due to grain growth. Anticracks filled with weaker nanograined spinel along the path of a shear band also reduce strength (the main softening mechanism suggested in refs. 2, 4, 6), but much less than the above estimate in which nanograined spinel occupies the entire shear band; that is why we will not consider them. While we included the reduced strength of spinel versus olivine in Fig. 2, we did not use it in our estimates, obtaining more conservative values.
We assume that the initial temperature of the cold slab is T0 = 900 K40, cold enough to avoid stress-induced olivine-spinel PT in bulk, and show that to obtain the desired jump in the strain rate, the final temperature should be T = 1800 K. Indeed, taking Qr = 58,333 K from ref. 39, we obtain from Eq. 1 that at T = 1800 K the strain rate increases by a factor of M = 10¹⁴ (Fig. 3a). Thus, if the initial strain rate in the localized region was \(\dot{\varepsilon}(T_0) = 10^{-13} - 10^{-11}\,\mathrm{s}^{-1}\), then after heating to T = 1800 K it increases to \(\dot{\varepsilon}(T) = 10 - 10^{3}\,\mathrm{s}^{-1}\). While we did not include spinel in our calculations, these numbers are close to the strain rates of 1 − 10 s⁻¹ for γ-spinel obtained for San Carlos olivine at 17 GPa, 1800 K, and a grain size of 10 nm that can be estimated from Fig. S10 in ref. 40. Thus, despite the doubt about the validity of Eq. 1 for such high strain rates, it gives a reasonable order-of-magnitude value.
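As a quick numerical check of the amplification factor quoted above, the following minimal sketch evaluates Eq. 1 with the parameter values already given in the text:

```python
import math

Q_r = 58333.0   # activation energy over the gas constant, K (ref. 39)
T0 = 900.0      # initial cold-slab temperature, K
T = 1800.0      # temperature reached in the band, K

# Eq. (1): M = exp[-Q_r (1/T - 1/T0)]
M = math.exp(-Q_r * (1.0 / T - 1.0 / T0))
print(f"M = {M:.2e}")  # ~1.2e+14, i.e., 1e-13..1e-11 1/s becomes ~10..1e3 1/s
```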
Fig. 3: Characteristics of the localized thermoplastic flow.
a Plot of \({\log }_{10}M\) vs. temperature (Eq. 1) for the chosen activation energy Q/R = 58,333 K39 and 1.5Q and Q/1.5. b Plots of both sides of Eq. 3 for stationary temperature, namely the straight line related to the heat flux from the band and the term related to the plastic dissipation, for different strain rates \(\dot{\varepsilon }({T}_{0})\) (shown near the curves). Intersections of these lines produce two stationary solutions for the temperature evolution equation. The solution with T ≃ T0 is stable. The second solution Ts ≫ T0 is unstable since any fluctuational increase (decrease) in temperature within a band leads to higher (lower) plastic dissipation than the heat flux from the band and further increase (decrease) in temperature. This means that (i) some very significant additional heating source than the traditional plastic flow is required to reach Ts; otherwise, the temperature will be close to T0; (ii) after reaching Ts, plastic dissipation will lead to unlimited heating up to melting temperature with a corresponding drastic increase in the strain rate.
The temperature evolution equation in a localized shear band with the thickness h and temperature T within the rest of the material with temperature T0 is
$$\rho\nu\dot{T}h = -4k(T - T_0)/h + \sigma_y\dot{\varepsilon}h = -4k(T - T_0)/h + H\sigma^{n+1}\exp(-Q_r/T)h, \qquad (2)$$
where ρ is the mass density, ν is the specific heat, and k is the thermal conductivity. The term − 4k(T − T0)/h is the heat flux through the two shear-band surfaces due to the temperature gradient 2(T − T0)/h, similar to ref. 39, and Eq. 1 was used to calculate the plastic dissipation. The thermal conductivity k = ρνκ = 2.4 × 10⁻⁶ MPa m²/(s K)39, where κ = 10⁻⁶ m²/s is the thermal diffusivity, ρ = 3000 kg/m³, and ν = 800 J/(kg K) = 800 × 10⁻⁶ MPa m³/(kg K). The constant H is determined from Eq. 1 as \(H = \dot{\varepsilon}(T_0)\sigma^{-n}\exp[Q_r/T_0]\). Then the stationary solution Ts of Eq. 2 (i.e., \(\dot{T} = 0\)) is determined from
$${T}_{s}-{T}_{0}=0.25\,{h}^{2}\sigma \dot{\varepsilon }({T}_{0})\exp [-{Q}_{r}({T}_{s}^{-1}-{T}_{0}^{-1})]/k.$$
Since the Punchbowl Fault exhibited a few-mm-thick slip zone4,6, we assume h = 4 × 10⁻³ m. We also choose σ = 300 MPa39,40.
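A minimal sketch of how the high-temperature (unstable) stationary solution of Eq. 3 can be located numerically is given below; it assumes SciPy's brentq root finder and the parameter values quoted above, and it approximately reproduces the 1396–1825 K range cited in the next paragraph.

```python
import math
from scipy.optimize import brentq

# Parameters quoted in the text (Punchbowl-Fault estimates)
T0 = 900.0        # K
Qr = 58333.0      # K
h = 4e-3          # m
sigma = 300.0     # MPa
k = 2.4e-6        # MPa m^2 / (s K)

def residual(T, eps0):
    """Difference between the two sides of Eq. 3 at temperature T."""
    rhs = 0.25 * h**2 * sigma * eps0 * math.exp(-Qr * (1.0 / T - 1.0 / T0)) / k
    return (T - T0) - rhs

for eps0 in (1e-10, 1e-12, 1e-14):   # initial strain rates, 1/s
    Ts = brentq(residual, T0 + 50.0, 2500.0, args=(eps0,))
    print(f"eps0 = {eps0:.0e} 1/s  ->  unstable Ts ~ {Ts:.0f} K")
```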
Plots of both sides of Eq. 3 in Fig. 3b show that there are two stationary solutions. The solution with T ≃ T0 is stable, since any fluctuational increase (decrease) in temperature within a band leads to a heat flux from the band that is higher (lower) than the plastic dissipation. The second solution Ts ≫ T0 varies from 1396 to 1825 K when the strain rate \(\dot{\varepsilon }({T}_{0})\) is reduced from 10−10 to 10−14 s−1. The higher the combination \({h}^{2}\sigma \dot{\varepsilon }({T}_{0})\), the lower the stationary temperature Ts. This solution is unstable, since any fluctuational increase (decrease) in temperature within a band leads to higher (lower) plastic dissipation than the heat flux from the band and hence to a further increase (decrease) in temperature. This means that (a) a localized increase in strain rate and temperature in a thin band is impossible, and temperature increases estimated with the heat-flux term neglected, used to justify melting41 or low shear resistance4,6, are wrong; (b) a heating source significantly stronger than traditional plastic flow is required to reach Ts; otherwise, the temperature will remain close to T0; (c) after reaching Ts ≫ T0, plastic dissipation will lead to unlimited heating up to the melting temperature, with a corresponding drastic increase in the strain rate. Thus, even if all olivine transformed everywhere into much weaker nanograined spinel (not just in selected anticracks), and even with softening due to a small content of other strong phases (not included in the previous models), the strain rate still could not exceed \(\dot{\varepsilon }({T}_{0})=1{0}^{-13}-1{0}^{-11}\,{{{{\mathrm{s}}}}}^{-1}\), which cannot cause a localized temperature increase.
Note that the transformation heat of the olivine-spinel PT increases the temperature by only 100 K42, which is too small to reach Ts ≫ T0. Below, we suggest PT- and TRIP-related mechanisms that increase the temperature above Ts.
Plastic strain-induced phase transformation olivine → spinel
Usually, during a PT, spinel appears as a continuous film of increasing thickness along grain boundaries43 or as anticrack regions nucleated at the grain boundaries4,6,29. Transition to dislocation plasticity should lead to dislocation pileups and strain-induced PT within grains, consistent with the band-shaped spinel regions observed within grains in refs. 4, 6, 29 and related to dislocation pileups. It is known that large overdrive and nonhydrostatic stresses promote martensitic PT at dislocations within grains44,45. Shear stresses at the tip of a dislocation pileup should also change the slow reconstructive mechanism of the olivine-spinel PT to a fast martensitic mechanism; however, this is not a necessary condition for our scenario. Transformation bands include (010) planes, which contain the [001](010) slip system with the smallest critical stress, see ref. 38, consistent with our assumption above. However, there are also (011) transformation bands, which do not have a smaller critical shear stress and do not lead to orientational softening. This means that orientational softening is not a mandatory mechanism for initial localization and can be compensated by a smaller diopside content along those planes.
Strain-controlled kinetic equation17,18 for the volume fraction of the strain-induced high-pressure phase simplified in Supplementary Information is
$$\frac{dc}{d\varepsilon}=A\left(1-c\right)\qquad \mathrm{for}\ p > p_{\varepsilon}^{d}(T);\quad A:=a\,\frac{p-p_{\varepsilon}^{d}(T)}{p_{h}^{d}(T)-p_{\varepsilon}^{d}(T)}\quad \to \quad c=1-\exp(-A\varepsilon).$$
Here, \({p}_{\varepsilon }^{d}(T)\) and \({p}_{h}^{d}(T)\) are the minimum pressures at which the direct (i.e., to the high-pressure phase) strain-induced and pressure-induced PTs are possible, respectively, and a is a parameter. We do not consider the strain-induced reverse spinel → olivine PT because the resultant nanograined spinel deforms dominantly by grain-boundary sliding, which does not produce stress concentrators inside the grains. The first experimental (and so far only) confirmation of Eq. 4 and the corresponding parameter identification were performed for the α → ω PT in Zr31. Based on those data, A ≃ 23 for p = pe, which we will use due to the lack of data for the olivine → spinel PT. While we study the effect of A on the transformation kinetics (Fig. 4), it is shown below that for large shear strains the term with A is negligible in the expression for TRIP.
Fig. 4: Kinetics of coupled strain-induced phase transformations and transformation-induced plasticity (TRIP).
a, b Shear strain and volume fraction of the high-pressure phase vs. τ/τy, respectively. Dots denote shear stress τmin for initiation of strain-induced phase transformation (PT). Line for the stress-induced PT corresponds to Eq. 5. c Kinetics of olivine → γ-spinel PT for different kinetic parameters A. d, e Shear stress τmin/τy for initiation of strain-induced PT and angle α between the shear-transformation band and direction with maximum shear stress τmax, respectively, vs. kinetic parameter A. Results for chemical reaction γ-spinel → bridgmanite+oxide (magnesiowüstite) are included in a–e with A = 2.3 and ε0 = 0.083. f, g Shear strain and shear strain rate, respectively, required to reach temperatures of 1400 K and 1800 K during transformation time t for parameters for the Punchbowl Fault.
In contrast to pressure/stress-induced PT, time is not a parameter in Eq. 4; plastic strain plays the role of a time-like parameter. Thus, the rate of strain-induced PT is determined by the rate of plastic deformation. To reach c = 0.99, the plastic strain ε = 4.6/A = 0.2, which at a strain rate of 10 s−1 (or, alternatively, 10−2 s−1) takes just 0.02 s (or, alternatively, 20 s), instead of millions of years without plastic strain. Thus, plastic strain can increase the transformation rate by >12–16 orders of magnitude.
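The arithmetic behind these figures follows directly from Eq. 4; the short sketch below uses the kinetic parameter quoted above (the strain rates are just the two representative values from the text).

```python
import math

A = 23.0        # kinetic parameter identified for the alpha -> omega PT in Zr (ref. 31)
c_target = 0.99
eps_needed = math.log(1.0 / (1.0 - c_target)) / A   # from c = 1 - exp(-A * eps)
print(f"plastic strain to reach c = 0.99: {eps_needed:.2f}")   # ~0.2

for rate in (10.0, 1e-2):   # strain rates, 1/s
    print(f"  at {rate:g} 1/s this takes {eps_needed / rate:.3g} s")   # 0.02 s and 20 s
```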
The strain-induced character of the PTs is consistent with results in refs. 4, 6, 10, 29, where metastable olivine Mg2GeO4 (a structural analog of natural olivine that transforms at much lower pressure) transforms into spinel in a 70 nm thick shear band, partially transforms in a surrounding band a few μm thick, and does not transform away from the band. These thin planar layers of strain-induced nanograined (10–30 nm) Mg2GeO4 spinel within olivine were observed in refs. 4, 10, 29 after laboratory experiments and were suggested as a mechanism of shear weakening additional to anticracks. They appear along specific slip planes, are related to dislocation pileups, and correspond to our model's prediction below. The lower the temperature, the more strain-induced planar spinel bands and the fewer stress-induced spinel anticrack regions are observed, consistent with the promoting effect of strain-induced defects. The relative slip along a 70 nm thick transformed planar layer is 3 microns, i.e., shear strain γ = 43; the slip rate is 1 μm s−1, thus \(\dot{\gamma }=14\,{{{{\mathrm{s}}}}}^{-1}\) and the time of sliding (and PT) is \(\gamma /\dot{\gamma }=3\,\mathrm {s}\)4,29. These bands offset multiple non-transforming pyroxene crystals, which allows the relative slip to be determined. In contrast to anticracks, which are mostly orthogonal to the compressive stress, transformation bands are mostly at about 45° to the compression direction, with some scatter, i.e., they coincide with planes of maximum shear stress or of maximum pressure-dependent resolved shear stress.
Similar results are obtained for silicate olivine Fe2SiO415 tested in the pressure range 3.9 − 8.4 GPa and temperature range 748 − 923 K. Co-seismic slip of 40 microns over a fault width of 1.5 microns, i.e., an order of magnitude larger than in germanate olivine, results in γ = 27, i.e., the same order of magnitude as in germanate olivine. While faults in Mg2SiO4 and Mg1.8Fe0.2SiO4 have not been observed yet, due to the close magnitude of the transformation strain for all (Mgx Fe1−x)2SiO4 and (Mgx Fe1−x)2GeO4 for any x (see supplementary materials), a similar γ is expected.
In nature, the Punchbowl Fault also exhibited a few-mm-thick slip and PT zone containing product nanograins, along which slip occurred over several kilometers4,6, i.e., shear strain γ = 10⁶. Similar strain-induced PTs and reactions are observed in surface layers in friction experiments4,6.
TRIP and self-blown-up deformation-transformation-heating process
Next, we need to find a mechanism for a drastic increase in strain rate and temperature. We suggest that TRIP caused by olivine → spinel PT can lead to this. TRIP occurs due to internal stresses caused by volume change during the PT combined with external stresses. We found (Supplementary Information) an analytical 3D solution, in which the plastic shear γ, which is TRIP, is related to the applied shear stress τ, the yield strength in shear τy during PT, and volumetric transformation strain εo (see Fig. 4a) as
$$d\gamma /dc=\frac{2}{\sqrt{3}}|{\varepsilon }_{o}|(\tau /{\tau }_{y})/\sqrt{1-{\left(\tau /{\tau }_{y}\right)}^{2}}\to \gamma=\frac{2}{\sqrt{3}}c|{\varepsilon }_{o}|(\tau /{\tau }_{y})/\sqrt{1-{\left(\tau /{\tau }_{y}\right)}^{2}}.$$
Effective transformation volumetric strain cεo during growth of c forces plastic strain to restore displacement continuity across an interface (see Fig. 1b, c), and plastic flow takes place at arbitrary (even infinitesimal) shear stress. The yield strength in shear τy during PT is unknown. Atomistic simulations for many materials (e.g., in refs. 26, 46) show that lattice resistance drops to and even below zero after lattice instability. For strain-induced PT, nanosize nuclei also reduce the yield strength40. We assume conservatively that \({\tau }_{y}={{{{\mathrm{const}}}}}=\sigma /\sqrt{3}=173\,{{{{\mathrm{MPa}}}}}\). For τ → τy (e.g., in a shear band), plastic shear tends to infinity (Fig. 4a). This is the desired singularity we wanted to find above. Note that our 3D solution has the proportionality factor \(2\sqrt{3}\simeq 3.4\) times larger than in the previous 2D treatments47,48,49,50, which changes the current results qualitatively.
Since PT causes TRIP, which (like traditional plasticity) promotes strain-induced PT, which in turn promotes TRIP, and so on, there is a positive thermomechanochemical feedback, which we call a self-blown-up deformation-transformation-heating process. In such a case, Eq. 4 cannot be integrated alone but should be considered together with Eq. 5. For shear-dominated flow \(\varepsilon=\gamma /\sqrt{3}\), and we obtain (Fig. 4a–d)
$$\gamma =2\frac{|{\varepsilon }_{o}|}{\sqrt{3}}\frac{\tau }{{\tau }_{y}}/\sqrt{1-{\left(\frac{\tau }{{\tau }_{y}}\right)}^{2}}-\sqrt{3}/A;\quad c=1-\frac{3}{2}\sqrt{1-{\left(\frac{\tau }{{\tau }_{y}}\right)}^{2}}/\left(\frac{\tau }{{\tau }_{y}}A|{\varepsilon }_{o}|\right) \\ ={\left(1+\sqrt{3}/(A\gamma )\right)}^{-1};$$
$$\tau /{\tau }_{y}\ge 1/\sqrt{1+4{A}^{2}{\left|{\varepsilon }_{o}\right|}^{2}/9}.$$
Equation 7 is the criterion for a self-blown-up deformation-transformation-heating process, shown in Fig. 4d vs. A. It is obtained from Eq. 6 and the condition c ≥ 0 or γ ≥ 0. The last expression for c(γ) in Eq. 6 is obtained by eliminating τ/τy from the two preceding expressions in Eq. 6. For the Mg1.8Fe0.2SiO4 olivine → γ-spinel PT εo = −0.096 and for the olivine → β-spinel PT εo = −0.06, see refs. 3, 51 and supplementary material; this results in τ/τy ≥ 0.562 for γ-spinel and τ/τy ≥ 0.736 for β-spinel, which are not very restrictive. Thus, since \(\tau /{\tau }_{y}=\cos 2\alpha\), where α is the angle between the maximum shear stress and the shear band, the above criterion is met at α ≤ 27.9° for γ-spinel and α ≤ 21.3° for β-spinel (Fig. 4e). We will focus on the olivine → γ-spinel PT since it has larger TRIP and less restrictive constraints.
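These thresholds and orientations, as well as the stress ratios for large TRIP shear quoted in the next paragraph, can be reproduced directly from Eqs. 5–7; the sketch below assumes A = 23 and the transformation strains given above.

```python
import math

A = 23.0   # kinetic parameter assumed in the text

def tau_ratio_min(eps0):
    """Minimum tau/tau_y for the self-blown-up process (Eq. 7)."""
    return 1.0 / math.sqrt(1.0 + 4.0 * A**2 * eps0**2 / 9.0)

for name, eps0 in (("gamma-spinel", 0.096), ("beta-spinel", 0.06)):
    x = tau_ratio_min(eps0)
    alpha = 0.5 * math.degrees(math.acos(x))   # from tau/tau_y = cos(2*alpha)
    print(f"{name}: tau/tau_y >= {x:.3f}, band orientation alpha <= {alpha:.1f} deg")

# Stress ratio and volume fraction needed for a given TRIP shear (Eqs. 5 and 6 with c -> 1)
for gamma in (10.0, 100.0):
    r = math.sqrt(3.0) * gamma / (2.0 * 0.096)
    x = r / math.sqrt(1.0 + r**2)
    c = 1.0 / (1.0 + math.sqrt(3.0) / (A * gamma))
    print(f"gamma = {gamma:g}: tau/tau_y = {x:.6f}, c = {c:.6f}")
```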
To have γ = 10, τ/τy = 0.999939 and c = 0.9925; for γ = 100, τ/τy = 0.999999 and c = 0.999248. Thus, for the self-blown-up deformation-transformation process to produce shear γ > 10, one needs τ/τy ≃ 1, i.e., practically perfect alignment of the maximum shear stress with the shear band. This contributes to understanding why the self-blown-up deformation-transformation-heating process and strong deep-focus earthquakes are relatively rare. Equation 7 explains the extremely large shear strains (sliding) in a fault or friction surface. Also, since the shear strain is much larger than εo, this resolves the puzzle of the shear character of the deep-earthquake source1,9. Note that for very large TRIP shear the term \(-\sqrt{3}/A\) in the second expression of Eq. 6 is negligible (Fig. 4a), i.e., the TRIP shear is independent of any kinetic properties (specifically, of parameter A) of the strain-induced PT. Also, for τ/τy → 1, the second expression of Eq. 6 gives c → 1. The TRIP-induced temperature rise is determined by the equation
$$\rho \nu \dot{T}h=-4k(T-{T}_{0})/h+{\tau }_{y}\dot{\gamma }h,$$
in which, for τ → τy, we neglected even the transformation heat in order to obtain a conservative estimate. The solution is
$$T={T}_{0}+({T}_{s}^{tr}-{T}_{0})\left[1-\exp \left(-\frac{4k}{\rho \nu {h}^{2}}t\right)\right];\quad {T}_{s}^{tr}={T}_{0}+\frac{{\tau }_{y}\dot{\gamma }{h}^{2}}{4k},$$
where \({T}_{s}^{tr}\) is the stationary temperature due to TRIP heating. The shear rate needed to reach temperature T during the PT time t, as well as the corresponding shear strain γ, are determined from Eq. 9:
$$\dot{\gamma }=(T-{T}_{0})\frac{4k}{{\tau }_{y}{h}^{2}}{\left[1-\exp \left(-\frac{4k}{\rho \nu {h}^{2}}t\right)\right]}^{-1};\quad \gamma=\dot{\gamma }t;\quad \gamma (t=0)=\frac{\rho \nu }{{\tau }_{y}}(T-{T}_{0}).$$
Note that M in Eq. 1, Ts in Eq. 3, and Eqs. 8–10 are independent of the exponent n in Eq. 1. Figure 4f, g shows \(\dot{\gamma }\) and γ required to reach temperatures of 1800 K and 1400 K vs. transformation time t for the Punchbowl-Fault parameters. The faster the PT, the smaller the shear but the larger the strain rate required. The minimum shears occur at t = 0 (instantaneous PT), γ(1800) = 12.5 and γ(1400) = 6.9, but correspond to an infinite strain rate. For t < 10 s, the desired temperature is reached during transitional heating. For t > 10 s, it is reached by approaching the stationary temperature; that is why the required strain rates approach stationary values. Based on kinetic estimates in ref. 40, the time for a complete pressure-induced PT at 17 GPa and 1420 K is 10 s; strain-induced PT may occur orders of magnitude faster even at much lower temperature.
In practice, the limitation comes from the required shear rather than from the shear rate. For t ≤ 10 s, the required strain is below the value of 43 observed in the laboratory4,29. Based on Eq. 6, a strain γ ≥ 10 requires τ/τy ≥ 0.999939, i.e., practically perfect alignment of the shear band along the maximum shear direction. The shear rate is calculated by dividing the shear by the PT time. For t > 1 s, the shear rate is < 10 s−1, and after the PT is complete it increases further during traditional plastic flow because T > Ts (Fig. 3b). For 0.001 < t < 1 s, the shear rate is in the range of 10 − 10⁴ s−1, of the same order of magnitude as expected at 1800 K during traditional plastic flow.
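A sketch of the estimates behind Fig. 4f, g is given below; it assumes Eq. 10 with the Punchbowl-Fault parameters used above (h = 4 mm, k = 2.4 × 10⁻⁶ MPa m²/(s K), ρν = 2.4 MPa/K) and takes τy = σ/√3 = 173 MPa, so the exact numbers shift if a different τy is assumed.

```python
import math

T0, k, h = 900.0, 2.4e-6, 4e-3       # K, MPa m^2/(s K), m
rho_nu = 3000.0 * 800e-6             # = 2.4 MPa/K
tau_y = 173.0                        # MPa (sigma / sqrt(3), an assumption of the text)

def gamma_dot_required(T, t):
    """Shear rate needed to reach temperature T within transformation time t (Eq. 10)."""
    prefactor = (T - T0) * 4.0 * k / (tau_y * h**2)
    return prefactor / (1.0 - math.exp(-4.0 * k * t / (rho_nu * h**2)))

for T in (1400.0, 1800.0):
    print(f"T = {T:.0f} K: minimum shear gamma(t=0) = {rho_nu * (T - T0) / tau_y:.1f}")
    for t in (0.01, 1.0, 10.0):      # transformation times, s
        gd = gamma_dot_required(T, t)
        print(f"  t = {t:5.2f} s: gamma_dot = {gd:9.2f} 1/s, gamma = {gd * t:7.1f}")
```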
Thus, TRIP and the self-blown-up deformation-transformation-heating process should lead to temperatures >Ts in Fig. 3, after which further drastic temperature increase does not need PT and can occur due to traditional plastic flow. Note that since during PT τ/τy ≃ 1, traditional plastic flow (which is neglected) should add to TRIP and further increase both strain rate and temperature.
Theoretically, the thermoplastic unstable temperature increase above Ts can lead to melting, which is one of the suggested mechanisms of high-strain-rate shear localization and deep earthquakes1,41. However, due to the strong heterogeneity of Earth materials along the shear band, including non-transforming minerals, the melting temperature (around 2700 K at 17 GPa for Mg2SiO4 and Mg1.8Fe0.2SiO452) may not be reached and is not necessary. As estimated above, reaching 1800 K is sufficient for achieving strain rates of 10 − 10³ s−1. We also stress that the melting-based mechanism of deep earthquakes is possible in nature only if some other process (like the self-blown-up deformation-transformation-heating process) increases the temperature above Ts.
Similar processes are expected in multiple transformation-shear and shear bands (Fig. 2) that find ways through weak obstacles and may percolate or simply increase the total shear-band volume and amplify the generated seismic waves. In reality, the shear band is not infinite but has a very large (10 to 1000 and larger) ratio of length, at least in the shear direction, to width. That is why the above theory is applicable away from the tips of a band. When finite-size single or coalesced deformation or transformation-deformation bands propagate, the stresses at their ends are equivalent to those at a dislocation pileup or superdislocation, but at a larger scale53 and with the total Burgers vector γh, which may be huge. These stresses cause both fast PT and plasticity, drive further propagation of the shear band, and trigger the initiation of new, mostly mutually parallel bands. Such a stress concentrator is stronger than that at the tip of an anticrack2,3,4,5,6,8,29 by a factor of γ/ε0, i.e., by orders of magnitude, and is much more effective in spreading transformation-deformation bands at the higher, microscale. The resulting propagating thermoplastic band can pass through non-transforming minerals and extend outside the metastable olivine wedge. Indeed, it was demonstrated in ref. 6 that a fault that originated in metastable Mg2GeO4 olivine during its transformation to spinel propagated through previously transformed spinel.
Analysis of the lack of seismic activity below 660 km
The absence of any of the processes shown in Fig. 2, due to the required conditions not being met, may explain the lack of seismic activity below 660 km, where the endothermic and slow disproportionation reaction from ringwoodite to MgSiO3 (bridgmanite) + (Mgx Fe1−x)O (magnesiowüstite) occurs. It is difficult to say exactly which process is missing, because each argument can be overridden by a counterargument. For example, one may argue that the chemical reaction, in contrast to the martensitic PT, requires diffusive mass transport, and both nucleation and growth cannot be as fast as for a martensitic PT, which is proved for the proxy reaction albite → jadeite + coesite6,54. However, this may or may not be true, because large plastic shears strongly accelerate mass transport and chemical reactions as well49,55,56,57,58,59, and it is unknown how shear affects this specific reaction. In particular, at friction surfaces the decomposition reaction of dolomite MgCa(CO3)2 → MgO + CaO + 2CO2 completes within 0.006 s4, with a temperature increase exceeding 1000 K. That is why the martensitic character of the PT is not required here and was not required for the olivine → spinel PT, because a reconstructive PT can also be drastically accelerated by plastic straining.
The most probable reasons are:
lack of initial shear localization in nanograined spinel before the reaction, due to grain-sliding deformation without orientational softening (which reduces ε(T0) by a factor of 47) and reduced dislocation activity, which makes the transition to strain-induced PT and the self-blown-up deformation-transformation-heating process impossible;
the higher initial temperature at 660 km (see refs. 11, 34 and Fig. 1a); e.g., increase in T0 from 900 K to 1000 K reduces parameter M in Eq. 1 by a factor of 653, and
low initial strain rate below 660 km34 reduces the final strain rate proportionally.
One of the conditions for PT-induced instability mentioned in refs. 3, 6 is the exothermic character of the olivine-spinel PT, leading to runaway heating. At the same time, the reaction from ringwoodite to bridgmanite + magnesiowüstite is endothermic and cannot produce instability and earthquakes below 660 km. However, for the coupled strain-induced PT-TRIP process, plastic heating occurs during the PT, and the contribution of the PT heat (100 K42) to the temperature increase from 900 K to Ts = 1400 − 1800 K is small. Thus, we do not think that the exothermic character of the PT alone is critical. In laboratory experiments, the temperature change within the shear band is negligible.
An exothermic PT was also invoked in ref. 4 to explain the nanograined spinel structure. The temperature increase due to the PT heat increases the driving force for the PT and causes runaway nucleation under growth-inhibited conditions. If a slight temperature increase were the reason for a drastic increase in the nucleation rate, then runaway nucleation should occur everywhere rather than localize within anticracks, especially in the hotter regions of the metastable olivine slab closer to its boundary with spinel. It is also unclear why growth would be slow at such a large thermodynamic driving force that causes runaway nucleation. At the same time, nucleation at dislocations and dislocation pileups leads to a nanograined structure because growth is arrested due to the strong reduction of stresses away from the defect tip16,17,18,19.
Heat transfer analysis of laboratory experiments4,29
Substituting into Eq. 3 the data for Mg2GeO4 from ref. 29 (sample GL707), namely \({\dot{\varepsilon }}_{0}=2\times 1{0}^{-4}\,{{{{\mathrm{s}}}}}^{-1}\), T0 = 1250 K, σ = 1589 MPa, and h = 10−7 m, as well as the data from ref. 4, \({\dot{\varepsilon }}_{0}=1{0}^{-4}\,{{{{\mathrm{s}}}}}^{-1}\), T0 = 1200 K, σ = 1804 MPa, and h = 0.7 × 10−7 m, we obtain Ts = 3398 K for the first case and Ts = 3302 K for the second case (Fig. 5). Due to the very small shear-band thickness in the laboratory experiments, these values are extremely high, far from the stability region of spinel and well above the melting temperature. Since no traces of reverse PT to olivine or of melting were observed in refs. 4, 29, these temperatures were not reached, and thus no thermoplastic shear localization is possible without PT, TRIP, and the self-blown-up deformation-transformation-heating process.
Fig. 5: Analysis of experiments in refs. 4,29.
Plots of both sides of Eq. 3 for stationary temperature, namely the straight line related to the heat flux from the band and the term associated with the plastic dissipation, for two different sets of experiments in refs. 4,29. Blue lines correspond to the experiment at T0 = 1200 K and red lines are for T0 = 1250 K. Since unstable stationary temperatures Ts for both experiments are very high, they cannot be reached by thermoplastic flow alone, and a phase transformation with transformation-induced plasticity is required.
However, even with TRIP, substituting into Eq. 9 the data from the same laboratory experiment4, h = 0.7 × 10−7 m, \(\dot{\gamma }=14\,{{{{\mathrm{s}}}}}^{-1}\), and the maximum τy = 300 MPa from Fig. S2 in ref. 4, we obtain that the maximum (stationary) temperature increase is just 1.3 × 10−6 K. This should not be surprising, because the thickness h = 70 nm in the laboratory experiment is smaller than h = 4 mm in Earth by a factor of 57,143. Since the stationary temperature increment is proportional to h², for h = 4 mm, \(\dot{\gamma }=14\,{{{{\mathrm{s}}}}}^{-1}\), and τy = 300 MPa it would be 4.33 × 10³ K. Thus, in the laboratory experiments on Mg2GeO44 the temperature increase in the transformation-shear band was absent.
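The h² scaling invoked here is straightforward to check; the sketch below simply rescales the laboratory increment quoted above (the 1.3 × 10⁻⁶ K value is taken from the text rather than recomputed, so the scaled result agrees with the quoted 4.33 × 10³ K only to within rounding).

```python
dT_lab = 1.3e-6      # K, stationary temperature increment quoted for the 70 nm lab band
h_lab = 0.7e-7       # m
h_earth = 4e-3       # m

ratio = h_earth / h_lab
print(f"thickness ratio: {ratio:.0f}")                    # ~57143
print(f"scaled increment: {dT_lab * ratio**2:.2e} K")     # ~4.2e3 K, cf. 4.33e3 K quoted
```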
In ref. 4, the adiabatic approximation was used to estimate the maximum shear stress and internal friction coefficient from the condition that the temperature increment does not exceed 230 K, the maximum increment needed to reach the olivine-spinel phase-equilibrium temperature. A paradoxical result was that the estimated shear stress and friction coefficient were an order of magnitude lower than those measured directly. The reason for this paradox lies in the adiabatic approximation; when heat flux from the shear band is included, the temperature increase is negligible for any reasonable shear resistance and does not restrict the internal friction stress. As found in ref. 40, the initial yield strength in compression σy of the transformed nanograined γ-spinel at \(\dot{\varepsilon }\simeq 1{0}^{-5}{{{{\mathrm{s}}}}}^{-1}\) is 4.7 times lower than that of olivine. The above result also means that sliding should drastically increase after the PT is complete; that is why the shear in the Punchbowl Fault, γ = 10⁶, is drastically larger than in the laboratory, γ = 43. Consequently, the processes in the third column of Fig. 2 are absent in laboratory experiments and cannot be verified there due to the small shear-band thickness.
Similarly, drastic heating leading to melting and dissociation is predicted in ref. 41 using the adiabatic approximation. When heat flux is included, the conditions for melting become quite restrictive.
Relation to some previous works
TRIP is well known to the geological community, but it was considered to have a small effect7,44,60,61. This is correct in general, but for a properly oriented shear band where τ → τy, the plastic shear tends to infinity (Eq. 7 and Fig. 4a). Shear banding and TRIP are observed in DAC experiments on fullerene62 and BN28 despite the PTs being to stronger high-pressure phases. For the PT from hexagonal to superhard wurtzitic BN, TRIP was evaluated to be 20 times larger than the prescribed shear28. Shear banding during PT is possible if the yield strength τy during the PT does not increase despite the high strain rate and the strength of the high-pressure phase, which supports our conservative hypothesis τy = const. Positive feedback between PT and TRIP without heating was suggested in ref. 28, but without any equations. Reaction-induced plasticity (RIP), similar to TRIP, was revealed for a chemical reaction within a shear band in a Ti-Si powder mixture49, and RIP-induced adiabatic heating was considered a factor promoting the reaction rate. However, mechanochemical feedback was not claimed, since the kinetics was treated within the theory for stress-induced rather than strain-induced reactions.
Here, we follow the main idea formulated in refs. 2,3,4,5,6,7,8 that deep-focus earthquakes can be initiated by an instability caused by PT, in particular from olivine to spinel. However, as discussed above, the broadly observed self-organized anticracks filled with weak nanograined spinel aligned along the maximum normal stress direction cannot cause the jump in strain rate by a factor of 10¹⁸. Instead, we use here the strain-induced PT in thin planar layers leading to nanograined spinel observed in refs. 4, 10, 29.
It is also demonstrated in this paper that the adiabatic approximation for a thin shear band, used to estimate the shear strength in ref. 4 and the possibility of melting (with a corresponding increase in strain rate) in ref. 41, is wrong. Allowing for the heat flux changes the results qualitatively.
It is shown in ref. 63, based on an elegant dynamic solution for a "pancake-like" flattened ellipsoidal Eshelby inclusion, that such an inclusion can grow self-similarly above some critical pressure. It is also derived that, in order for the total strain energy in an inclusion of vanishing thickness to remain finite (and nonzero), the deviatoric eigen strain (of unspecified nature) must tend to infinity (even under hydrostatic compression), which "explains" the deviatoric character of the deep-earthquake source. This argument is unphysical: why should a zero-thickness inclusion "desire" to have nonzero strain energy? The eigen strain in the inclusion should be determined by processes within the inclusion, like PT and plasticity, which is what is done in the current paper. The huge TRIP shear in Eq. 6 after complete PT explains the deviatoric character of the deep-earthquake source. Also, plasticity (which significantly affects the stress-strain fields, reduces the thermodynamic driving force, and may arrest the PT64) is neglected in ref. 63, as is the interfacial energy.
Our findings change the main concepts in studying the initiation of strong deep-focus earthquakes and, more generally, PTs during plastic flow in geophysics. They will be elaborated in much more detail using modern computational multiscale approaches for studying coupled PTs and plasticity16, which can describe the nucleation and evolution of multiple PT-shear bands from the nano- to the macroscale53,65,66. They will also be checked in experiments with a rotational diamond anvil cell26,27,28,31,33 in a closed feedback loop with simulations. Introducing strain-induced PT and the self-blown-up transformation-TRIP-heating process may change the interpretation of various geological phenomena. In particular, it may explain the possibility of microdiamond appearing directly in the cold Earth crust within shear bands26 during tectonic activity, without subduction to the high-pressure, high-temperature mantle and subsequent uplift. The developed theory of the self-blown-up transformation-TRIP-heating process is applicable outside geophysics to various processes in materials under pressure and shear, e.g., new routes of material synthesis, friction and wear under high load, penetration of projectiles and meteorites, surface treatment, and severe plastic deformation and mechanochemical technologies16,17,18,19,32,56,57,58,59.
Analytical methods used in the paper are described in the main text and Supplementary Material.
All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials.
Frohlich, C. The nature of deep-focus earthquakes. Annu. Rev. Earth Planet. Sci. 17, 227–254 (1989).
Green, H. W. & Burnley, P. C. A new, self-organizing, mechanism for deep-focus earthquakes. Nature 341, 773–737 (1989).
Green II, H. W. Shearing instabilities accompanying high-pressure phase transformations and the mechanics of deep earthquakes. Proc. Natl Acad. Sci. USA 104, 9133–9138 (2007).
Green II, H. W., Shi, F., Bozhilov, K., Xia, G. & Reches, Z. Phase transformation and nanometric flow cause extreme weakening during fault slip. Nat. Geosci. 8, 484–489 (2015).
Schubnel, A. et al. Deep focus earthquake analogs recorded at high pressure and temperature in the laboratory. Science, 341, 1377–1380 (2013).
Green II, H. W. Phase-transformation-induced lubrication of earthquake sliding. Philos. Trans. R. Soc. A 375, 20160008 (2017).
Kirby, S. Localized polymorphic phase transformations in high-pressure faults and applications to the physical mechanism of deep earthquakes. J. Geophys. Res. 92, 789–800 (1987).
Kirby, S. H., Stein, S., Okal, E. A. & Rubie, D. C. Metastable mantle phase transformations and deep earthquakes in subducting oceanic lithosphere. Rev. Geophys. 34, 261–306 (1996).
Zhan, Z. Mechanics and implications of deep earthquakes. Annu. Rev. Earth Planet. Sci., 48, 147–174 (2020).
Wang, Y. et al. A laboratory nanoseismological study on deep-focus earthquake micromechanics. Sci. Adv. 3, e1601896 (2017).
Kawakatsu, H. & Yoshioka, S. Metastable olivine wedge and deep dry cold slab beneath southwest Japan. Earth Planet. Sci. Lett. 303, 1–10 (2011).
Meade, C. R. & Jeanloz, R. Acoustic emissions and shear instabilities during phase transformations in Si and Ge at ultrahigh pressures. Nature 339, 616–618 (1989).
Meade, C. & Jeanloz, R. Deep-focus earthquakes and recycling of water into Earth's mantle. Science 252, 68–72 (1991).
Smart, T. et al. High-pressure nano-seismology: Use of micro-ring resonators for characterizing acoustic emissions. Appl. Phys. Lett. 115, 081904 (2019).
Officer, T. & Secco, R. A. Detection of high P,T transformational faulting in Fe2SiO4 via in-situ acoustic emission: Relevance to deep-focus earthquakes. Phys. Earth Planet. Inter. 300, 106429 (2020).
Levitas, V. I. High-pressure phase transformations under severe plastic deformation by Torsion in rotational Anvils. Mater. Trans. 60, 1294–1301 (2019).
Levitas, V. I. Continuum Mechanical Fundamentals of Mechanochemistry. In: High Pressure Surface Science and Engineering, (eds. Gogotsi, Y. & Domnich, V.)159–292 (Institute of Physics, Bristol, 2004)
Levitas, V. I. High-pressure mechanochemistry: conceptual multiscale theory and interpretation of experiments. Phys. Rev. B 70, 184118 (2004).
Levitas, V. I. High pressure phase transformations revisited. Invited Viewpoint article. J. Phys.: Condens. Matter 30, 163001 (2018).
Burnley, P. C. & Green II, H. W. Stress dependence of the mechanism of the olivine-spinel transformation. Nature 338, 753–756 (1989).
Wang, J., Yip, S., Phillpot, S. R. & Wolf, D. Crystal instabilities at finite strain. Phys. Rev. Lett. 71, 4182–4185 (1993).
Mizushima, K., Yip, S. & Kaxiras, E. Ideal crystal stability and pressure-induced phase transition in silicon. Phys. Rev. B. 50, 14952–14959 (1994).
Levitas, V. I. & Ravelo, R. Virtual melting as a new mechanism of stress relaxation under High strain rate loading. Proc. Natl Acad. Sci. USA 109, 13204–13207 (2012).
Chen, H., Levitas, V. I. & Xiong, L. Amorphization induced by 60o shuffle dislocation Pileup against Tilt grain boundaries in Silicon bicrystal under shear. Acta Materialia 179, 287–295 (2019).
Blank, V. D. & Estrin, E. I. Phase Transitions in Solids under High Pressure (CRC Press, 2014)
Gao, Y. et al. Shear driven formation of nano-diamonds at sub-gigapascals and 300 K. Carbon 146, 364–368 (2019).
Ji, C. et al. Shear-induced phase transition of nanocrystalline hexagonal boron nitride to wurtzitic structure at room temperature and low pressure. Proc. Natl Acad. Sci. USA 109, 19108–19112 (2012).
Levitas, V. I., Ma, Y., Hashemi, J., Holtz, M. & Guven, N. Strain-induced disorder, phase transformations and transformation induced plasticity in hexagonal boron nitride under compression and shear in a rotational diamond anvil cell: in-situ X-ray diffraction study and modeling. J. Chem. Phys. 25, 044507 (2006).
Riggs, E. & Green II, H. W. A new class of microstructures which lead to transformation-induced faulting in magnesium germanate. J. Geophys. Res. 110, B03202 (2005).
Levitas, V. I. & Shvedov, L. K. Low pressure phase transformation from rhombohedral to cubic BN: experiment and theory. Phys. Rev. B 65, 104109 (2002).
Pandey, K. K. & Levitas, V. I. In situ quantitative study of plastic strain-induced phase transformations under high pressure: Example for ultra-pure Zr. Acta Materialia 196, 338–346 (2020).
Edalati, K. & Horita, Z. A review on high-pressure torsion (HPT) from 1935 to 1988. Mat. Sci. Eng. A. 652, 325–352 (2016).
Levitas, V. I., Ma, Y., Selvi, E., Wu, J. & Patten, J. A. High-density amorphous phase of silicon carbide obtained under large plastic shear and high pressure. Phys. Rev. B 85, 054114 (2012).
Billen, M. I. Deep slab seismicity limited by rate of deformation in the transition zone. Sci. Adv. 6, eaaz7692 (2020).
Cordier, P. et al. Disclinations provide the missing mechanism for deforming olivine-rich rocks in the mantle. Nature 507, 51–56 (2014).
Samae, V. et al. Stress-induced amorphization triggers deformation in the lithospheric mantle. Nature 591, 82–86 (2021).
Hirth, G. & Kohlstedt, D. L. Experimental constraints on the dynamics of the partially molten upper-mantle. 2. Deformation in the dislocation creep regime. J. Geophys. Res. 100, 15441–15449 (1995).
Raterron, P. et al. Multiscale modeling of upper mantle plasticity: from single-crystal rheology to multiphase aggregate deformation. Phys. Earth Planet. Inter. 228, 232–243 (2014).
Ogawa, M. Shear instability in a viscoelastic material as the cause of deep focus earthquakes. J. Geophys. Res. 92, 801–810 (1987).
Mohiuddin, A., Karato, S.-I. & Girard, J. Slab weakening during the olivine to ringwoodite transition in the mantle. Nat. Geosci. 13, 170–174 (2020).
Kanamori, H., Anderson, D. L. & Heaton, T. H. Frictional melting during the rupture of the 1994 Bolivian earthquake. Science 279, 839–842 (1998).
Sung, C.-M. & Burns, R. G. Kinetics of high-pressure phase transformations: implications to the evolution of the olivine → spinel transition in the downgoing lithosphere and its consequences on the dynamics of the mantle. Tectonophysics 31, 1–32 (1976).
Mohiuddin, A. & Karato, S. An experimental study of grain-scale microstructure evolution during the olivine-wadsleyite phase transition under nominally "dry" conditions. Earth Planet. Sci. Lett. 501, 128–137 (2018).
Poirier, J.-P. Introduction to the Physics of the Earth's Interior (Cambridge University Press, 2000).
Smyth, J. R. et al. Olivine-wadsleyite-pyroxene topotaxy: Evidence for coherent nucleation and diffusion-controlled growth at the 410-km discontinuity. Phys. Earth Planet. Inter. 200-201, 85–91 (2012).
Zarkevich, N. A., Chen, H., Levitas, V. I. & Johnson, D. D. Lattice instability during solid-solid structural transformations under general applied stress tensor: example of Si I → Si II with metallization. Phys. Rev. Lett. 121, 165701 (2018).
Levitas, V. I. Phase transitions in elastoplastic materials: continuum thermomechanical theory and examples of control. Part I and II. J. Mech. Phys. Solids 45, 923-947, 1203-1222 (1997).
Levitas, V. I. Thermomechanical theory of martensitic phase transformations in inelastic materials. Int. J. Solids Struct. 35, 889–940 (1998).
Levitas, V. I., Nesterenko, V. F. & Meyers, M. A. Strain-induced structural changes and chemical reactions. Part I and II. Acta Materialia 46, 5929-5945, 5947-5963 (1998).
Levitas, V. I. Structural changes without stable intermediate state in inelastic material. Part I and II. Int. J. Plasticity 16, 805-849, 851-892 (2000).
Navrotsky, A. Thermodynamic relations among olivine, spinel, and phenacite structures in silicates and germanates: I. Volume relations and the systems NiO-MgO-GeO2 and CoO-MgO-GeO2. J. Solid State Chem. 6, 21–41 (1973).
Fei, Y. & Bertka, C.M. Mantle Petrology: Field Observations and High-Pressure Experimentation (Oxford University Press: Oxford, 1999).
Levitas, V. I., Esfahani, S. E. & Ghamarian, I. Scale-free modeling of coupled evolution of discrete dislocation bands and multivariant martensitic microstructure. Phys. Rev. Lett. 121, 205701 (2018).
Gleason, G. & Green II, H. W. A general test of the hypothesis that transformation-induced faulting cannot occur in the lower mantle. Phys. Earth Planet Inter 172, 91–103 (2009).
Takacs, L. Self-sustaining reactions induced by ball milling. Prog. Mater. Sci. 47, 355–414 (2002).
Zharov, A. A. In High Pressure Chemistry and Physics of Polymers (ed Kovarskii A. L.) Ch. 7 (CRC Press, Boca Raton,1994)
Koch, C. C. The synthesis and structure of nanocrystalline materials produced by mechanical attricion: a review. Nanostruct. Mater. 2, 109–129 (1993).
Takacs, L. The historical development of mechanochemistry. Chem. Soc. Rev. 42, 7649–7659 (2013).
Balaz, P. et al. Hallmarks of mechanochemistry: from nanoparticles to technology. Chem. Soc. Rev. 42, 7571–7637 (2013).
Frohlich, C. Deep Earthquakes (Cambridge University Press, 2006).
Karato, S., Riedel, M. R. & Yuen, D. A. Rheological structure and deformation of subducted slabs in the mantle transition zone: implications for mantle circulation and deep earthquakes. Phys. Earth Planet. Inter. 127, 83–108 (2001).
Kulnitskiy, B. A. et al. Transformation-deformation bands in C60 after the treatment in a shear diamond anvil cell. Mater. Res. Express 3, 045601 (2016).
Markenscoff, X. "Volume collapse" instabilities in deep-focus earthquakes: a shear source nucleated and driven by pressure. J. Mech. Phys. Solids 152, 104379 (2021).
Levitas, V. I., Idesmanm, A. V., Olson, G. B. & Stein, E. Numerical modeling of martensite growth in elastoplastic material. Philos. Mag., A 82, 429–462 (2002).
Levitas, V. I. & Javanbakht, M. Phase transformations in nanograin materials under high pressure and plastic shear: nanoscale mechanisms. Nanoscale 6, 162–166 (2014).
Feng, B. & Levitas, V. I. Effects of gasket on coupled plastic flow and strain-induced phase transformations under high pressure and large torsion in a rotational diamond anvil cell. J. Appl. Phys. 119, 015902 (2016).
Support from NSF (CMMI-1943710 and DMR-1904830) and Iowa State University (Vance Coffman Faculty Chair Professorship) is greatly appreciated.
Iowa State University, Department of Aerospace Engineering, Ames, IA, 50011, USA
Valery I. Levitas
Iowa State University, Department of Mechanical Engineering, Ames, IA, 50011, USA
Ames Laboratory, Division of Materials Science and Engineering, Ames, IA, 50011, USA
V.I.L. is the sole author of the results obtained in the current paper.
Correspondence to Valery I. Levitas.
The author declares no competing interests.
Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Levitas, V.I. Resolving puzzles of the phase-transformation-based mechanism of the strong deep-focus earthquake. Nat. Commun. 13, 6291 (2022). https://doi.org/10.1038/s41467-022-33802-y
Where do the laws of quantum mechanics break down in simulations?
As someone who holds a BA in physics I was somewhat scandalized when I began working with molecular simulations. It was a bit of a shock to discover that even the most detailed and computationally expensive simulations can't quantitatively reproduce the full behavior of water from first principles.
Previously, I had been under the impression that the basic laws of quantum mechanics were a solved problem (aside from gravity, which is usually assumed to be irrelevant at molecular scale). However, it seems that once you try to scale those laws up and apply them to anything larger or more complex than a hydrogen atom their predictive power begins to break down.
From a mathematics point of view, I understand that the wave functions quickly grow too complicated to solve and that approximations (such as Born-Oppenheimer) are required to make the wave functions more tractable. I also understand that those approximations introduce errors which propagate further and further as the time and spatial scales of the system under study increase.
What is the nature of the largest and most significant of these approximation errors? How can I gain an intuitive understanding of those errors? Most importantly, how can we move towards an ab-initio method that will allow us to accurately simulate whole molecules and populations of molecules? What are the biggest unsolved problems that are stopping people from developing these kinds of simulations?
quantum-mechanics simulation
tel
$\begingroup$ Er...what ever made you think that "the basic laws of quantum mechanics were a solved problem" was equivalent to being able to "reproduce the full behavior of water from first principles [in simulation]"? It's a thirteen body problem. $\endgroup$ – dmckee --- ex-moderator kitten Apr 23 '12 at 19:28
$\begingroup$ @dmckee see, this is exactly what I'm confused about. 13 body problem means no analytic solution, sure, but what's stopping us from coming up with a numerical solution of arbitrary accuracy? Is it simply that you hit the wall of what's computationally feasible? Are you already at the point where a computation requires the lifetime of a sun to complete? If so, what kinds of approximations can you make to simplify the problem? Can you understand these approximations on an intuitive level? Are there ways to improve the approximations, reduce the level of error they introduce? Break it down for me $\endgroup$ – tel Apr 23 '12 at 20:01
$\begingroup$ @dmckee as for what made me think that water should be simple in the first place... I blame the protein simulators. They made me dream of what was possible :) $\endgroup$ – tel Apr 23 '12 at 20:06
As far as I'm aware, the most accurate methods for static calculations are Full Configuration Interaction with a fully relativistic four-component Dirac Hamiltonian and a "complete enough" basis set. I'm not an expert in this particular area, but from what I know of the method, solving it using a variational method (rather than a Monte-Carlo based method) scales shockingly badly, since I think the number of Slater determinants you have to include in your matrix scales something like $O\left(\binom{n_{\mathrm{orbs}}}{n_e}\right)$. (There's an article on the computational cost here.) The related Monte-Carlo methods and methods based off them using "walkers" and networks of determinants can give results more quickly, but as implied above, aren't variational. And are still hideously costly.
Approximations currently in practical use just for energies for more than two atoms include:
Born Oppenheimer, as you say: this is almost never a problem unless your system involves hydrogen atoms tunneling, or unless you're very near a state crossing/avoided crossing. (See, for example, conical intersections.) Conceptually, there are non-adiabatic methods for the wavefunction/density, including CPMD, and there's also Path-Integral MD which can account for nuclear tunneling effects.
Nonrelativistic calculations, and two-component approximations to the Dirac equation: you can get an exact two-component formulation of the Dirac equation, but more practically the Zeroth-Order Regular Approximation (see Lenthe et al, JChemPhys, 1993) or the Douglas-Kroll-Hess Hamiltonian (see Reiher, ComputMolSci, 2012) are commonly used, and often (probably usually) neglecting spin-orbit coupling.
Basis sets and LCAO: basis sets aren't perfect, but you can always make them more complete.
DFT functionals, which tend to attempt to provide a good enough attempt at the exchange and correlation without the computational cost of the more advanced methods below. (And which come in a few different levels of approximation. LDA is the entry-level one, GGA, metaGGA and including exact exchange go further than that, and including the RPA is still a pretty expensive and new-ish technique as far as I'm aware. There are also functionals which use differing techniques as a function of separation, and some which use vorticity which I think have application in magnetic or aromaticity studies.) (B3LYP, the functional some people love and some people love to hate, is a GGA including a percentage of exact exchange.)
Configuration Interaction truncations: CIS, CISD, CISDT, CISD(T), CASSCF, RASSCF, etc. These are all approximations to CI which assume the most important excited determinants are the least excited ones.
Multi-reference Configuration Interaction (truncations): Ditto, but with a few different starting reference states.
Coupled-Cluster method: I don't pretend to properly understand how this works, but it obtains similar results to Configuration Interaction truncations with the benefit of size-consistency (i.e. $E(H_2) \times 2 = E((H_2)_2)$ at large separation).
For dynamics, many of the approximations refer to things like the limited size of a tractable system, and practical timestep choice -- it's pretty standard stuff in the numerical time simulation field. There's also temperature maintenance (see Nose-Hoover or Langevin thermostats). This is mostly a set of statistical mechanics problems, though, as I understand it.
Anyway, if you're physics-minded, you can get a pretty good feel for what's neglected by looking at the formulations and papers about these methods: most commonly used methods will have at least one or two papers that aren't the original specification explaining their formulation and what it includes. Or you can just talk to people who use them. (People who study periodic systems with DFT are always muttering about what different functionals do and don't include and account for.) Very few of the methods have specific surprising omissions or failure modes. The most difficult problem appears to be proper treatment of electron correlation, and anything above the Hartree-Fock method, which doesn't account for it at all, is an attempt to include it.
As I understand it, getting to the accuracy of Full relativistic CI with complete basis sets is never going to be cheap without dramatically reinventing (or throwing away) the algorithms we currently use. (And for people saying that DFT is the solution to everything, I'm waiting for your pure density orbital-free formulations.)
There's also the issue that the more accurate you make your simulation by including more contributions and more complex formulations, the harder it is to actually do anything with. For example, spin orbit coupling is sometimes avoided solely because it makes everything more complicated to analyse (but sometimes also because it has negligable effect), and the canonical Hartree-Fock or Kohn-Sham orbitals can be pretty useful for understanding qualitative features of a system without layering on the additional output of more advanced methods.
(I hope some of this makes sense, it's probably a bit spotty. And I've probably missed someone's favourite approximation or niggle.)
Aesin
The fundamental challenge of quantum mechanical calculations is that they do not scale very well—from what I recall, the current best-case scaling is approximately $O(N_e^{3.7})$, where $N_e$ is the number of electrons contained in the system. Thus, 13 water molecules will scale as having $N_e = 104$ electrons instead of just $N = 39$ atoms. (That's a factor of nearly 40.) For heavier atoms, the discrepancy becomes even greater.
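For instance, a one-line check of the quoted cost ratio (this assumes the $O(N_e^{3.7})$ scaling above and the valence-electron count of 104):

```python
# Cost ratio implied by O(Ne**3.7) scaling: electrons vs. atoms for 13 water molecules
n_electrons = 104   # valence electrons, as quoted above
n_atoms = 39
print((n_electrons / n_atoms) ** 3.7)   # ~38, i.e. "a factor of nearly 40"
```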
The main issue will be that, in addition to increased computational horsepower, you will need to come up with better algorithms that can knock down the 3.7 exponent to something that is more manageable.
aeismail
$\begingroup$ Expand on this. What's the nature of the $O({{N}_{e}}^{3.7})$ algorithm? Who are the people working to improve it? How are they going about it? $\endgroup$ – tel Apr 23 '12 at 20:05
$\begingroup$ I really like and enjoy this discussion! $\endgroup$ – Open the way Apr 24 '12 at 12:27
$\begingroup$ My understanding is that quantum mechanics (or at least electronic structure theory) would be considered a solved problem if the most accurate methods scaled as O(N^3). The problem is that it is essentially only the worst methods, mean field approximations, that approach this scaling, and something like Full CI scales exponentially with the number of electrons (or more typically the basis functions). $\endgroup$ – Tyberius Apr 25 '18 at 21:45
The problem is broadly equivalent to the difference between classical computers and quantum computers. Classical computers work on single values at once, as only one future/history is possible for one deterministic input. However, a quantum computer can operate on every possible input simultaneously, because it can be put in a superposition of all the possible states.
In the same way, a classical computer has to calculate every property individually, but the quantum system it is simulating has all the laws of the universe to calculate all the properties simultaneously.
The problem is exacerbated by the way we have to pass data almost serially through a CPU, or at most a few thousand CPUs. By contrast, the universe has a nearly unlimited set of simultaneous calculations going on at the same time.
Consider as an example 3 electrons in a box. A computer has to pick a timestep (first approximation), and keep recalculating the interactions of each electron with each other electron, via a limited number of CPUs. In reality, the electrons have an unknowable number of real and virtual exchange particles in transit, being absorbed and emitted, as a continuous process. Every particle and point in space has some interaction going on, which would need a computer to simulate.
Simulation is really the art of choosing your approximations and your algorithms to model the subject as well as possible with the resources you have available. If you want perfection, I'm afraid it's the mathematics of spherical chickens in vacuums; we can only perfectly simulate the very simple.
Phil H
$\begingroup$ really nice "Simulation is really the art of choosing your approximations and your algorithms to model the subject as well as possible with the resources you have available" $\endgroup$ – Open the way Apr 25 '12 at 14:07
$\begingroup$ It is true that only spherical chicken fetishists care about perfection. The real question is what's stopping us from getting to "good enough"? For many problems of biological interest (i.e. every drug binding problem ever), accurate enough would be calculating the energies to within ~1 kT or so. This is sometimes referred to as "chemical accuracy". $\endgroup$ – tel Nov 3 '13 at 8:51
$\begingroup$ @tel: Depends on the area. For some things we have more accuracy in models than we can achieve in practice, e.g. modelling Hydrogen electron orbitals. For others, usually many-body, non-linear systems where multiple effects come into play we struggle to match experiment; quantum chemistry for things like binding energies (see Density Functional Theory), protein folding, these are places where we cannot yet reliably reproduce experiment with commonly available resources. Quantum computers of a reasonable size would do the job. $\endgroup$ – Phil H Nov 6 '13 at 12:25
I don't know if the following helps, but for me it was very insightful to visualize the scaling behavior of quantum systems:
The main problem comes from the fact that the Hilbert space of quantum states grows exponentially with the number of particles. This can be seen very easily in discrete systems. Think of a couple of potential wells that are connected to each other, maybe just two: well 1 and well 2. Now add bosons (e.g., Rubidium 87, just as an example), at first only one. How many possible basis vectors are there?
basis vector 1: boson in well 1
basis vector 2: boson in well 2
They can be written like $\left|1,0 \right\rangle$ and $\left|0,1 \right\rangle$
Now suppose the boson can hop (or tunnel) from one well to the other. The Hamiltonian that describes the system can then be written in matrix notation as
$$ \hat{H}=\pmatrix{ \epsilon_{1} & t \\ t & \epsilon_{2}} $$
where $\epsilon_{1,2}$ are just the energies of the boson in well 1 and 2, respectively, and t the tunneling amplitude. The complete solution of this system, i.e., the solution containing all the information necessary to compute the system's state at any given point of time (given an initial condition), is given by the eigenstates and eigenvalues. The eigenstates are linear superpositions of the basis vectors (in this case $\left|1,0 \right\rangle$ and $\left|0,1 \right\rangle$).
This problem is so simple that it can be solved by hand.
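For concreteness, this two-level problem can also be diagonalized numerically in a few lines (the energies and hopping amplitude below are arbitrary illustrative values):

```python
import numpy as np

eps1, eps2, t = 0.0, 0.5, 1.0        # arbitrary well energies and tunneling amplitude
H = np.array([[eps1, t],
              [t,    eps2]])
energies, states = np.linalg.eigh(H)  # eigenvalues/eigenvectors of the 2x2 Hamiltonian
print(energies)  # equals (eps1+eps2)/2 -/+ sqrt(((eps1-eps2)/2)**2 + t**2)
```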
Now suppose we have more potential wells and more bosons, e.g., in the case of four wells with two bosons there are 10 different possibilities to distribute the bosons among the wells. Then the Hamiltonian would have 10x10=100 elements and 10 eigenstates.
One can quickly see that the number of eigenstates is given by the binomial coefficient: $$ \text{number of eigenstates}=\pmatrix{\text{number of wells} + \text{number of bosons} - 1 \\ \text{number of bosons}} $$
So even for "just" ten bosons and ten different potential wells (a very small system), we'd have 92,378 eigenstates. The size of the Hamiltonian is then $92,378^2$ (approximately 8.5 billion elements). In a computer they'd occupy (depending on your system) about 70 gigabytes of RAM and is therefore probably impossible to solve on most computers.
Now let's assume we have a continuous system (i.e. no potential wells, but free space) and 13 water molecules (for simplicity I treat each of them as a particle). In a computer we can still model free space using many tiny potential wells (we discretize space... which is ok, as long as the relevant physics takes place on larger length scales than the discretization length). Let's say there are 100 different possible positions for each of the molecules in each of the x, y and z directions. So we end up with 100*100*100 = 1,000,000 little boxes. Then we'd have more than $2.7 \cdot 10^{53}$ basis vectors, the Hamiltonian would have almost $10^{107}$ elements, occupying so much space that we'd need all the particles from 10 million universes like ours just to encode that information.
Robert
One problem is that quantum mechanics suffers from the "curse of dimensionality": for most methods of solving PDEs, the number of basis functions needed to get a certain accuracy scales exponentially with the number of dimensions. Since an $n$-electron atom has $3n$ degrees of freedom, even relatively small systems require an enormous number of dimensions to simulate exactly.
Monte Carlo can be used to get around this problem, as the error scales like $\text{points}^{-\frac{1}{2}}$ regardless of the number of dimensions, but convergence is slow.
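A tiny illustration of that $\text{points}^{-\frac{1}{2}}$ behaviour, estimating the volume of a hyper-box by uniform sampling (the dimension and box size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 6
exact = 0.5 ** dim                       # volume of [0, 0.5]^dim inside the unit cube

def mc_estimate(n_points):
    x = rng.random((n_points, dim))
    return np.mean(np.all(x < 0.5, axis=1))

for n in (10_000, 1_000_000):
    errors = [abs(mc_estimate(n) - exact) for _ in range(20)]
    print(n, np.mean(errors))            # roughly 10x smaller error for 100x more points
```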
Density functional theory is another way to deal with this problem, but it's an approximation. It's a very good approximation in some cases, but in other cases it can be surprisingly bad.
I think a highly accurate simulation of water was the topic of one of the first large simulations performed on the Jaguar supercomputer. You might want to look into this paper and their follow-up work (which, by the way, was a finalist for the Gordon Bell prize in 2009):
"Liquid water: obtaining the right answer for the right reasons", Aprà, Rendell, Harrison, Tipparaju, deJong, Xantheas.
– fcruz
This problem is solved by Density Functional Theory. The essence is replacing the many-body degrees of freedom by several fields, one of them being the density of electrons. For a grand exposition, see the Nobel lecture of one of the founders of DFT: http://www.nobelprize.org/nobel_prizes/chemistry/laureates/1998/kohn-lecture.pdf
– Artan
Could you give some context to the link you're providing? We discourage answers that only give a link without any sort of explanation, and these sorts of answers are deleted unless they are edited. – Geoff Oxberry Apr 24 '12 at 19:40
And by the way, you should really take care with "This problem is solved by ...", since there are limits to DFT which somebody should mention. – Open the way Apr 25 '12 at 8:59
DFT provides a very useful approximation, but does not 'solve' anything! It is not exact without exact functionals for the exchange and correlation, and even then does not yield the wavefunctions but the electron density. – Phil H Apr 25 '12 at 13:06
Many-body QM does not break down as a theory, it is just NP-hard. DFT is a theory with polynomial complexity that solves the electronic structure of all chemical elements with the same accuracy as first-principles QM. This is why it earned the Nobel Prize in chemistry. It has provided excellent results for large systems when compared to experiments. – Artan Apr 25 '12 at 19:31
You are wrong. DFT does not solve "the problem" with the same accuracy. It "solves" one particular case (the ground state) by introducing a completely unknown exchange-correlation functional. – Misha Sep 13 '13 at 9:43
An Introduction to Political and Social Data Analysis Using R
Chapter 9 Hypothesis Testing
In this chapter, the concepts used in Chapters 7 and 8 are extended to focus more squarely on making statistical inferences through the process of hypothesis testing. The focus here is on taking the abstract ideas that are the foundation for hypothesis testing and applying them to some concrete examples. The only thing you need to load in order to follow along is the anes20.rda data set.
When engaged in the process of hypothesis testing, we are essentially asking "what is the probability that the statistic found in the sample could have come from a population in which it is equal to some other, specified, value?" As discussed in Chapter 8, social scientists want to know something about a population value of interest but frequently are only able to work with sample data. We generally think the sample data represent the population fairly well but we know that there will be some sampling error. In Chapter 8, we took this into account using confidence intervals around sample statistics. In this chapter, we apply some of the same logic to determine if the sample statistic is different enough from a hypothesized population parameter that we can be confident it did not occur just due to sampling error. (Come back and reread this paragraph when you are done with this chapter; it will make a lot more sense then).
We generally consider two different types of hypotheses, the null and alternative (or research) hypotheses.
Null Hypothesis (H0): This hypothesis is tested directly. It usually states that the population parameter (\(\mu\)) is equal to some specific value, even if the sample statistic (\(\bar{x}\)) is a different value. The implication is that the difference between the sample statistic and the hypothesized population parameter is attributable to sampling error, not a real difference. We usually hope to reject the null hypothesis. I know this sounds strange, but it will make more sense to you soon.
Alternative (research) Hypothesis (H1): This is a substantive hypothesis that we think is true. Usually, the alternative hypothesis posits that the population parameter does not equal the value specified in H0. We don't actually test this hypothesis directly. Rather, we try to build a case for it by showing that the sample statistic is different enough from the population value hypothesized in H0 that it is unlikely that the null hypothesis is true.
We can use what we know about the z-distribution to test the validity of the null hypothesis by stating and testing hypotheses about specific values of population parameters. Consider the following problem:
An analyst in the Human Resources department for a large metropolitan county is asked to evaluate the impact of a new method of documenting sick leave among county employees. The new policy is intended to cut down on the number of sick leave hours taken by workers. Last year, the average number of hours of sick leave taken by workers was 59.2 (about 7.4 days), a level determined to be too high. To evaluate if the new policy is working, the analyst took a sample of 100 workers at the end of one year under the new rules and found a sample mean of 54.8 hours (about 6.8 days) and a standard deviation of 15.38. The question is, does this sample mean represent a real change in sick leave use, or does it only reflect sampling error? To answer this, we need to determine how likely it is to get a sample mean of 54.8 from a population in which \(\mu=59.2\).
As alluded to at the end of Chapter 8, you already know one way to test hypotheses about population parameters by using confidence intervals. In this case, we can calculate the lower- and upper-limits of a 95% confidence interval around the sample mean (54.8) to see if it includes \(\mu\) (59.2):
\[c.i._{.95}=54.8\pm {1.96(S_{\bar{x}})}\] \[S_{\bar{x}}=\frac{15.38}{\sqrt{100}}=1.538\] \[c.i._{.95}=54.8 \pm 1.96(1.538)\] \[c.i._{.95}=54.8 \pm 3.01\] \[51.78\le \mu \le57.81\]
From this sample of 100 employees, after one year of the new policy being in place, we estimate that there is a 95% chance that \(\mu\) is between 51.78 and 57.81, and the probability that \(\mu\) is outside this range is less than .05. Based on this alone we can say there is less than a 5% chance that the number of hours of sick leave taken is the same as it was in the previous year. In other words, there is a fairly high probability that fewer sick leave hours were used in the year after the policy change than in the previous year.
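If you want to verify these limits yourself, a short R sketch like the following reproduces the calculation (this is just an illustration; the object names are not from the chapter):

#Reproduce the 95% confidence interval from the summary statistics
xbar <- 54.8                #sample mean
s <- 15.38                  #sample standard deviation
n <- 100                    #sample size
se <- s/sqrt(n)             #estimated standard error (1.538)
xbar + c(-1.96, 1.96)*se    #lower and upper limits (about 51.8 and 57.8)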
We can be a bit more direct and precise by setting this up as a hypothesis test and then calculating the probability that the null hypothesis is true. First, the null hypothesis.
\[H_{0}:\mu=59.2\]
Note that this is saying that there is no real difference between last year's mean number of sick days (\(\mu\)) and the sample we've drawn from this year (\(\bar{x}\)). Even though the sample mean looks different from 59.2, the true population mean is 59.2 and the sample statistic is just a result of random sampling error. After all, if the population mean is equal to 59.2, any sample drawn from that population will produce a mean that is different from 59.2, due to sampling error. In other words, H0 is saying that the new policy had no effect, even though the sample mean suggests otherwise.
Because the county analyst is interested in whether the new policy reduced the use of sick leave hours, the alternative hypothesis is:
\[H_{1}:\mu < 59.2\]
Here, we are saying that the sample statistic is different enough from the hypothesized population value (59.2) that it is unlikely to be the result of random chance, and the population value is less than 59.2.
Note here that we are not testing whether the number of sick days is equal to 54.8 (the sample mean). Instead, we are testing whether the average hours of sick leave taken this year is lower than the number of sick days taken last year. The alternative hypothesis reflects what we really think is happening; it is what we're really interested in. However, we cannot test the alternative hypotheses directly. Instead, we examine the null hypothesis as a way of gathering evidence to support the alternative.
So, the question we need to answer in order to test the null hypothesis is, how likely is it that a sample mean of this magnitude (54.8) could be drawn from a population in which \(\mu = 59.2\)? We know that we would get lots of different mean outcomes if we took repeated samples from this population. We also know that most of them would be clustered near \(\mu\) and a few would be relatively far away from \(\mu\) at both ends of the distribution. All we have to do is estimate the probability of getting a sample mean of 54.8 from a population in which \(\mu = 59.2\). If the probability of drawing \(\bar{x}\) from \(\mu\) is small enough, then we can reject H0.
How do we assess this probability? By using what we know about sampling distributions. Check out the figure below, which illustrates the logic of hypothesis testing using a theoretical distribution:
Figure 9.1: The Logic of Hypothesis Testing
Suppose we draw a sample mean equal to -1.96 from a population in which \(\mu=0\) and the standard error equals 1 (this, of course, is a normal distribution). We can calculate the probability of \(\bar{x}\le-1.96\) by estimating the area under the curve to the left of -1.96. The area on the tail of the distribution used for hypothesis testing is referred to as the \(\alpha\) (alpha) area. We know that this \(\alpha\) area is equal to .025 (How do we know this? Check out the discussion of the z-distribution from the earlier chapters), so we can say that the probability of drawing a sample mean less than or equal to -1.96 from a population in which \(\mu=0\) is about .025. What does this mean in terms of H0? It means that the probability that \(\mu=0\) is about .025, which is pretty low, so we reject the null hypothesis and conclude that \(\mu<0\). The smaller the p-value, the less likely it is that H0 is true.
Critical Values. A common and fairly quick way to use the z-score in hypothesis testing is by comparing it to the critical value (c.v.) for z. The c.v. is the z-score associated with the probability level required to reject the null hypothesis. To determine the critical value of z, we need to determine what the probability threshold is for rejecting the null hypothesis. It is fairly standard to consider any probability level lower than .05 sufficient for rejecting the null hypothesis in the social sciences. This probability level is also known as the significance level.
So, typically, the critical value is the z-score that gives us .05 as the area on the tail (left in this case) of the normal distribution. Looking at the z-score table from Chapter 6, or using the qnorm function in R, we see that this is z = -1.645. The area beyond the critical value is referred to as the critical region, and is sometimes also called the area of rejection: if the z-score falls in this region, the null hypothesis is rejected.
#Get the z-score for .05 area at the lower tail of the distribution
qnorm(.05, lower.tail = T)
[1] -1.644854
Once we have the \(c.v.\) we can calculate the z-score for the difference between \(\bar{x}\) and \(\mu\). If \(|z| > |z_{cv}|\), then we reject the null hypothesis:
So let's get back to the sick leave example.
First, what's the critical value? -1.65 (make sure you understand why this is the value)
What is the obtained value of z?
\[z=\frac{\bar{x}-\mu}{S_{\bar{x}}} = \frac{54.8-59.2}{1.538} = \frac{-4.4}{1.538}= -2.86\]
If the |z| is greater than the |c.v.|, then reject H0. If the |z| is less than the critical value, then fail to reject H0
In this case z (-2.86) is of much greater (absolute) magnitude than c.v. (-1.65), so we reject the null hypothesis and conclude that \(\mu\) is probably less than 59.2. By rejecting the null hypothesis we build a case for the alternative hypothesis, though we never test the alternative directly. One way of thinking about this is that there is less than a .05 probability that H0 is true. We are saying that this probability is small enough that we are confident in rejecting H0.
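The same comparison can be carried out in a few lines of R (again, just an illustrative sketch using the numbers from the example):

#Compare the obtained z-score to the critical value
z_obt <- (54.8 - 59.2)/1.538    #obtained z-score (about -2.86)
z_cv <- qnorm(.05)              #one-tailed critical value (about -1.645)
abs(z_obt) > abs(z_cv)          #TRUE, so we reject H0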
We can be a bit more precise about the level of confidence in rejecting the null hypothesis (the level of significance) by estimating the alpha area to the left of z=-2.86:
#Area under to curve to the left of -2.86
pnorm(-2.86)
[1] 0.002118205
This alpha area (or p-value) is close to zero, meaning that there is little chance that there was no change in sick leave usage. Check out Figure 9.2 as an illustration of how unlikely it is to get a sample mean of 54.8 (thin solid line) from a population in which \(\mu=59.2\) (thick solid line), based on our sample statistics. Remember, the area to the left of the critical value (dashed line) is the critical region, equal to .05 of the area under the curve, and the sample mean is far to the left of this point.
One useful way to think about this p-value is that if we took 1000 samples of 100 workers from a population in which \(\mu=59.2\) and calculated the mean hours of sick leave taken for each sample, only two samples would give you a result equal to or less than 54.8 simply due to sampling error. In other words, there is a 2/1000 chance that the sample mean was the result of random variation instead of representing a real difference from the hypothesized value.
Figure 9.2: An Illustration of Key Concepts in Hypothesis Testing
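To make the "2 in 1000" interpretation concrete, here is a small simulation sketch (not part of the original chapter; it assumes, purely for illustration, that sick leave hours are roughly normally distributed in the population):

#Simulate 1000 samples of 100 workers from a population in which mu = 59.2
set.seed(123)     #for reproducibility
means <- replicate(1000, mean(rnorm(100, mean = 59.2, sd = 15.38)))
sum(means <= 54.8)     #roughly 2 or 3 of the 1000 sample means are this low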
Note that we were explicitly testing a one-tailed hypothesis in the example above. We were saying that we expect a reduction in the number of sick days due to the new policy. But suppose someone wanted to argue that there was a loophole in the new policy that might make it easier for people to take sick days. These sorts of unintended consequences almost always occur with new policies. Given that it could go either way (\(\mu\) could be higher or lower than 59.2), we might want to test a two-tailed hypothesis, that the new policy could create a difference in sick day use–maybe positive, maybe negative.
\(H_{1}:\mu \ne 59.2\)
The process for testing two-tailed hypotheses is exactly the same, except that we use a larger critical value because even though the \(\alpha\) area is the same (.05), we must now split it between two tails of the distribution. Again, this is because we are not sure if the policy will increase or decrease sick leave. When the alternative hypothesis does not specify a direction, we use the two-tailed test.
Figure 9.3: Critical Values for One and Two-tailed Tests
The figure below illustrates the difference in critical values for one- and two-tailed hypothesis tests. Since we are splitting .05 between the two tails, the c.v. for a two-tailed test is now the z-score that gives us .025 as the area beyond z at the tails of the distribution. Using the qnorm function in R (below), we see that this is z= 1.96, so the critical value for the two-tailed test is 1.96.
#Z-score for .025 area at one tail of the distribution
qnorm(.025)
If we obtain a z-score (positive or negative) that is larger in absolute magnitude than this, we reject H0. Using a two-tailed test requires a larger z-score, making it slightly harder to reject the null hypothesis. However, since the z-score in the sick leave example was -2.86, we would still reject H0 under a two-tailed test.
In truth, the choice between a one- or two-tailed test rarely makes a difference in rejecting or failing to reject the null hypothesis. The choice matters most when the p-value from a one-tailed test is greater than .025, in which case it would be greater than .05 in a two-tailed test. It is worth scrutinizing findings from one-tailed tests that are just barely statistically significant to see if a two-tailed test would be more appropriate. Because the two-tailed test provides a more conservative basis for rejecting the null hypothesis, researchers often choose to report two-tailed significance levels even when a one-tailed test could be justified. Many statistical programs, including R, report two-tailed p-values by default.
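You can see how this plays out with a couple of quick calculations (an illustrative sketch; the borderline value of -1.8 is chosen only as an example):

#One-tailed vs. two-tailed p-values
pnorm(-2.86)      #one-tailed p-value, about .002
2*pnorm(-2.86)    #two-tailed p-value, about .004, still well below .05
pnorm(-1.8)       #a more borderline case: one-tailed p about .036
2*pnorm(-1.8)     #two-tailed p about .072, no longer significant at .05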
Thus far, we have focused on using z-scores and the z-distribution for testing hypotheses and constructing confidence intervals. Another distribution available to us is the t-distribution. The t-distribution has an important advantage over the z-distribution: it does not assume that we know the population standard error. This is very important because we rarely know the population standard error. In other words, the t-distribution assumes that we are using an estimate of the standard error. The estimate of the standard error is:
\[\hat{\sigma}_{\bar{x}}=S_{\bar{x}}=\frac{S}{\sqrt{N}}\]
\(S_{\bar{x}}\) is our best guess for \(\sigma_{\bar{x}}\), but it is based on a sample statistic, so it does involve some level of error.
In recognition of the fact that we are estimating the standard error with sample data rather than the population, the t-distribution is somewhat flatter (see Figure 9.4 below) than the z-distribution. Comparing the two distributions, you can see that they are both perfectly symmetric but that the t-distribution is a bit more squat and has slightly fatter tails. This means that the critical value for a given level of significance will be larger in magnitude for a t-score than for a z-score. This difference is especially noticeable for small samples and virtually disappears for samples greater than 100, at which point the t-distribution becomes almost indistinguishable from the z-distribution (see Figure 9.5).
Figure 9.4: Comparison of Normal and t-Distributions
Now, here's the fun part—the t-score is calculated the same way as the z-score. We do nothing different than what we did to calculate the z-score.
\[t=\frac{\bar{x}-\mu}{S_{\bar{x}}}\]
We use the t-score and the t-distribution in the same way and for the same purposes that we use the z-score.
Choose a p-value or level of significance (\(\alpha\)) for rejecting H0. (Usually .05)
Find the critical value of t associated with \(\alpha\) (depends on degrees of freedom)
Calculate the t-score from the sample data.
Compare t-score to c.v. If \(|t| > c.v.\), then reject H0; if \(|t| < c.v.\), then fail to reject.
While everything else looks about the same as the process for hypothesis testing with z-scores, determining the critical value for a t-distribution is somewhat different and depends upon sample size. This is because we have to consider something called degrees of freedom (df), essentially taking into account the issue discussed in Chapter 8, that sample data tend to slightly underestimate the variance and standard deviation and that this underestimation is a bigger problem with small samples. For testing hypotheses about a single mean, degrees of freedom equal:
\[df=n-1\]
So for the sick leave example used above:
\[df=100-1=99\]
You can see the impact of sample size (through degrees of freedom) on the shape of the t-distribution in figure 9.5: as sample size and degrees of freedom increase, the t-distribution grows more and more similar to the normal distribution. At df=100 (not shown here) the t-distribution is virtually indistinguishable from the z-distribution.
Figure 9.5: Degrees of Freedom and Resemblance of t-distribution to the Normal Distribution
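You can also see this convergence numerically by asking R for the .05 critical value of t at several different degrees of freedom (a quick illustrative check, not from the original text):

#Critical values of t shrink toward the z critical value as df increases
qt(.05, df = c(5, 30, 100, 1000))    #about -2.02, -1.70, -1.66, -1.65
qnorm(.05)                           #about -1.645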
There are two different methods you can use to find the critical value of t for a given level of degrees of freedom. We can go "old school" and look it up in a t-distribution table (below)[25], or we can ask R to figure it out for us. It's easier to rely on R for this, but there is some benefit to going old school at least once. In particular, it helps reinforce how degrees of freedom, significance levels, and critical values fit together. You should follow along.
The first step is to decide if you are using a one-tailed or two-tailed test, and then decide what the desired level of significance is. For instance, for the sick leave policy example, we can assume a one-tailed test with a .05 level of significance. The relevant column of the table is found by going across the top row of p-values to the column headed by .05. Then, scan down the column until we find the point where it intersects with the appropriate degrees of freedom row. In this example, df=99, but there is no listing for df=99 in the table, so we err on the side of caution and use the next lowest value, 90. The .05 one-tailed level of significance column intersects with the df=90 row at t=1.662, so -1.662 is the critical value of t in the sick leave example. Note that it is only slightly different from the c.v. for z we used in the sick leave calculations, -1.65. This is because the sample size is relatively large (in statistical terms) and the t-distribution closely approximates the z-distribution for large samples. So, in this case the z- and t-distributions lead to the same outcome: we decide to reject H0.

Table 9.1: T-score Critical Values at Different P-values and Degrees of Freedom
Alternatively, we could ask R to provide this information using the qt function. For this, you need to declare the desired p-value and specify the degrees of freedom, and R reports the critical value:
#Calculate t-score for .05 at one tail, with df=99
#The command is: qt(alpha, df)
qt(.05, 99)
By default, qt() provides the critical values for a specified alpha area at the lower tail of the distribution (hence, -1.66). For a two-tailed test, you need to cut the alpha area in half:
#Calculate t-score for .025 at one tail, with df=99
qt(.025, 99)
Here, R reports a critical value of \(\pm 1.984\) for a two-tailed test from a sample with df=99. Again, this is slightly larger than the critical value for a z-score (1.96). If you used the t-score table to do this the old-school way, you would find the critical value is t=1.99, for df=90. The results from using the qt function are more accurate than from using the t-table since you are able to specify the correct degrees of freedom.
Whether using a one- or two-tailed test, the conclusion for the sick leave example is unaffected: the t-score obtained from the sample (-2.86) is in the critical region, so we reject H0.
We can also get a bit more precise estimate of the probability of getting a sample mean of 54.8 from a population in which \(\mu\)=59.2 by asking R to tell us the area under the curve to the left of t=-2.86:
1-(pt(2.86,df=99))
Note that this result is very similar to what we obtained when using the z-distribution (.002118). For a two-tailed test using the t-distribution, we double this to find a p-value equal to .005167.
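In R, that doubling step can be done directly (a one-line illustrative check):

#Two-tailed p-value for t = -2.86 with df = 99
2*pt(-2.86, df = 99)    #about .005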
As discussed in Chapter 8, the logic of hypothesis testing about mean values also applies to proportions. For example, in the sick leave example, instead of testing whether \(\mu=59.2\) we could test a hypothesis regarding the proportion of employees who take a certain number of sick days. Let's suppose that in the year before the new policy went into effect, 50% of employees took at least 7 sick days. If the new policy has an impact, then the proportion of employees taking at least 7 days of sick leave during the year after the change in policy should be lower than .50. In the sample of 100 employees used above, the proportion of employees taking at least 7 sick days was .41. In this case, the null and alternative hypotheses are:
H0: P=.50
H1: P<.50
To review: in the previous example, to test the null hypothesis we established a desired level of statistical significance (.05), determined the critical value for the t-score (-1.66), calculated the t-statistic, and compared it to the critical value. There are a couple of differences, however, when working with hypotheses about the population value of proportions.
Because we can calculate the population standard deviation based on the hypothesized value of P (.5), we can use the z-distribution rather than the t-distribution to test the null hypothesis. To calculate the z-score, we use the same formula as before:
\[z=\frac{p-P}{S_{p}}\] Where:
\[S_{p}=\sqrt{\frac{P(1-P)}{n}}\]
Using the data from the problem, this gives us:
\[z=\frac{p-P}{S_{p}}=\frac{.41-.5}{\sqrt{\frac{.5(.5)}{100}}}=\frac{-.09}{.05}=-1.8\]
We know from before that the critical value for a one-tailed test using the z-distribution is -1.65. Since this z-score is larger (in absolute terms) than the critical value, we can reject the null hypothesis and conclude that the proportion of employees using at least 7 days of sick leave per year is lower than it was in the year before the new sick leave policy went into effect.
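Here is a brief sketch of the same calculation in R (the object names are just for illustration):

#Z-score for the difference between the sample proportion and P under H0
p_hat <- .41                    #sample proportion
P0 <- .50                       #hypothesized population proportion
n <- 100                        #sample size
se_p <- sqrt(P0*(1 - P0)/n)     #standard error under H0 (.05)
(p_hat - P0)/se_p               #z-score (-1.8)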
Again, we can be a bit more specific about the p-value:
1-pnorm(1.8)
[1] 0.03593032
Here are a couple of things to think about with this finding. First, while the p-value is lower than .05, it is not much lower. In this case, if you took 1000 samples of 100 workers from a population in which \(P=.50\) and calculated the proportion who took 7 or more sick days, approximately 36 of those samples would produce a proportion equal to .41 or lower, just due to sampling error. This still means that the probability of getting this sample finding from a population in which the null hypothesis was true is pretty small (.03593), so we should be comfortable rejecting the null hypothesis. But what if there were good reasons to use a two-tailed test? Would we still reject the null hypothesis? No, because the critical value (-1.96) would be larger in absolute terms than the z-score, and the p-value would be .07186. These findings stand in contrast to those from the analysis of the average number of sick days taken, where the p-values for both one- and two-tailed tests were well below the .05 cut-off level.
One of the take-home messages from this example is that our confidence in findings is sometimes fragile, since "significance" can be a function of how you frame the hypothesis test (one- or two-tailed test?) or how you measure your outcomes (average hours of sick days taken, or proportion who take a certain number of sick days). For this reason, it is always a good idea to be mindful of how the choices you make might influence your findings.
Let's say you are looking at data on public perceptions of the presidential candidates in 2020 and you have a sense that people had mixed feelings about the Democratic nominee, Joe Biden, going into the election. This leads you to expect that his average rating on the 0 to 100 feeling thermometer scale from the ANES was probably about 50. You decide to test this directly with the anes20 data set.
The null hypothesis is:
H0: \(\mu=50\)
Because there are good arguments for expecting the mean to be either higher or lower than 50, the alternative hypothesis is two-tailed:
H1: \(\mu\ne50\)
First, you get the sample mean:
#Get the sample mean for Biden's feeling thermometer rating
mean(anes20$V202143, na.rm=T)
[1] 53.41213
Here, you see that the mean feeling thermometer rating for Biden in the fall of 2020 was 53.41. This is higher than what you thought it would be (50), but you know that it's possible to get a sample outcome of 53.41 from a population in which the mean is actually 50, so you need to do a t-test to rule out sampling error as the reason for the difference.
In R, the command for a one-sample two-tailed t-test is relatively simple: you just have to specify the variable of interest and the value of \(\mu\) under the null hypothesis:
#Use 't.test' and specify the variable and mu
t.test(anes20$V202143, mu=50)
One Sample t-test
data: anes20$V202143
t = 8.1805, df = 7368, p-value = 3.303e-16
alternative hypothesis: true mean is not equal to 50
95 percent confidence interval:
sample estimates:
mean of x 
 53.41213
These results are pretty conclusive: the t-score is 8.2 and the p-value is very close to zero.[26] Also, if it makes more sense for you to think of this in terms of a confidence interval, the 95% confidence interval ranges from about 52.6 to 54.2, which does not include 50. We should reject the null hypothesis and conclude instead that Biden's feeling thermometer rating in the fall of 2020 was greater than 50.
Even though Joe Biden's feeling thermometer rating was greater than 50, from a substantive perspective it is important to note that a score of 53 does not mean Biden was wildly popular, just that his rating was greater than 50. This point is addressed at greater length in the next several chapters, where we explore measures of substantive importance that can be used to complement measures of statistical significance.
The last three chapters have given you a foundation in the principles and mechanics of sampling, statistical inference, and hypothesis testing. Everything you have learned thus far is interesting and important in its own right, but what is most exciting is that it prepares you for testing hypotheses about outcomes of a dependent variable across two or more categories of an independent variable. In other words, you now have the tools necessary to begin looking at relationships among variables. We take this up in the next chapter by looking at differences in outcomes across two groups. Following that, we test hypotheses about outcomes across multiple groups in Chapters 11 through 13. In each of the next several chapters, we continue to focus on methods of statistical inference, exploring alternative ways to evaluate statistical significance. At the same time, we also introduce the idea of evaluating the strength of relationships by focusing on measures of effect size. Both of these concepts–statistical significance and effect size–continue to play an important role in the remainder of the book.
The survey of 300 college students introduced in the end-of-chapter exercises in Chapter 8 found that the average semester expenditure was $350 with a standard deviation of $78. At the same time, campus administration has done an audit of required course materials and claims that the average cost of books and supplies for a single semester should be no more than $340. In other words, the administration is saying the population value is $340.
State a null and alternative hypothesis to test the administration's claim. Did you use a one- or two-tailed alternative hypothesis? Explain your choice
Test the null hypothesis and discuss the findings. Show all calculations
The same survey reports that among the 300 students, 55% reported being satisfied with the university's response to the COVID-19 pandemic. The administration hailed this finding as evidence that a majority of students support the course they've taken in reaction to the pandemic. (Hint: this is a "proportion" problem)
For this assignment, you should use the feeling thermometers for Donald Trump (anes20$V202144), liberals (anes20$V202161), and conservatives (anes20$V202164).
Using descriptive statistics and either a histogram, boxplot, or density plot, describe the central tendency and distribution of each feeling thermometer.
Use the t.test function to test the null hypotheses that the mean for each of these variables in the population is equal to 50. State the null and alternative hypotheses and interpret the findings from the t-test.
Taking these findings into account, along with the analysis of Joe Biden's feeling thermometer at the end of the chapter, do you notice any apparent contradictions in American public opinion? Explain.
[25] The code for generating this table comes from Ben Bolker via stackoverflow (https://stackoverflow.com/questions/31637388/).
[26] Remember that 3e-16 is scientific notation and means that you should move the decimal point 16 places to the left of 3. This means that p=.0000000000000003.
BCIT Astronomy 7000: A Survey of Astronomy
By the end of this section, you will be able to:
Explain what space weather is and how it affects Earth
In the previous sections, we have seen that some of the particles coming off the Sun—either steadily as in the solar wind or in great bursts like CMEs—will reach Earth and its magnetosphere (the zone of magnetic influence that surrounds our planet). As if scientists did not have enough trouble trying to predict weather on Earth, this means that they are now facing the challenge of predicting the effects of solar storms on Earth. This field of research is called space weather; when that weather turns stormy, our technology turns out to be at risk.
With thousands of satellites in orbit, astronauts taking up long-term residence in the International Space Station, millions of people using cell phones, GPS, and wireless communication, and nearly everyone relying on the availability of dependable electrical power, governments are now making major investments in trying to learn how to predict when solar storms will occur and how strongly they will affect Earth.
What we now study as space weather was first recognized (though not yet understood) in 1859, in what is now known as the Carrington Event. In early September of that year, two amateur astronomers, including Richard Carrington in England, independently observed a solar flare. This was followed a day or two later by a significant solar storm reaching the region of Earth's magnetic field, which was soon overloaded with charged particles (see Earth as a Planet).
As a result, aurora activity was intense and the northern lights were visible well beyond their normal locations near the poles—as far south as Hawaii and the Caribbean. The glowing lights in the sky were so intense that some people reported getting up in the middle of the night, thinking it must be daylight.
The 1859 solar storm happened at a time when a new technology was beginning to tie people in the United States and some other countries together: the telegraph system. This was a machine and network for sending messages in code through overhead electrical wires (a bit like a very early version of the internet). The charged particles that overwhelmed Earth's magnetic field descended toward our planet's surface and affected the wires of the telegraph system. Sparks were seen coming out of exposed wires and out of the telegraph machines in the system's offices.
The observation of the bright flare that preceded these effects on Earth led to scientific speculation that a connection existed between solar activity and impacts on Earth—this was the beginning of our understanding of what today we call space weather.
Watch NASA scientists answer some questions about space weather, and discuss some effects it can have in space and on Earth.
Sources of Space Weather
Three solar phenomena—coronal holes, solar flares, and CMEs—account for most of the space weather we experience. Coronal holes allow the solar wind to flow freely away from the Sun, unhindered by solar magnetic fields. When the solar wind reaches Earth, as we saw, it causes Earth's magnetosphere to contract and then expand after the solar wind passes by. These changes can cause (usually mild) electromagnetic disturbances on Earth.
More serious are solar flares, which shower the upper atmosphere of Earth with X-rays, energetic particles, and intense ultraviolet radiation. The X-rays and ultraviolet radiation can ionize atoms in Earth's upper atmosphere, and the freed electrons can build up a charge on the surface of a spacecraft. When this static charge discharges, it can damage the electronics in the spacecraft—just as you can receive a shock when you walk across a carpet in your stocking feet in a dry climate and then touch a light switch or some other metal object.
Most disruptive are coronal mass ejections. A CME is an erupting bubble of tens of millions of tons of gas blown away from the Sun into space. When this bubble reaches Earth a few days after leaving the Sun, it heats the ionosphere, which expands and reaches farther into space. As a consequence, friction between the atmosphere and spacecraft increases, dragging satellites to lower altitudes.
At the time of a particularly strong flare and CME in March 1989, the system responsible for tracking some 19,000 objects orbiting Earth temporarily lost track of 11,000 of them because their orbits were changed by the expansion of Earth's atmosphere. During solar maximum, a number of satellites are brought to such a low altitude that they are destroyed by friction with the atmosphere. Both the Hubble Space Telescope and the International Space Station (Figure 1) require reboosts to a higher altitude so that they can remain in orbit.
International Space Station.
Figure 1. The International Space Station is seen above Earth, as photographed in 2010 by the departing crew of the Space Shuttle Atlantis. (credit: NASA)
Solar Storm Damage on Earth
When a CME reaches Earth, it distorts Earth's magnetic field. Since a changing magnetic field induces electrical current, the CME accelerates electrons, sometimes to very high speeds. These "killer electrons" can penetrate deep into satellites, sometimes destroying their electronics and permanently disabling operation. This has happened with some communications satellites.
Disturbances in Earth's magnetic field can cause disruptions in communications, especially cell phone and wireless systems. In fact, disruptions can be expected to occur several times a year during solar maximum. Changes in Earth's magnetic field due to CMEs can also cause surges in power lines large enough to burn out transformers and cause major power outages. For example, in 1989, parts of Montreal and Quebec Province in Canada were without power for up to 9 hours as a result of a major solar storm. Electrical outages due to CMEs are more likely to occur in North America than in Europe because North America is closer to Earth's magnetic pole, where the currents induced by CMEs are strongest.
Besides changing the orbits of satellites, CMEs can also distort the signals sent by them. These effects can be large enough to reduce the accuracy of GPS-derived positions so that they cannot meet the limits required for airplane systems, which must know their positions to within 160 feet. Such disruptions caused by CMEs have occasionally forced the Federal Aviation Administration to restrict flights for minutes or, in a few cases, even days.
Solar storms also expose astronauts, passengers in high-flying airplanes, and even people on the surface of Earth to increased amounts of radiation. Astronauts, for example, are limited in the total amount of radiation to which they can be exposed during their careers. A single ill-timed solar outburst could end an astronaut's career. This problem becomes increasingly serious as astronauts spend more time in space. For example, the typical daily dose of radiation aboard the Russian Mir space station was equivalent to about eight chest X-rays. One of the major challenges in planning the human exploration of Mars is devising a way to protect astronauts from high-energy solar radiation.
Advance warning of solar storms would help us minimize their disruptive effects. Power networks could be run at less than their full capacity so that they could absorb the effects of power surges. Communications networks could be prepared for malfunctions and have backup plans in place. Spacewalks could be timed to avoid major solar outbursts. Scientists are now trying to find ways to predict where and when flares and CMEs will occur, and whether they will be big, fast events or small, slow ones with little consequence for Earth.
The strategy is to relate changes in the appearance of small, active regions and changes in local magnetic fields on the Sun to subsequent eruptions. However, right now, our predictive capability is still poor, and so the only real warning we have is from actually seeing CMEs and flares occur. Since a CME travels outward at about 500 kilometers per second, an observation of an eruption provides several days warning at the distance of Earth. However, the severity of the impact on Earth depends on how the magnetic field associated with the CME is oriented relative to Earth's magnetic field. The orientation can be measured only when the CME flows past a satellite we have put up for this purpose. However, it is located only about an hour upstream from Earth.
Space weather predictions are now available online to scientists and the public. Outlooks are given a week ahead, bulletins are issued when there is an event that is likely to be of interest to the public, and warnings and alerts are posted when an event is imminent or already under way (Figure 2).
NOAA Space Weather Prediction Operations Center.
Figure 2. Bill Murtagh, a space weather forecaster, leads a workshop on preparedness for events like geomagnetic storms. (credit: modification of work by FEMA/Jerry DeFelice)
To find public information and alerts about space weather, you can turn to the National Space Weather Prediction Center or SpaceWeather for consolidated information from many sources.
Fortunately, we can expect calmer space weather for the next few years, since the most recent solar maximum, which was relatively weak, occurred in 2014, and scientists believe the current solar cycle to be one of the least active in recent history. We expect more satellites to be launched that will allow us to determine whether CMEs are headed toward Earth and how big they are. Models are being developed that will then allow scientists to use early information about the CME to predict its likely impact on Earth.
The hope is that by the time of the next maximum, solar weather forecasting will have some of the predictive capability that meteorologists have achieved for terrestrial weather at Earth's surface. However, the most difficult events to predict are the largest and most damaging storms—hurricanes on Earth and extreme, rare storm events on the Sun. Thus, it is inevitable that the Sun will continue to surprise us.
The Timing of Solar Events
A basic equation is useful in figuring out when events on the Sun will impact Earth:
$$\text{distance}=\text{velocity}\times\text{time}, \quad \text{or} \quad D=v\times t$$
Dividing both sides by v, we get
$$T=D/v$$
Suppose you observe a major solar flare while astronauts are orbiting Earth. If the average speed of the solar wind is 400 km/s and the distance to the Sun is 1.496 × 10^8 km, how long will it be before the charged particles ejected from the Sun during the flare reach the space station?
The time required for solar wind particles to reach Earth is T = D/v.
$$\frac{1.496\times 10^{8}\ \text{km}}{400\ \text{km/s}}=3.74\times 10^{5}\ \text{s}, \quad \text{or} \quad \frac{3.74\times 10^{5}\ \text{s}}{60\ \text{s/min}\times 60\ \text{min/h}\times 24\ \text{h/d}}=4.3\ \text{d}$$
Check Your Learning
How many days would it take for the particles to reach Earth if the solar wind speed increased to 500 km/s?
$$\frac{1.496\times 10^{8}\ \text{km}}{500\ \text{km/s}}=2.99\times 10^{5}\ \text{s}, \quad \text{or} \quad \frac{2.99\times 10^{5}\ \text{s}}{60\ \text{s/min}\times 60\ \text{min/h}\times 24\ \text{h/d}}=3.46\ \text{d}$$
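If you prefer to let a computer do the arithmetic, a tiny helper function (written in R purely for illustration; nothing in the textbook requires it) performs the same conversion:

#Travel time in days for particles covering the Sun-Earth distance
travel_time_days <- function(speed_km_s, distance_km = 1.496e8) {
  (distance_km / speed_km_s) / (60*60*24)    #seconds converted to days
}
travel_time_days(400)    #about 4.3 days
travel_time_days(500)    #about 3.5 days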
Earth's Climate and the Sunspot Cycle: Is There a Connection?
While the Sun rises faithfully every day at a time that can be calculated precisely, scientists have determined that the Sun's energy output is not truly constant but varies over the centuries by a small amount—probably less than 1%. We've seen that the number of sunspots varies, with the time between sunspot maxima of about 11 years, and that the number of sunspots at maximum is not always the same. Considerable evidence shows that between the years 1645 and 1715, the number of sunspots, even at sunspot maximum, was much lower than it is now. This interval of significantly low sunspot numbers was first noted by Gustav Spörer in 1887 and then by E. W. Maunder in 1890; it is now called the Maunder Minimum. The variation in the number of sunspots over the past three centuries is shown in Figure 3. Besides the Maunder Minimum in the seventeenth century, sunspot numbers were somewhat lower during the first part of the nineteenth century than they are now; this period is called the Little Maunder Minimum.
Numbers of Sunspots over Time.
Figure 3. This diagram shows how the number of sunspots has changed with time since counts of the numbers of spots began to be recorded on a consistent scale. Note the low number of spots during the early years of the nineteenth century, the Little Maunder Minimum. (credit: modification of work by NASA/ARC)
When the number of sunspots is high, the Sun is active in various other ways as well, and, as we will see in several sections below, some of this activity affects Earth directly. For example, there are more auroral displays when the sunspot number is high. Auroras are caused when energetic charged particles from the Sun interact with Earth's magnetosphere, and the Sun is more likely to spew out particles when it is active and the sunspot number is high. Historical accounts also indicate that auroral activity was abnormally low throughout the several decades of the Maunder Minimum.
The Maunder Minimum was a time of exceptionally low temperatures in Europe—so low that this period is described as the Little Ice Age. This coincidence in time caused scientists to try to understand whether small changes in the Sun could affect the climate on Earth. There is clear evidence that it was unusually cold in Europe during part of the seventeenth century. The River Thames in London froze at least 11 times, ice appeared in the oceans off the coasts of southeast England, and low summer temperatures led to short growing seasons and poor harvests. However, whether and how changes on the Sun on this timescale influence Earth's climate is still a matter of debate among scientists.
Other small changes in climate like the Little Ice Age have occurred and have had their impacts on human history. For example, explorers from Norway first colonized Iceland and then reached Greenland by 986. From there, they were able to make repeated visits to the northeastern coasts of North America, including Newfoundland, between about 1000 and 1350. (The ships of the time did not allow the Norse explorers to travel all the way to North America directly, but only from Greenland, which served as a station for further exploration.)
Most of Greenland is covered by ice, and the Greenland station was never self-sufficient; rather, it depended on imports of food and other goods from Norway for its survival. When a little ice age began in the thirteenth century, voyaging became very difficult, and support of the Greenland colony was no longer possible. The last-known contact with it was made by a ship from Iceland blown off course in 1410. When European ships again began to visit Greenland in 1577, the entire colony there had disappeared.
The estimated dates for these patterns of migration follow what we know about solar activity. Solar activity was unusually high between 1100 and 1250, which includes the time when the first European contacts were made with North America. Activity was low from 1280 to 1340 and there was a little ice age, which was about the time regular contact with North America and between Greenland and Europe stopped.
One must be cautious, however, about assuming that low sunspot numbers or variations in the Sun's output of energy caused the Little Ice Age. There is no satisfactory model that can explain how a reduction in solar activity might cause cooler temperatures on Earth. An alternative possibility is that the cold weather during the Little Ice Age was related to volcanic activity. Volcanoes can eject aerosols (tiny droplets or particles) into the atmosphere that efficiently reflect sunlight. Observations show, for example, that the Pinatubo eruption in 1991 ejected SO2 aerosols into the atmosphere, which reduced the amount of sunlight reaching Earth's surface enough to lower global temperatures by 0.4 °C.
Satellite data show that the energy output from the Sun during a solar cycle varies by only about 0.1%. We know of no physical process that would explain how such a small variation could cause global temperature changes. The level of solar activity may, however, have other effects. For example, although the Sun's total energy output varies by only 0.1% during a solar cycle, its extreme ultraviolet radiation is 10 times higher at times of solar maximum than at solar minimum. This large variation can affect the chemistry and temperature structure of the upper atmosphere. One effect might be a reduction in the ozone layer and a cooling of the stratosphere near Earth's poles. This, in turn, could change the circulation patterns of winds aloft and, hence, the tracks of storms. There is some recent evidence that variations in regional rainfall correlate better with solar activity than does the global temperature of Earth. But, as you can see, the relationship between what happens on the Sun and what happens to Earth's climate over the short term is still an area that scientists are investigating and debating.
Whatever the effects of solar activity may be on local rainfall or temperature patterns, we want to emphasize one important idea: Our climate change data and the models developed to account for the data consistently show that solar variability is not the cause of the global warming that has occurred during the past 50 years.
Key Concepts and Summary
Space weather is the effect of solar activity on our own planet, both in our magnetosphere and on Earth's surface. Coronal holes allow more of the Sun's material to flow out into space. Solar flares and coronal mass ejections can cause auroras, disrupt communications, damage satellites, and cause power outages on Earth.
For Further Exploration
Berman, B. "How Solar Storms Could Shut Down Earth." Astronomy (September 2013): 22. Up-to-date review of how events on the Sun can hurt our civilization.
Frank, A. "Blowin' in the Solar Wind." Astronomy (October 1998): 60. On results from the SOHO spacecraft.
Holman, G. "The Mysterious Origins of Solar Flares." Scientific American (April 2006): 38. New ideas involving magnetic reconnection and new observations of flares.
James, C. "Solar Forecast: Storm Ahead." Sky & Telescope (July 2007): 24. Nice review of the effects of the Sun's outbursts on Earth and how we monitor "space weather."
Schaefer, B. "Sunspots That Changed the World." Sky & Telescope (April 1997): 34. Historical events connected with sunspots and solar activity.
Schrijver, C. and Title, A. "Today's Science of the Sun." Sky & Telescope (February 2001): 34; (March 2001): 34. Excellent reviews of recent results about the solar atmosphere.
Wadhwa, M. "Order from Chaos: Genesis Samples the Solar Wind." Astronomy (October 2013): 54. On a satellite that returned samples of the Sun's wind.
Dr. Sten Odenwald's "Solar Storms" site: http://www.solarstorms.org/.
ESA/NASA's Solar & Heliospheric Observatory: http://sohowww.nascom.nasa.gov. A satellite mission with a rich website to explore.
High Altitude Observatory Introduction to the Sun: http://www.hao.ucar.edu/education/basic.php. For beginners.
NASA's Solar Missions: https://www.nasa.gov/mission_pages/sunearth/missions/index.html. Good summary of the many satellites and missions NASA has.
NOAA Profile of Space Weather: http://www.swpc.noaa.gov/sites/default/files/images/u33/primer_2010_new.pdf. A primer.
NOAA Space Weather Prediction Center Information Pages: http://www.swpc.noaa.gov/content/education-and-outreach. Includes primers, videos, a curriculum and training modules.
Nova Sun Lab: http://www.pbs.org/wgbh/nova/labs/lab/sun/. Videos, scientist profiles, a research challenge related to the active Sun from the PBS science program.
Space Weather: Storms on the Sun: http://www.swpc.noaa.gov/sites/default/files/images/u33/swx_booklet.pdf. An illustrated booklet from NOAA.
Stanford Solar Center: http://solar-center.stanford.edu/. An excellent site with information for students and teachers.
These can tell you and your students more about what's happening on the Sun in real time.
NASA's 3-D Sun: http://3dsun.org/.
NASA Space Weather: https://itunes.apple.com/us/app/nasa-space-weather/id422621403?mt=8.
Solaris Alpha: https://play.google.com/store/apps/details?id=com.tomoreilly.solarisalpha.
Solar Monitor Pro: http://www.solarmonitor.eu/.
Journey into the Sun: https://www.youtube.com/watch?v=fqKFQ7z0Nuk. 2010 KQED Quest TV Program mostly about the Solar Dynamics Observatory spacecraft, its launch and capabilities, but with good general information on how the Sun works (12:24).
NASA | SDO: Three Years in Three Minutes–With Expert Commentary: https://www.youtube.com/watch?v=QaCG0wAjJSY&src. Video of 3 years of observations of the Sun by the Solar Dynamics Observatory made into a speeded up movie, with commentary by solar physicist Alex Young (5:03).
Our Explosive Sun: http://www.youtube.com/watch?v=kI6YGSIJqrE. Video of a 2011 public lecture in the Silicon Valley Astronomy Lecture Series by Dr. Thomas Berger about solar activity and recent satellite missions to observe and understand it (1:20:22).
Out There Raining Fire: http://www.nytimes.com/video/science/100000003489464/out-there-raining-fire.html?emc=eta1. Nice overview and introduction to the Sun by science reporter Dennis Overbye of the NY Times (2:28)
Space Weather Impacts: http://www.swpc.noaa.gov/content/education-and-outreach. Video from NOAA (2:47); https://www.youtube.com/playlist?list=PLBdd8cMH5jFmvVR2sZubIUzBO6JI0Pvx0. Videos from the National Weather Service (four short videos) (14:41).
Space Weather: Storms on the Sun: http://www.youtube.com/watch?v=vWsmp4o-qVg. Science bulletin from the American Museum of Natural History, giving the background to what happens on the Sun to cause space weather (6:10).
Sun Storms: http://www.livescience.com/11754-sun-storms-havoc-electronic-world.html. From the Starry Night company about storms from the Sun now and in the past (4:49).
Sunspot Group AR 2339 Crosses the Sun: http://apod.nasa.gov/apod/ap150629.html. Short video (with music) animates Solar Dynamics Observatory images of an especially large sunspot group going across the Sun's face (1:15).
What Happens on the Sun Doesn't Stay on the Sun: https://www.youtube.com/watch?v=bg_gD2-ujCk. From the National Oceanic and Atmospheric Administration: introduction to the Sun, space weather, its effects, and how we monitor it (4:56).
Collaborative Group Activities
Have your group make a list of all the ways the Sun personally affects your life on Earth. (Consider the everyday effects as well as the unusual effects due to high solar activity.)
Long before the nature of the Sun was fully understood, astronomer (and planet discoverer) William Herschel (1738–1822) proposed that the hot Sun may have a cool interior and may be inhabited. Have your group discuss this proposal and come up with modern arguments against it.
We discussed how the migration of Europeans to North America was apparently affected by short-term climate change. If Earth were to become significantly hotter, either because of changes in the Sun or because of greenhouse warming, one effect would be an increase in the rate of melting of the polar ice caps. How would this affect modern civilization?
Suppose we experience another Maunder Minimum on Earth, and it is accompanied by a drop in the average temperature like the Little Ice Age in Europe. Have your group discuss how this would affect civilization and international politics. Make a list of the most serious effects that you can think of.
Watching sunspots move across the disk of the Sun is one way to show that our star rotates on its axis. Can your group come up with other ways to show the Sun's rotation?
Suppose that, in the future, we are able to forecast space weather as well as we forecast weather on Earth, and suppose we have a few days of warning that a big solar storm is coming that will overload Earth's magnetosphere with charged particles and send more ultraviolet and X-rays toward our planet. Have your group discuss what steps we might take to protect our civilization.
Have your group members research online to find out what satellites are in space to help astronomers study the Sun. In addition to searching for NASA satellites, you might also check for satellites launched by the European Space Agency and the Japanese Space Agency.
Some scientists and engineers are thinking about building a "solar sail"—something that can use the Sun's wind or energy to propel a spacecraft away from the Sun. The Planetary Society is a nonprofit organization that is trying to get solar sails launched, for example. Have your group do a report on the current state of solar-sail projects and what people are dreaming about for the future.
1: Describe the main differences between the composition of Earth and that of the Sun.
2: Describe how energy makes its way from the nuclear core of the Sun to the atmosphere. Include the name of each layer and how energy moves through the layer.
3: Make a sketch of the Sun's atmosphere showing the locations of the photosphere, chromosphere, and corona. What is the approximate temperature of each of these regions?
4: Why do sunspots look dark?
5: Which aspects of the Sun's activity cycle have a period of about 11 years? Which vary during intervals of about 22 years?
6: Summarize the evidence indicating that over several hundreds of years or more there have been variations in the level of the solar activity.
7: What is the Zeeman effect and what does it tell us about the Sun?
8: Explain how the theory of the Sun's dynamo results in an average 22-year solar activity cycle. Include the location and mechanism for the dynamo.
9: Compare and contrast the four different types of solar activity above the photosphere.
10: What are the two sources of particles coming from the Sun that cause space weather? How are they different?
11: How does activity on the Sun affect human technology on Earth and in the rest of the solar system?
12: How does activity on the Sun affect natural phenomena on Earth?
Thought Questions
13: [link] indicates that the density of the Sun is 1.41 g/cm³. Since other materials, such as ice, have similar densities, how do you know that the Sun is not made of ice?
14: Starting from the core of the Sun and going outward, the temperature decreases. Yet, above the photosphere, the temperature increases. How can this be?
15: Since the rotation period of the Sun can be determined by observing the apparent motions of sunspots, a correction must be made for the orbital motion of Earth. Explain what the correction is and how it arises. Making some sketches may help answer this question.
16: Suppose an (extremely hypothetical) elongated sunspot forms that extends from a latitude of 30° to a latitude of 40° along a fixed line of longitude on the Sun. How will the appearance of that sunspot change as the Sun rotates? ([link] should help you figure this out.)
17: The text explains that plages are found near sunspots, but [link] shows that they appear even in areas without sunspots. What might be the explanation for this?
18: Why would a flare be observed in visible light, when flares are so much brighter in X-ray and ultraviolet light?
19: How can the prominences, which are so big and 'float' in the corona, stay gravitationally attached to the Sun while flares can escape?
20: If you were concerned about space weather and wanted to avoid it, where would be the safest place on Earth for you to live?
21: Suppose you live in northern Canada and an extremely strong flare is reported on the Sun. What precautions might you take? What might be a positive result?
Figuring for Yourself
22: The edge of the Sun doesn't have to be absolutely sharp in order to look that way to us. It just has to go from being transparent to being completely opaque in a distance that is smaller than your eye can resolve. Remember from Astronomical Instruments that the ability to resolve detail depends on the size of the telescope's aperture. The pupil of your eye is very small relative to the size of a telescope and therefore is very limited in the amount of detail you can see. In fact, your eye cannot see details that are smaller than 1/30 of the diameter of the Sun (about 1 arcminute). Nearly all the light from the Sun emerges from a layer that is only about 400 km thick. What fraction is this of the diameter of the Sun? How does this compare with the ability of the human eye to resolve detail? Suppose we could see light emerging directly from a layer that was 300,000 km thick. Would the Sun appear to have a sharp edge?
23: Show that the statement that 92% of the Sun's atoms are hydrogen is consistent with the statement that 73% of the Sun's mass is made up of hydrogen, as found in [link]. (Hint: Make the simplifying assumption, which is nearly correct, that the Sun is made up entirely of hydrogen and helium.)
24: From Doppler shifts of the spectral lines in the light coming from the east and west edges of the Sun, astronomers find that the radial velocities of the two edges differ by about 4 km/s, meaning that the Sun's rotation speed at its equator is 2 km/s. Find the approximate period of rotation of the Sun in days. The circumference of a sphere is given by 2πR, where R is the radius of the sphere.
25: Assuming an average sunspot cycle of 11 years, how many revolutions does the equator of the Sun make during that one cycle? Do higher latitudes make more or fewer revolutions compared to the equator?
26: This chapter gives the average sunspot cycle as 11 years. Verify this using [link].
27: The escape velocity from any astronomical object can be calculated as $$v_{\text{escape}} = \sqrt{2GM/R}$$. Using the data in Appendix E, calculate the escape velocity from the photosphere of the Sun. Since coronal mass ejections escape from the corona, would the escape velocity from there be more or less than from the photosphere?
28: Suppose you observe a major solar flare while astronauts are orbiting Earth. Use the data in the text to calculate how long it will be before the charged particles ejected from the Sun during the flare reach them.
29: Suppose an eruptive prominence rises at a speed of 150 km/s. If it does not change speed, how far from the photosphere will it extend after 3 hours? How does this distance compare with the diameter of Earth?
30: From the information in [link], estimate the speed with which the particles in the CME in parts (c) and (d) are moving away from the Sun.
BCIT Astronomy 7000: A Survey of Astronomy by OpenStax is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
Vacuum-assisted evaporative concentration combined with LC-HRMS/MS for ultra-trace-level screening of organic micropollutants in environmental water samples
Jonas Mechelke, Philipp Longrée, Heinz Singer & Juliane Hollender
Analytical and Bioanalytical Chemistry, volume 411, pages 2555–2567 (2019)
Vacuum-assisted evaporative concentration (VEC) was successfully applied and validated for the enrichment of 590 organic substances from river water and wastewater. Different volumes of water samples (6 mL wastewater influent, 15 mL wastewater effluent, and 60 mL river water) were evaporated to 0.3 mL and finally adjusted to 0.4 mL. A 0.1-mL aliquot of the concentrate was injected into a polar reversed-phase C18 liquid chromatography column coupled via electrospray ionization to high-resolution tandem mass spectrometry. Analyte recoveries were determined for VEC and compared against a mixed-bed multilayer solid-phase extraction (SPE). Both approaches performed equally well (≥ 70% recovery) for a vast number of analytes (n = 327), whereas certain substances were especially amenable to enrichment by either SPE (e.g., 4-chlorobenzophenone, logDow,pH7 4) or VEC (e.g., TRIS, logDow,pH7 − 4.6). Overall, VEC was more suitable for the enrichment of polar analytes, although considerable signal suppression (up to 74% in river water) was observed for the VEC-enriched sample matrix. Nevertheless, VEC allowed for accurate and precise quantification down to the sub-nanogram per liter level and required no more than 60 mL of the sample, as demonstrated by its application to several environmental water matrices. By contrast, SPE is typically constrained by high sample volumes ranging from 100 mL (wastewater influent) to 1000 mL (river water). The developed VEC workflow not only requires low labor cost and minimum supervision but is also a rapid, convenient, and environmentally safe alternative to SPE and highly suitable for target and non-target analysis.
Organic contaminants (OCs) are constantly emitted into the aquatic environment with urban wastewater (WW), industry, and agriculture as the major sources [1]. Their potential (eco-)toxicological risk to humans, aquatic organisms, or whole ecosystems [2, 3] at the nanogram to microgram per liter level has caused extensive research activities over the last decades. Relevant nonpolar OCs are largely known, widely monitored, and regulated [4] but this barely applies to polar OCs that are highly mobile in the aquatic environment. Polar OCs, if persistent and widely emitted, have a significant potential to accumulate in the water cycle [5]. They are collectively referred to as PMOCs, i.e., persistent mobile OCs, with the anti-diabetic drug metformin being one prominent example along with its transformation product (TP), guanylurea.
Monitoring and regulation gaps are both linked to underlying analytical issues, i.e., polar OCs have the potential to go unnoticed as they are hardly amenable to state-of-the-art liquid chromatography mass spectrometry (LC-MS) workflows currently in wide use for multiresidue trace organic analysis. These workflows often rely on pre-concentration by offline solid-phase extraction (SPE) with a single conventional sorbent material (e.g., C8, C18, mixed-mode) and LC on a reversed-phase (RP) stationary phase column [6,7,8]. While this combination is applicable to a wide range of moderately polar to nonpolar OCs, its suitability for highly polar OCs is limited [5]. Recent approaches that bypass or minimize this shortcoming include the following: (i) vacuum-assisted evaporative concentration (VEC) to dryness with subsequent hydrophilic interaction liquid chromatography (HILIC) [9], (ii) freeze-drying with subsequent mixed-mode LC [10], (iii) freeze-drying followed by (HILIC)-SPE and serial RPLC-HILIC or supercritical fluid chromatography (SFC) on a HILIC column [11], (iv) mixed-bed multilayer SPE optimized for retention of polar OCs with subsequent polar RPLC [9, 12,13,14], or (v) large-volume direct injection [15, 16]. Apart from these methods, chromatographic retention of polar OCs can additionally be enhanced by ion chromatography (e.g. [17, 18]), two-dimensional LC approaches (e.g. [19,20,21]), or parallel LC, e.g., HILIC parallel to RPLC with post-column combination of eluents [22].
Established workflows for multiresidue trace analysis of very polar OCs are few but even fewer generic workflows exist for the simultaneous analysis of very polar to nonpolar OCs, especially when a fast, automated, reproducible, and (ultra-)sensitive analysis from a small sample volume is required. The aim of this work was to develop such workflow that covers the enrichment of OCs from different aqueous environmental matrices. The approach employs a single enrichment step using VEC, followed by large-volume injection (LVI) and chromatography on a polar RPLC column, coupled to high-resolution tandem mass spectrometry (HRMS/MS) via an electrospray ionization (ESI) interface. The major advantage of VEC is to bypass potential SPE pitfalls such as limited sorption capacity, analyte break-through during sample loading, unwanted elution of analytes during the wash step, incomplete analyte elution during the elution step, loss of analytes during the drying step, and analyte loss during nitrogen blow-down of the SPE extract. In contrast with other VEC approaches (e.g., see (i) above [9, 23, 24]), evaporation was not performed to dryness but to a residual volume of roughly 0.3 mL to avoid irreversible precipitation. To identify VEC workflow limitations especially for highly polar and nonpolar OCs, the workflow was validated for the enrichment of 590 substances with logDow,pH7, i.e., the pH-dependent octanol-water distribution coefficient at pH 7, between − 14 (highly polar) and 8 (nonpolar). The aqueous environmental matrices included in this study ranged from a seemingly simple matrix (river water) to highly complex and "dirty" matrices such as wastewater influent (IWW) and effluent (EWW). To our knowledge, this is the first time that VEC was tested and validated for a large and diverse suite of OCs.
Reference standards (STD) and isotope-labeled internal standards (IS) were purchased from CDN Isotopes (Canada), Dr. Ehrenstorfer (Germany), HPC Standards (Germany), LGC Standards (Switzerland), Molcan (Canada), MolPort (Latvia), Monsanto (Belgium), Novartis (Switzerland), Riedel-de-Häen (Germany), Sigma-Aldrich (Switzerland), or Toronto Research Chemicals (Canada) at purities ≥ 95% (analytical grade). NANOpure™ water (NPW) was generated using a lab water purification system (D11911, Barnstead/Thermo Scientific, USA). Methanol (MeOH) and ethanol were of LC-MS grade (Optima™, Fisher Scientific, Switzerland), ammonia (25% by weight) and formic acid of analytical grade (≥ 98%, Merck, Germany), and ethyl acetate of HPLC grade (99.8%, Sigma-Aldrich, Switzerland). STD and IS stock solutions (1 or 0.1 mg/mL) were prepared in appropriate solvents and combined as mixtures. These mixtures were then combined as spike solutions and subsequent dilutions were made in ethanol. An exhaustive substance list can be found in the Electronic Supplementary Material (see ESM 2 Table S6).
Sample collection
Wastewater samples were taken at Wüeri wastewater treatment plant in Regensdorf, Switzerland. Sampling points were post primary clarification (IWW) and after the biological treatment (EWW). Surface water (SW) was collected at Chriesbach, Switzerland. All waters were grab sampled on February 24, 2016, and stored at 4 °C until use the following day.
Vacuum-assisted evaporative concentration
SW and WW samples were equilibrated to room temperature, shaken thoroughly, and left to stand for 30 min to allow for settling of particles. Hence, no sample filtration or other sample treatment was applied prior to evaporation to minimize sample manipulation (see the "VEC workflow implications" section). Assuming the density of water at 20 °C, IWW (6 mL), EWW (15 mL), SW (60 mL), and NPW (60 mL, for the preparation of calibration standards) were carefully decanted and weighed into BUCHI™ glass vials (0.3 mL residual volume, 046069, BÜCHI Labortechnik AG, Switzerland). Depending on the validation experiment, samples were fortified with STD prior to VEC, after VEC, or not at all. Likewise, IS were spiked prior to or after VEC. Water samples and calibration standards were then evaporated at 55 °C, 20 mbar and 200 to 300 orbital movements per minute using a vacuum-assisted evaporation system (see ESM 1 Fig. S1, Syncore® Analyst R-12, BÜCHI Labortechnik AG, Switzerland). Depending on sample size, parallel evaporation from up to 12 glass vials down to a residual volume of approx. 0.3 mL lasted 240 min (60 mL) or 80 min (≤ 15 mL) (see ESM 1 Table S1). A manual glass vial wall rinse involving 2 × 0.75 mL MeOH followed by 1 mL NPW was implemented after 210 min or 50 min, respectively, even though the device was operated with a flushback module. Sample concentrates were transferred to flat bottom glass inserts (0.5 mL, 110506, BGB Analytik AG, Switzerland), visually adjusted to 0.4 mL using NPW, cooled down to 4 °C, and centrifuged at 10,621g for 4 min at room temperature (5427 R, Eppendorf, Switzerland). During centrifugation, glass inserts were kept inside microcentrifuge tubes (0030120094, Eppendorf, Switzerland). Glass pipettes were used to transfer supernatants to conical glass inserts (0.35 mL, 110502, BGB Analytik AG, Switzerland). The latter were kept in 2 mL amber glass LC vials at 4 °C until analysis. Enrichment factors were 15 (IWW), 37.5 (EWW), and 150 (NPW/SW).
Solid-phase extraction
Prior to SPE, SW and WW samples were adjusted to pH 6.5 by adding ammonium acetate buffer (1 M, 1 mL), formic acid, and ammonia and subsequently filtered through glass fiber filters (GF/F, Whatman, UK). Depending on the validation experiment, filtered IWW (100 mL), filtered EWW (250 mL), filtered SW (1000 mL), and (unfiltered) NPW (1000 mL) were fortified with STD either prior to SPE, after SPE, or not at all; IS were added after SPE ("Absolute recovery of VEC and SPE step" section). SPE was performed over a cartridge containing 200 mg Oasis HLB (Waters, USA) as a top layer, a 350-mg mid layer of a 1:1:1.5 (w/w/w) mixture of Strata X-AW, Strata X-CW (both: Phenomenex, USA), and Isolute ENV+ (Biotage AB, Sweden), and a 200-mg bottom layer of ENVI-carb™ (Supelco, USA). Layers were separated by polyethylene frits (20 μm, Supelco, USA). A scheme of the assembled SPE cartridge is provided in ESM 1 (Fig. S2); the SPE steps are described in detail elsewhere [12]. Briefly, after conditioning (5 mL MeOH, 10 mL NPW) and sample loading, the cartridge was eluted upside-down with 6 mL alkaline (2% ammonia, v/v) and 3 mL acidic (1.7% formic acid, v/v) ethyl acetate/MeOH mixture (50:50, v/v) and finally with 2 mL MeOH. Eluates were combined, evaporated to 0.1 mL by nitrogen blow-down (40 °C), reconstituted in NPW to a final volume of 1 mL, and centrifuged at 3020g for 45 min at 20 °C (Megafuge 1.0R, Heraeus), before supernatants were transferred into amber glass LC vials. Resulting enrichment factors were 100 (IWW), 250 (EWW), and 1000 (NPW/SW).
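As a plausibility check of the design choice that both workflows place the same volume of original sample on the column (compare ESM 1 Table S2), the short Python sketch below multiplies the nominal enrichment factors by the respective injection volumes; the numbers are taken from the two sections above, whereas the variable names are illustrative only.

# nominal enrichment factors (sample volume / final volume)
vec_factors = {"IWW": 6 / 0.4, "EWW": 15 / 0.4, "SW/NPW": 60 / 0.4}       # 15x, 37.5x, 150x
spe_factors = {"IWW": 100 / 1.0, "EWW": 250 / 1.0, "SW/NPW": 1000 / 1.0}  # 100x, 250x, 1000x

vec_injection_ml = 0.100   # 100 μL of VEC concentrate injected
spe_injection_ml = 0.015   # 15 μL of SPE extract injected

for matrix in vec_factors:
    vec_on_column = vec_factors[matrix] * vec_injection_ml   # mL of original sample on column
    spe_on_column = spe_factors[matrix] * spe_injection_ml
    print(matrix, round(vec_on_column, 2), round(spe_on_column, 2))
# IWW: 1.5 vs 1.5 mL, EWW: 3.75 vs 3.75 mL, SW/NPW: 15.0 vs 15.0 mL

For all three matrices the two products agree (1.5, 3.75, and 15 mL of original sample on column), which is what makes the direct recovery comparison in the following sections meaningful.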
Instrumental analysis and data processing
SPE extract (15 μL) or VEC concentrate (100 μL), both corresponding to the same on-column sample volume (see ESM 1 Table S2), were injected into a polar RPLC C18 column (Atlantis T3, 3 × 150 mm, 3 μm; Waters, USA). NPW and MeOH, both acidified with 0.1% formic acid, were used as eluents for the chromatographic gradient from 5 to 95% MeOH in 17.5 min (see ESM 1 Table S3). Detection was achieved by HRMS/MS on a QExactive Plus mass spectrometer (Thermo Scientific, USA). Mass spectra were acquired in full-scan mode at a mass resolution of 140,000 (FWHM at m/z 200), with subsequent data-dependent MS2 (Top5, mass resolution 17,500). Separate runs were carried out for positive and negative ESI. TraceFinder (version 4.1 EFS, Thermo Scientific, USA) was used for automated targeted detection and integration of chromatographic analyte peaks by the ICIS algorithm at a mass tolerance of 5 ppm and with a minimum of three data points per peak. All integrations were reviewed manually. Inherent to the internal standard method, an IS was assigned to each analyte. Ideally, a matching IS was selected. If a matching IS was not available, an IS with a similar retention time (non-matching) was employed instead. Quantification was based on 1/x- or 1/x2-weighted linear or quadratic calibration curves generated by fitting analyte concentrations (x) against STD-to-IS peak area response ratios (RR, y), without forcing the fit through zero. To detect unknown compounds, raw data of interest was submitted to a Compound Discoverer (version 2.1, Thermo Scientific, USA) non-target workflow (see ESM 1 section S6 for details).
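To illustrate the 1/x- and 1/x²-weighted calibration mentioned above, a minimal Python sketch is given below; the concentration levels mimic the NPW calibration series, but the response ratios, the noise level, and the helper function are synthetic and not taken from the study.

import numpy as np

def weighted_fit(conc, rr, weighting="1/x2", degree=1):
    """Weighted least-squares calibration fit of response ratio (RR) vs. concentration.

    The weighting refers to the squared residuals; np.polyfit expects weights on the
    unsquared residuals, hence the square root below. degree 1 = linear, 2 = quadratic.
    """
    conc = np.asarray(conc, float)
    rr = np.asarray(rr, float)
    w = 1.0 / conc if weighting == "1/x" else 1.0 / conc**2
    return np.poly1d(np.polyfit(conc, rr, deg=degree, w=np.sqrt(w)))

# hypothetical calibration points (ng/L, matrix blank excluded) and synthetic RRs
conc = np.array([0.1, 0.5, 1, 5, 10, 50, 100, 500, 1000])
rng = np.random.default_rng(0)
rr = 0.004 * conc * (1 + rng.normal(0, 0.02, conc.size))   # ~2% proportional noise

curve = weighted_fit(conc, rr, weighting="1/x2", degree=1)
slope, intercept = curve.coeffs                             # coefficients of the degree-1 fit
print((0.12 - intercept) / slope)                           # back-calculated conc. for RR = 0.12, ~30 ng/L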
Method comparison and validation
In the following sections, "workflow" refers to VEC or SPE with subsequent instrumental analysis. First, to enable comparison between VEC and SPE, absolute recoveries over the VEC/SPE step (AR-VEC/AR-SPE) and matrix effects during ESI of IS in VEC concentrates and SPE extracts (ME-ESI-VEC/SPE) were determined. Validation parameters determined for the (entire) VEC workflow were absolute recovery (AR-W), method limit of quantification (MLOQ), accuracy, and precision.
STD spike levels were adjusted to sample matrix (NPW/SW, 200 ng/L; IWW/EWW, 1000 ng/L), compound class (PFCs, × 0.1; x-rays, × 10), or were compound specific (metformin, 5-methylbenzotriazole, benzotriazole, caffeine, sweeteners: NPW, 200 ng/L; SW, 1000 ng/L; IWW/EWW, 5000 ng/L). 171 IS (80 ng of each, PFCs, × 0.1; x-rays, × 10) were added. If not stated otherwise, the determination of validation parameters was based on three (in certain cases two) replicates and the precision was estimated by error propagation.
Calibration, method quantification limits in NPW, accuracy, and precision
A 10-point STD calibration series was prepared over a mass concentration range from 0 (matrix blank) to 1000 ng/L (0, 0.1, 0.5, 1, 5, 10, 50, 100, 500, and 1000 ng/L; PFCs, ×0.1; x-rays, ×10) by the addition of STD and IS to 60 mL NPW in glass vials. This was followed by VEC and instrumental analysis. The MLOQ in NPW was determined as the lowest analyte concentration yielding a chromatographic peak of at least three data points in full-scan mode, with a signal-to-noise ratio greater than or equal to 10, among at least two replicates, and an RR of at least twice the RR in the matrix blank. Analyte concentrations in SW (60 mL), EWW (15 mL), and IWW (6 mL) were quantified against the calibration series in NPW (60 mL), resulting in volume factors (VF) of 4 and 10 for EWW and IWW, respectively. To determine accuracy (spike recovery) and the associated precision (%RSD among ≥ 2 replicates), IWW, EWW, SW, and NPW were spiked with STD (spiked) or not (unspiked). STD and IS (all samples) were both added prior to VEC. Analyte concentrations in spiked and unspiked samples were then quantified. If the concentration in a spiked sample was at least twice the unspiked (background) concentration, and quantification was possible among at least two replicates, the accuracy was calculated according to Eq. (1) using average calculated analyte amounts and the respective VF.
$$ \%\mathrm{accuracy}\ \left(\mathrm{STD}\right)=\frac{\left(\mathrm{calc}.\mathrm{amount}\ {\left(\mathrm{STD}\right)}_{\mathrm{spiked}}-\mathrm{calc}.\mathrm{amount}\ {\left(\mathrm{STD}\right)}_{\mathrm{unspiked}}\right)\times \mathrm{VF}\times 100}{\mathrm{theoretical}\ \mathrm{spiked}\ \mathrm{amount}} $$
Analyte concentrations in unspiked environmental samples are reported in the "Application to environmental samples" section. Concentrations were only considered if MLOQs ("Absolute recovery of entire VEC workflow, method quantification limits in environmental matrices, and matrix effects during ESI" section) and accuracy were available, the latter to correct a concentration if a non-matching IS was assigned.
Absolute recovery of VEC and SPE step
To determine absolute analyte recoveries over VEC (AR-VEC) and SPE (AR-SPE), NPW and water samples (IWW, EWW, SW) were spiked with STD prior to VEC/SPE (pre), after VEC/SPE (post), or not at all (unspiked). In all cases, IS were added after VEC/SPE to fix RR and compensate for signal changes during subsequent instrumental analysis (primarily during ESI). Recoveries were calculated according to Eq. (2), inserting average RR among ≥ 2 replicates.
$$ \%\mathrm{AR}-\mathrm{VEC}/\mathrm{SPE}\left(\mathrm{STD}\right)=\left(\frac{\mathrm{RR}{\left(\mathrm{STD}\right)}_{\mathrm{NPW}\ \mathrm{or}\ \mathrm{matrix}}^{\mathrm{pre}}-\mathrm{RR}{\left(\mathrm{STD}\right)}_{\mathrm{NPW}\ \mathrm{or}\ \mathrm{matrix}}^{\mathrm{unspiked}}}{\mathrm{RR}{\left(\mathrm{STD}\right)}_{\mathrm{NPW}\ \mathrm{or}\ \mathrm{matrix}}^{\mathrm{post}}-\mathrm{RR}{\left(\mathrm{STD}\right)}_{\mathrm{NPW}\ \mathrm{or}\ \mathrm{matrix}}^{\mathrm{unspiked}}}\right)\times 100\% $$
Absolute recovery of entire VEC workflow, method quantification limits in environmental matrices, and matrix effects during ESI
Depending on whether a matching or non-matching IS was assigned, absolute recoveries of analytes over the entire VEC workflow were calculated as analyte signal recovery (AR-W) according to Eq. (3) or (4) by comparison of either IS or STD peak areas in enriched sample matrices with the respective peak areas in NPW. In either case, IS and STD were spiked prior to VEC. AR-W integrates effects of matrix constituents during sample manipulation, the concentration step (VEC), and instrumental analysis.
$$ \%\mathrm{AR}-\mathrm{W}{\left(\mathrm{STD}\right)}_{\mathrm{matching}\ \mathrm{IS}}=\left(\frac{\mathrm{average}\ \mathrm{peak}\ \mathrm{area}\ {\left(\mathrm{IS}\right)}_{\mathrm{matrix}}}{\mathrm{average}\ \mathrm{peak}\ \mathrm{area}\ {\left(\mathrm{IS}\right)}_{\mathrm{CAL}\ \mathrm{series}\ \mathrm{in}\ \mathrm{NPW}}}\right)\times 100\% $$
$$ \%\mathrm{AR}-\mathrm{W}{\left(\mathrm{STD}\right)}_{\mathrm{non}-\mathrm{matching}\ \mathrm{IS}}=\left(\frac{\left(\mathrm{peak}\ \mathrm{area}\ {\left(\mathrm{STD}\right)}_{\mathrm{matrix}}^{\mathrm{spiked}}-\mathrm{peak}\ \mathrm{area}{\left(\mathrm{STD}\right)}_{\mathrm{matrix}}^{\mathrm{unspiked}}\ \right)\times \mathrm{VF}}{\mathrm{peak}\ \mathrm{area}{\left(\mathrm{STD}\right)}_{\mathrm{NPW}}^{\mathrm{spiked}\ \mathrm{amount}}}\right)\times 100\% $$
MLOQs in environmental sample matrices were derived from MLOQs in NPW, AR-W and volume factors ("Calibration, method quantification limits in NPW, accuracy, and precision" section) according to Eq. (5).
$$ \mathrm{MLOQ}\left(\mathrm{matrix}\right)=\frac{{\mathrm{MLOQ}}_{\mathrm{NPW}}\times \mathrm{VF}}{\mathrm{AR}-\mathrm{W}\left(\mathrm{matrix}\right)} $$
Matrix effects (ME) during ESI of IS in VEC concentrates (ME-ESI-VEC) and SPE extracts (ME-ESI-SPE) were determined by Eq. (6) that compares average peak areas of IS post-spiked into environmental samples ("Absolute recovery of VEC and SPE step" section) with average peak areas of IS post-spiked into enriched NPW. A ME of 100% indicates no effect during ESI, a ME below 100% indicates ionization suppression, and a ME above 100% indicates ionization enhancement [25].
$$ \%\mathrm{ME}-\mathrm{ESI}-\mathrm{VEC}/\mathrm{SPE}\left(\mathrm{IS}\right)=\left(\frac{\mathrm{average}\ \mathrm{peak}\ \mathrm{area}\ {\left(\mathrm{IS}\right)}_{\mathrm{matrix}}^{\mathrm{post}\ \mathrm{VEC}/\mathrm{SPE}}}{\mathrm{average}\ \mathrm{peak}\ \mathrm{area}\ {\left(\mathrm{IS}\right)}_{\mathrm{NPW}}^{\mathrm{post}\ \mathrm{VEC}/\mathrm{SPE}}}\right)\times 100\% $$
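Equations (2), (5), and (6) are simple ratio calculations and can be evaluated directly from averaged peak areas or response ratios; the following Python sketch illustrates this with hypothetical input values (all numbers and variable names are made up for demonstration).

def absolute_recovery(rr_pre, rr_post, rr_unspiked=0.0):
    """Eq. (2): recovery over the VEC/SPE step from averaged response ratios (RR)."""
    return 100.0 * (rr_pre - rr_unspiked) / (rr_post - rr_unspiked)

def mloq_in_matrix(mloq_npw, volume_factor, ar_w_percent):
    """Eq. (5): scale the MLOQ in NPW by the volume factor and the signal recovery AR-W."""
    return mloq_npw * volume_factor / (ar_w_percent / 100.0)

def matrix_effect(area_is_matrix, area_is_npw):
    """Eq. (6): below 100% = ionization suppression, above 100% = enhancement."""
    return 100.0 * area_is_matrix / area_is_npw

print(absolute_recovery(rr_pre=0.95, rr_post=1.00, rr_unspiked=0.05))    # ~94.7%
print(mloq_in_matrix(mloq_npw=1.0, volume_factor=10, ar_w_percent=72))   # ~13.9 ng/L
print(matrix_effect(area_is_matrix=2.6e6, area_is_npw=1.0e7))            # 26%, i.e., 74% suppression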
Substance selection
Five hundred ninety OCs (see ESM 2 Table S6) were selected to test and validate the suitability of VEC for the concentration of water samples prior to multiresidue trace organic analysis by LVI-polar RPLC-ESI-HRMS/MS. Substance selection criteria include environmental relevance, structural diversity, and physicochemical properties, i.e., to cover a wide range of analyte polarities (logDow,pH7 − 14 to 8), different speciations (137 anionic, 130 cationic, 50 zwitterionic, 273 neutral), masses (102 to 916 Da, two analytes > 1000 Da), functional groups, and compound classes (Fig. 1, right). In the literature, polarity categories, such as nonpolar, polar, and very or highly polar, are often defined by different logDow ranges (e.g., [9, 11]). In this work, OCs with a predicted logDow,pH7 ≤ 1 (JChem for Excel, version 18.8.0.253, ChemAxon) and a chromatographic retention time ≤ 12 min, i.e., approx. four times the column dead time, are considered polar. Hence, this classification forms a subset of 118 compounds (indicated in Fig. 1, left). In the following sections, "logD" always refers to logDow,pH7.
Substance selection. Left: chromatographic retention time (RT) versus predicted logD of the 590 organic contaminants selected for method validation. Symbols indicate the predicted major ion species at pH 7 either as cationic (C), anionic (A), neutral (N), or zwitterionic (Z). The polar chemical space includes 118 analytes with a logD ≤ 1 and a RT ≤ 12 min. Exact masses are displayed as cumulative distribution function (CDF) in the top left corner. The logD distribution is shown as histogram in the top left margin. Right: overview of number of parent compounds and transformation products (TPs) in different compound classes. See ESM 2 Table S6 for detailed substance properties
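Operationally, the polar chemical space defined above is a two-condition filter on the substance table; a minimal pandas sketch with a hypothetical excerpt of such a table (column names and retention times are illustrative) could look as follows.

import pandas as pd

# hypothetical excerpt of a substance table (full list in ESM 2 Table S6)
substances = pd.DataFrame({
    "name":   ["metformin", "TRIS", "telmisartan", "caffeine"],
    "logD":   [-5.9, -4.6, 5.3, -0.5],
    "RT_min": [3.1, 2.5, 21.0, 9.8],
})

polar_space = substances[(substances["logD"] <= 1) & (substances["RT_min"] <= 12)]
print(polar_space["name"].tolist())   # ['metformin', 'TRIS', 'caffeine']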
VEC workflow implications
VEC workflow fundamentals were adapted from an in-house mixed-bed multilayer SPE approach [12] such that enrichment factors and injection volume were adjusted to apply the same on-column sample volume (see ESM 1 Table S2). This facilitated a direct comparison of the two approaches. Prior to SPE, water samples are typically adjusted in pH and filtered. To minimize sample manipulation prior to VEC, reduce the risk of analyte loss, avoid contamination, and improve safe handling of sensitive analytes, samples were neither filtered nor adjusted in pH. Moreover, to ensure a soft evaporation process and to avoid thermal decomposition of analytes, the platform temperature during VEC was set to 55 °C. When a sample is evaporated to dryness, heat is no longer dissipated by evaporation, but directly transferred to the sample precipitate and ultimately to the analyte [26]. To avoid irreversible precipitation and heat stress on analytes, VEC was stopped at a residual volume of roughly 0.3 mL (instead of being taken to dryness) by means of a cooled (5 °C) glass vial appendix. Appendix cooling not only ensured a residual volume but also potentially enhanced the stability of analytes in the VEC concentrate. By adjusting the concentrate to 0.4 mL, three injections (100 μL) were feasible and control over the final volume was established. As a last step before instrumental analysis, suspended particles were removed by centrifugation instead of filtration as only a small volume of purely aqueous VEC concentrate was available.
Comparison of VEC and SPE
The subsequent sections discuss the extent to which analytes or analyte signals were affected. First, the differences between VEC and SPE as enrichment steps ("Absolute recoveries—VEC against SPE" section) are explained and then a detailed discussion on the presence of sample matrix within VEC concentrates and SPE extracts during ESI ("Matrix effects during ESI" section) is provided.
Absolute recoveries—VEC against SPE
Absolute recoveries derived for VEC and SPE, the associated precisions, and the number of compounds for which both could be calculated are summarized in Table 1 (see ESM 2 Table S6 for details). Reasons for non-computable recoveries include (i) a very high background concentration (the RR in the spiked sample was not at least twice the RR in the unspiked sample), (ii) complete analyte loss over SPE/VEC (peaks in post- but not in pre-spiked samples), (iii) no detectable peaks at all (neither in pre- nor post-spiked samples, hinting at a chromatographic issue), or (iv) irreproducible peaks (peaks were not detected among sufficient replicates). Taking the uncertainties of the analytical steps into account, an absolute recovery between 70 and 130% was considered acceptable. Most recoveries fall into this range (Table 1). In particular, median AR-SPE lies between 91 and 93% with a high precision between 3 and 5%. For AR-VEC, medians lie between 100 and 126% with the associated precision being slightly lower between 7 and 11%. The high recoveries (167%) and the low precision in the case of SW (32%) may be explained by the standard addition procedure, i.e., for the AR experiments, IS (all samples) and STD (only post-spiked samples) had to be added after VEC into the lower part of the glass vials, followed by a manual vial wall rinse and evaporation to the final residual volume (0.3 mL). VEC resulted in the formation of precipitates (see ESM 1 Fig. S3), which was particularly extensive for SW samples. SW precipitates potentially promoted sorption, caused peak area variations among replicates and ultimately a lower precision and recoveries greater than 130%. For example, IS peak areas in SW varied overall (median) by 25% when added after VEC compared to 10% when added before. In NPW, EWW, and IWW, this variation was far less pronounced (post vs. pre: NPW, 11% vs. 5%; EWW, 9% vs. 5%; IWW, 6% vs. 9%), as was the formation of precipitates. When the VEC workflow was applied for quantification of analytes in SW ("VEC validation parameters" section), precipitates did not interfere with the analysis, since IS were added prior to VEC. For recovery experiments with SW, a lower enrichment factor could be beneficial and reduce precipitate formation.
Table 1 Absolute analyte recoveries (median) and associated precisions (median) over the VEC and SPE step
Overall, SPE and VEC performed equally well (AR ≥ 70%) over all matrices for a large number of analytes (n = 327). Hence, the following sections will focus on OCs that were either especially or exclusively amenable to enrichment by either SPE or VEC.
Especially amenable to VEC were ten analytes (eight shown in Fig. 2), most of them polar (median logD − 0.2, median RT 10 min): 1,3-dimethyl-2-imidazolidinone, 1-propanesulfonate, 4-aminopyrine, N-(2,4-dimethylphenyl)formamide, N-(4-aminophenyl)-N-methylacetamide, nicotine, ranitidine, sulfanilic acid, cilastatin, and cyazofamid. Their SPE recoveries were < 70% in all matrices, whereas their VEC recoveries were ≥ 70%. By contrast, seven analytes (Fig. 2), mostly nonpolar and either cationic or neutral (median logD 3.8, median RT 20.5 min) were more amenable to enrichment by SPE than VEC (VEC recoveries were < 70% in all matrices, while SPE recoveries were ≥ 70%): tebutam, diazinon, benzophenone-3, galaxolidone, iminostilbene, ticlopidine, and nordeprenyl.
Analytes exclusively (indicated by asterisk) or especially amenable to enrichment by VEC (gray shade, only compounds of polar space shown) or SPE (polar space and nonpolar analytes) from all tested matrices. Molecular structures were created by MarvinSketch (version 18.8.0, ChemAxon) as part of the Jchem for Excel plugin (version 18.8.0.253, ChemAxon)
Seven compounds were exclusively amenable to VEC and were mostly polar (median logD 0.2): TRIS, 6-aminopenicillanic acid, primaquine, perfluorohexanoic acid, lansoprazole, diazoxon, and pinoxaden. Their recoveries could only be determined for VEC (all matrices) but not SPE for several reasons (see above). An example is TRIS (Fig. 2), a polar OC with a logD of − 4.6 that was recovered by VEC from all matrices (median 115%) but lost during SPE. TRIS is an often used buffer substance that is expected to be removed during sample clean-up by SPE prior to LC-MS. By contrast, the only substance that was exclusively recovered by SPE (all matrices, median 85%) but lost during VEC (all matrices) was 4-chlorobenzophenone (Fig. 2), a nonpolar OC with a logD of 4. To investigate whether the loss of 4-chlorobenzophenone over VEC was related to its Henry's law constant (HLC), HLC were estimated (25 °C, bond contribution methodology, see ESM 2) for 491 of the 590 organic substances using HenryWin (v3.20, embedded in EPI Suite v4.11, US EPA). No correlation between HLC and recovery over VEC or SPE became evident. However, the elevated HLC (17th highest of 491) of 4-chlorobenzophenone (1.4 × 10−6 atm m3/mol) could still be a possible explanation for its loss over VEC, but this remains speculative since very few even more volatile substances (according to estimated HLC) were recovered over VEC.
Considering the 118 analytes of the polar chemical space (logD ≤ 1, RT ≤ 12 min) separately, 110 were recovered by both workflows from at least one spiked matrix (including NPW), demonstrating the excellent performance of both workflows. For six polar analytes (see ESM 1 section 7; logD − 14 to − 3, RT < 5 min) recoveries could only be determined for VEC in NPW (lactitol, 2-amino-1,5-naphthalenedisulfonic acid, acamprosat), in a few matrices (1,3-propylenediaminotetraacetic acid), or in all matrices (Fig. 2; TRIS, 6-aminopenicillanic acid). 1,3-Propylenediaminotetraacetic acid was the most polar among the selected substances (logD − 14). It was recovered from VEC concentrates of all matrices except SW. However, it was neither detected in post- nor pre-spiked SPE extracts, suggesting a chromatographic issue related to SPE extracts (same for acamprosat and 2-amino-1,5-naphthalenedisulfonic acid). By contrast, recoveries of two polar analytes could only be calculated for the SPE but not the VEC step (logD − 1.8 to − 1.9, RT < 6.2 min; NPW: allopurinol, SW: maleic hydrazide). The chromatography of selected polar OCs is shown in section S7 of ESM 1.
Of the 472 analytes outside the polar chemical space (logD > 1 and/or RT > 12 min), 285 were equally amenable to both workflows (≥ 70% recovery) in all matrices, six especially to SPE, one exclusively to SPE (4-chlorobenzophenone) (all Fig. 2), two especially to VEC (cilastatin, cyazofamid; logD − 3.7 and 1.8, zwitterionic and neutral) and five exclusively to VEC (logD − 1.2 to 5.1; primaquine, perfluorohexanoic acid, lansoprazole, diazoxon, pinoxaden). VEC and SPE recoveries of the other 172 OCs were analyte-specific and matrix-dependent with values between 70 and 130% for all analytes (median) and matrices except SW (see above, precipitate interference).
Overall, the recovery data suggests the suitability of both workflows for a wide range of analytes. VEC appears especially suitable for the enrichment of polar compounds, with a few limitations regarding individual volatile or nonpolar ones for which SPE showed a better performance. The good SPE performance including polar analytes is attributed to the combination of diverse sorbent materials that were pre-selected and highly tuned for the simultaneous extraction of polar and nonpolar analytes. However, SPE based on a single sorbent material is by far the most widely used approach for the concentration of water samples prior to LC-MS [6]. As part of the in-house mixed-bed multilayer SPE method development [13], different sorbent materials, i.e., Oasis HLB, Strata X-AW/-CW, Isolute ENV+, and ENVI-carb™ were evaluated individually for analyte recoveries over SPE. In this context, 418 analytes were investigated, of which 380 overlap with the 590 substances selected for VEC validation. This allowed the comparison of analyte recoveries between Oasis HLB (a single sorbent), mixed-bed multilayer SPE based on multiple sorbents and VEC in NPW. The overall number of recovered analytes (VEC = multiple sorbents > HLB, 380 = 380 > 356) and the number of analytes recovered ≥ 70% (VEC > multiple sorbents > HLB, 358 > 331 > 273) indicate a clear benefit of the two latter methods over SPE with a single sorbent (Oasis HLB). Moreover, median logD of analytes recovered ≤ 70% (SPE < multiple sorbents < VEC, − 0.7 < − 0.1 < 3.3) and ≥ 70% (SPE > multiple sorbents > VEC, 1.6 > 1.2 > 1.0) emphasize and confirm the overall suitability of VEC towards polar analytes.
Matrix effects during ESI
Matrix effects (ME) during ESI were determined for 170 IS in VEC concentrates and 171 IS in SPE extracts for SW, EWW, and IWW (see ESM 2 Table S7). ME of IS in VEC concentrates strongly depended on matrix-specific enrichment factors. Specifically, SW had the highest enrichment factor (150×) and the median ME accounted for 26%, i.e., 74% ionization suppression (Fig. 3), followed by EWW (37.5×; ME, 55%) and IWW (15×; ME, 60%). In SPE extracts, the ME remained constant throughout the matrices with median values between 63 and 72%. For VEC concentrates of EWW and IWW, ME (EWW, 55%; IWW, 60%) and AR-W ("VEC validation parameters" section; EWW, 62%; IWW, 72%) were similar, suggesting that signal suppression during ESI is the likely cause of signal loss throughout the VEC workflow. Despite practical issues associated with SW ("Absolute recoveries—VEC against SPE" section), AR-W and ME were also similar (ME, 26%; AR-W, 28%), further emphasizing the overall role of signal suppression during ESI in the analysis of VEC concentrates.
Matrix effects during ESI of IS in VEC concentrates and SPE extracts indicated as ionization suppression (S) and enhancement, i.e., | matrix effect − 100% |. Right margin: number of compounds, median over all compounds
VEC validation parameters
Absolute recoveries over the VEC workflow (AR-W, Fig. 4, top) could be calculated for 525 analytes in SW, 533 analytes in EWW, and 515 analytes in IWW. Interestingly, the presumably simplest matrix (SW) caused the largest analyte signal loss over all workflow steps (smallest median AR-W of 28%), followed by EWW (62%) and IWW (72%). This can be explained by the increased enrichment of matrix with increasing enrichment factor (SW > EWW > IWW), accompanied by increasing analyte signal loss (decreasing AR-W). The increased enrichment of matrix was also reflected in the extent of precipitates formed in the bottom part of the glass vials as precipitates were more pronounced in SW than in IWW and EWW (see ESM 1 Fig. S3). For the analysis of SPE extracts by LC-ESI-MS/MS, AR-W is typically more similar across matrices than it is for VEC. Obviously, during SPE (unlike VEC), analytes are not only enriched but are also extracted. Hence, matrix interferences are removed (to a certain extent).
Validation parameters of the VEC workflow, i.e., absolute recoveries over the entire VEC workflow (AR-W) calculated as analyte signal recovery (top), method quantification limits (MLOQ, middle), accuracy, and precision (bottom). Right margin: number of compounds, median over all compounds
MLOQs in NPW, SW, EWW, and IWW show median values of 1, 4, 8, and 15 ng/L, respectively (Fig. 4, middle). Of the 576 MLOQs in NPW, 216 (38%) were at the sub-nanogram per liter level (< 1 ng/L), followed by SW (9%), EWW (8%), and IWW (1%). Furthermore, 360 of 576 OCs demonstrated a linear range in NPW from the respective MLOQ to the highest calibration level, i.e., a linear fit was applied. Linear ranges of 216 OCs ceased below the highest calibration level and a quadratic fit was more suitable. MLOQs increased (SW < EWW < IWW) with decreasing nominal enrichment factors (SW > EWW > IWW) and were higher than expected in SW since signal suppression was more pronounced than in the other matrices (EWW, IWW). In addition, the VEC workflow performed well in terms of both accuracy (spike recovery in %) and precision (%RSD of spike recovery). Specifically, the median spike recoveries in SW, EWW, and IWW were close to 100% and precisions below 10% (Fig. 4, bottom).
Application to environmental samples
To further demonstrate the applicability to environmental samples, the 590 selected OCs ("Substance selection" section) were quantified in the unspiked SW, EWW, and IWW samples. One hundred twenty-one OCs were quantified in SW, 157 in EWW and 146 in IWW above the respective MLOQ. Of the quantified analytes, 51 belong to the polar chemical space (logD ≤ 1, RT ≤ 12 min) and 27 of these were detected in all matrices (Fig. 5). Of the analytes outside the polar chemical space, 73 (logD − 1.8 to 5.3, RT 10 to 22 min) were quantified in all matrices between 0.8 ng/L (N,N-didesmethylvenlafaxine) and 30 μg/L (caffeine, outside the calibration range). The least polar analytes were telmisartan, losartan, atazanavir, and propiconazole with a logD of 5.3, 5.1, 4.5, and 4.3, respectively.
Concentrations (log scale) of polar analytes (logD ≤ 1, RT ≤ 12 min, sorted by logD) quantified in all environmental samples. Concentrations outside the calibration range: metformin (IWW), guanylurea (SW/EWW), acesulfame (SW/EWW/IWW). Guanylurea: peak splitting is a chromatographic artifact. Error bars indicate the standard deviation. Gray bars are a visual aid for grouping analyte concentrations
Applicability of VEC to non-target screenings
The use of HRMS does not only allow for the targeted analysis of known OCs but also enables the screening for suspected or unknown (polar) OCs, as well as (polar) TPs formed in the environment or lab-scale studies. The aim of a non-target screening is to detect unknown compounds that differ from background. Such screening was applied to VEC concentrates and SPE extracts of NPW, IWW, and EWW to identify the workflow that is more suitable for the concentration of water samples prior to a non-target screening. Unspiked NPW was analyzed to detect compounds originating from NPW itself and the analytical workflow (e.g., contamination from glassware, SPE materials) (Fig. 6, left). Less compounds were detected in VEC concentrates of unspiked NPW compared to the respective SPE extracts (VEC 17,388, SPE 25,512), hinting at the introduction of contamination throughout the SPE procedure. IWW samples were processed to identify the workflow that provided the larger number of compounds at a comparable overall ionization suppression (approx. 40% in SPE extracts and VEC concentrates; "Matrix effects during ESI" section). Similar to NPW, less compounds were observed in VEC concentrates of IWW samples (VEC 27,637, SPE 42,290). After blank subtraction (IWW minus NPW overlap), 23,777 compounds could be assigned to IWW VEC concentrates and 35,374 to IWW SPE extracts. Besides contamination, another potential explanation for this difference is that a considerable number of compounds in IWW SPE extracts were of nonpolar or volatile nature. Losses due to sorption to glass surfaces, precipitation, or volatilization during VEC are possible. To test this, compounds unique to VEC concentrates of IWW (15,541) and SPE extracts of IWW (27,518) were investigated for heteroatom content and retention time distribution. Heteroatom content among the suggested molecular formulae (VEC 76% by weight, SPE 71% by weight) and retention time distributions (47% of compounds in VEC IWW concentrates fall into the polar chemical space in terms of RT, i.e., ≤ 12 min compared to 38% of the compounds in SPE IWW extracts) (Fig. 6, right) both indicate that compounds unique to VEC IWW are more polar than compounds unique to SPE IWW. Thus, these results further suggest the potential and applicability of VEC for (non-target) screenings of unknown polar OCs or TPs.
Fig. 6 Comparison of compound numbers in NPW and IWW after enrichment via VEC and SPE (left). Right: cumulative distribution function (CDF) of retention times (RT) among compounds unique to VEC IWW concentrates (15,541) and SPE IWW extracts (27,518)
Conclusions and outlook
The developed VEC workflow is a valuable, environmentally friendly (minimal need for organic solvents) alternative to SPE that requires only minimal laboratory supervision, at a lower cost. Its future application should be considered under the following conditions: (1) if the analytes of interest are (very) polar while the LC in use still performs well, (2) if the sample volume is limited, and/or (3) if low LOQs are desired but not achieved by direct sample injection, i.e., without enrichment. For the tested set of compounds, the VEC workflow performs exceptionally well despite using "only" a (polar) RPLC column. A mixed-mode LC column may further improve analyte retention and expand the analytical space towards even more polar analytes. To exploit HILIC, VEC concentrates need to be made HILIC-compatible, requiring reconstitution in organic solvent (e.g., acetonitrile) at the expense of the enrichment factor. Alternatively, evaporation could be carried to dryness with subsequent reconstitution in organic solvent, but at the risk of irreversible precipitation and loss of heat-sensitive (and volatile) analytes.
Our special thanks go to Bernadette Vogler (Eawag) for providing unpublished SPE data of her master thesis and photographs of the mixed-bed multilayer SPE cartridge. Furthermore, we thank Maricor Arlos (Eawag) for proofreading the manuscript. Finally, we gratefully acknowledge ChemAxon Ltd. for the donation of the academic research license to the JChem package.
This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Grant Agreement No. 641939.
Eawag, Swiss Federal Institute of Aquatic Science and Technology, 8600, Dübendorf, Switzerland
Jonas Mechelke, Philipp Longrée, Heinz Singer & Juliane Hollender
Institute of Biogeochemistry and Pollutant Dynamics, ETH Zürich, 8092, Zürich, Switzerland
Jonas Mechelke & Juliane Hollender
Correspondence to Heinz Singer or Juliane Hollender.
Mechelke, J., Longrée, P., Singer, H. et al. Vacuum-assisted evaporative concentration combined with LC-HRMS/MS for ultra-trace-level screening of organic micropollutants in environmental water samples. Anal Bioanal Chem 411, 2555–2567 (2019). https://doi.org/10.1007/s00216-019-01696-3
Revised: 08 February 2019
Issue Date: 19 May 2019
Keywords: Multiresidue analysis · PMOC · Large-volume injection · LC-HRMS · Non-target screening · Orbitrap
Basic Derivatives
The basic derivatives correspond to the simplest rules of differentiation, such as the derivative of a constant, the derivative of a power, the derivative of a sum, and the derivative of a product, among others.
Singular integral equation
An equation containing the unknown function under the integral sign of an improper integral in the sense of Cauchy (cf. Cauchy integral). Depending on the dimension of the manifold over which the integrals are taken, one distinguishes one-dimensional and multi-dimensional singular integral equations. In comparison with the theory of Fredholm equations (cf. Fredholm equation), the theory of singular integral equations is more complex. For example, the theories of one-dimensional and multi-dimensional singular integral equations, both in the formulation of definitive results and in the methods used to establish them, differ significantly from one another. In the one-dimensional case, the theory is more fully developed, and its results are formulated more simply than the corresponding results in the multi-dimensional case. In what follows, main attention will be given to the one-dimensional case.
An important class of one-dimensional singular integral equations are those with a Cauchy kernel:
$$ \tag{1 } a ( t) \phi ( t) + \frac{b ( t) }{\pi i } \int\limits _ \Gamma \frac{\phi ( \tau ) }{\tau - t } \ d \tau + \int\limits _ \Gamma k ( t, \tau ) \phi ( \tau ) \ d \tau = f ( t), $$
$$ t \in \Gamma , $$
where $ a $, $ b $, $ k $, $ f $ are known functions, $ k $ is a Fredholm kernel (see Integral operator), $ \phi $ is the desired function, $ \Gamma $ is a planar curve, and the improper integral is to be understood as a Cauchy principal value, i.e.
$$ \int\limits _ \Gamma \frac{\phi ( \tau ) }{\tau - t } \ d \tau = \ \lim\limits _ {\epsilon \rightarrow 0 } \ \int\limits _ {\Gamma _ \epsilon } \frac{\phi ( \tau ) }{\tau - t } \ d \tau ,\ t \in \Gamma , $$
where $ \Gamma _ \epsilon = \Gamma \setminus l _ \epsilon $, $ l _ \epsilon $ being the arc $ t ^ \prime tt ^ {\prime\prime} $ on $ \Gamma $ such that $ tt ^ \prime $ and $ tt ^ {\prime\prime} $ are both of length $ \epsilon $.
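As a purely numerical illustration of this symmetric limit (added here; it is not part of the original article), the principal value can be approximated by integrating over $ \Gamma _ \epsilon $ for a decreasing sequence of $ \epsilon $. The sketch below does this for the model case of an integral over a real interval, with an illustrative density $ \phi ( \tau ) = e ^ {- \tau ^ {2} } $ and pole $ t = 0.5 $; the endpoints and the values of $ \epsilon $ are arbitrary choices.

import numpy as np
from scipy.integrate import quad

def principal_value(phi, t, a, b, eps):
    # Exclude the symmetric interval (t - eps, t + eps) and integrate the rest.
    f = lambda tau: phi(tau) / (tau - t)
    left, _ = quad(f, a, t - eps)
    right, _ = quad(f, t + eps, b)
    return left + right

phi = lambda tau: np.exp(-tau ** 2)
for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    print(eps, principal_value(phi, 0.5, -5.0, 5.0, eps))
# The printed values stabilise as eps decreases; the limit is the principal value.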
The operator $ K $ defined by the left-hand side of (1) is called a singular operator (or sometimes a general singular operator):
$$ \tag{2 } K = aI + bS + V, $$
where $ I $ is the identity operator, $ S $ is a singular integral operator (sometimes called a singular integral operator with Cauchy kernel), i.e.
$$ ( S \phi ) ( t) = \ { \frac{1}{\pi i } } \int\limits _ \Gamma \frac{\phi ( \tau ) }{\tau - t } \ d \tau ,\ \ t \in \Gamma , $$
and $ V $ is the integral operator with kernel $ k ( t, \tau) $.
The operator $ K _ {0} = aI + bS $ is called the characteristic part of the singular operator $ K $, or the characteristic singular operator, and the equation
$$ \tag{3 } a ( t) \phi ( t) + \frac{b ( t) }{\pi i } \int\limits _ \Gamma \frac{\phi ( \tau ) }{\tau - t } \ d \tau = f ( t),\ \ t \in \Gamma , $$
is called a characteristic singular integral equation, the functions $ a $ and $ b $ being the coefficients of the corresponding operator or equation.
$$ a ( t) \psi ( t) - { \frac{1}{\pi i } } \int\limits _ \Gamma \frac{b ( \tau ) \psi ( \tau ) }{\tau - t } \ d \tau + $$
$$ + \int\limits _ \Gamma k ( \tau , t) \psi ( \tau ) d \tau = g ( t),\ t \in \Gamma , $$
is called the adjoint of equation (1), and the operator $ K ^ \prime = aI + SbI + V ^ \prime $ ($ V ^ \prime $ being the integral operator with kernel $ k ( \tau , t) $) is called the adjoint of $ K $. In particular, $ K _ {0} ^ \prime = aI + SbI $ is the adjoint of $ K _ {0} $.
The operators $ K $, $ K _ {0} $, $ K ^ \prime $, $ K _ {0} ^ \prime $, or their corresponding equations, are said to be of normal type if the functions
$$ A = a + b,\ \ B = a - b $$
do not vanish anywhere on $ \Gamma $. In this case one also says that the coefficients of the operator or equation satisfy the normality condition.
Let $ H _ \alpha ( \Gamma ) $, $ 0 < \alpha \leq 1 $, be the class of functions $ \{ f \} $ defined on $ \Gamma $ and satisfying the condition
$$ | f ( t _ {1} ) - f ( t _ {2} ) | \leq \textrm{ const } | t _ {1} - t _ {2} | ^ \alpha , $$
for all $ t _ {1} , t _ {2} \in \Gamma $. If $ f $ belongs to $ H _ \alpha ( \Gamma ) $ for some admissible value $ \alpha $ and knowledge of the numerical value of $ \alpha $ is not required, then one writes $ f \in H ( \Gamma ) $, or even $ f \in H $ if it is clear from the context which contour $ \Gamma $ is meant.
The set $ H $ is called a Hölder class of functions, and if $ f \in H $ one says that $ f $ satisfies a Hölder condition or that $ f $ is an $ H $-function.
Let $ G $ be a complex-valued continuous function that does not vanish on an oriented closed simple smooth contour $ \Gamma $, and let
$$ \tag{4 } \kappa = \ { \frac{1}{2 \pi } } [ \mathop{\rm arg} G ( t)] _ \Gamma , $$
where $ [ \cdot ] _ \Gamma $ denotes the increment of the function between brackets after a single circuit of $ \Gamma $ in the positive direction. The integer $ \kappa $ is called the index of the function $ G $, $ \kappa = \mathop{\rm ind} G $.
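The index (4) is simply the winding number of the curve $ G ( \Gamma ) $ around the origin. As an illustration outside the original article, it can be approximated by accumulating the argument increments of $ G $ along a fine discretisation of $ \Gamma $; in the sketch below $ \Gamma $ is the unit circle and the two test functions are chosen so that the expected indices are 2 and 0.

import numpy as np

def index_of(G, n=4000):
    # Winding number of G(t) as t runs once over the unit circle (counter-clockwise).
    t = np.exp(2j * np.pi * np.arange(n + 1) / n)
    values = G(t)
    dargs = np.angle(values[1:] / values[:-1])   # small argument increment per step
    return int(round(dargs.sum() / (2 * np.pi)))

print(index_of(lambda t: t ** 2))             # 2: a double zero inside the contour
print(index_of(lambda t: (t - 0.3) / t))      # 0: one zero and one pole inside cancel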
Solution of the characteristic singular integral equation and its adjoint.
Let $ \Gamma $ be a simple, closed, oriented, smooth contour on which the positive direction is chosen in such a way that it bounds a finite domain on the left, let the coordinate origin lie in this domain, let $ a, b, f \in H ( \Gamma ) $, and let $ a $ and $ b $ satisfy the normality condition. Further, let $ \kappa $ be defined by (4), with
$$ \tag{5 } G = \frac{a - b }{a + b } . $$
Then the following assertions hold.
1) If $ \kappa \geq 0 $, then the equation (3) is solvable in $ H ( \Gamma ) $ for any right-hand side $ f \in H ( \Gamma ) $, and all its $ H $-solutions are given by the formula (see [1], [2])
$$ \tag{6 } \phi ( t ) = \ a _ {*} ( t) f ( t) - \frac{b _ {*} ( t) \omega ( t) }{\pi i } \int\limits _ \Gamma \frac{f ( \tau ) }{\omega ( \tau ) ( \tau - t) } \ d \tau + $$
$$ + b _ {*} ( t) \omega ( t) p _ {\kappa - 1 } ( t), $$
$$ a _ {*} = \ { \frac{a}{a ^ {2} - b ^ {2} } } ,\ \ b _ {*} = \ { \frac{b}{a ^ {2} - b ^ {2} } } , $$
$$ \omega ( t) = t ^ {- {\kappa / 2 } } \sqrt {a ^ {2} ( t) - b ^ {2} ( t) } \mathop{\rm exp} \left [ { \frac{1}{2 \pi i } } \int\limits _ \Gamma \frac{ \mathop{\rm ln} [ \tau ^ {- \kappa } G ( \tau )] }{\tau - t } d \tau \right ] , $$
and $ p _ {\kappa - 1 } $ is an arbitrary polynomial of degree $ \kappa - 1 $ ($ p _ {- 1} = 0 $). If $ \kappa < 0 $, then equation (3) is solvable in $ H ( \Gamma ) $ if and only if $ f $ satisfies the condition
$$ \int\limits _ \Gamma \frac{t ^ {k} }{\omega ( t) } f ( t) dt = 0,\ \ k = 0, \dots, - \kappa - 1. $$
When these conditions hold, (3) has a unique $ H $-solution, given by the formula (6) with $ p _ {\kappa - 1 } = 0 $.
2) The singular integral equation adjoint to (3),
$$ \tag{7 } a ( t) \psi ( t) - { \frac{1}{\pi i } } \int\limits _ \Gamma \frac{b ( \tau ) \psi ( \tau ) }{\tau - t } \ d \tau = g ( t),\ \ t \in \Gamma , $$
is solvable in $ H $ for any $ g \in H ( \Gamma ) $ if $ \kappa \leq 0 $, and all its $ H $-solutions are given by the formula
$$ \tag{8 } \psi ( t) = a _ {*} ( t) g ( t) + $$
$$ + { \frac{1}{\pi i \omega ( t) } } \int\limits _ \Gamma \frac{\omega ( \tau ) b _ {*} ( \tau ) g ( \tau ) }{\tau - t } d \tau + \frac{p _ {- \kappa - 1 } }{\omega ( t) } . $$
But if $ \kappa > 0 $, equation (7) is solvable if and only if $ g $ satisfies the $ \kappa $ conditions:
$$ \int\limits _ \Gamma t ^ {k} b ( t) \omega ( t) g ( t) dt = 0,\ \ k = 0, \dots, \kappa - 1, $$
and if these conditions hold, the solution is given by (8) with $ p _ {- \kappa - 1 } = 0 $.
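As a simple worked special case (added here for illustration; it is not part of the original article), let $ a $ and $ b $ be constants with $ a ^ {2} - b ^ {2} \neq 0 $ on a closed contour. Then $ G $ in (5) is a non-zero constant, so $ \kappa = 0 $ and the polynomial term in (6) disappears; $ \omega $ is also a non-zero constant and cancels between its two occurrences in (6), which therefore reduces to

$$ \phi ( t) = \ \frac{a}{a ^ {2} - b ^ {2} } f ( t) - \frac{b}{\pi i ( a ^ {2} - b ^ {2} ) } \int\limits _ \Gamma \frac{f ( \tau ) }{\tau - t } \ d \tau . $$

That this solves (3) can also be checked directly by substitution, using the fact that the operator $ S $ satisfies $ S ^ {2} = I $ on a closed smooth contour.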
Noether's theorems. Let $ \nu $ and $ \nu ^ \prime $ be the numbers of linearly independent solutions of the homogeneous equations $ K _ {0} \phi = 0 $ and $ K _ {0} ^ \prime \psi = 0 $, respectively. Then the difference $ \nu - \nu ^ \prime $ is called the index of the operator $ K _ {0} $ or of the equation $ K _ {0} \phi = 0 $:
$$ \mathop{\rm ind} K _ {0} = \nu - \nu ^ \prime . $$
Theorem 1. The homogeneous singular integral equations $ K _ {0} \phi = 0 $ and $ K _ {0} ^ \prime \psi = 0 $ have a finite number of linearly independent solutions.
Theorem 2. Necessary and sufficient conditions for the solvability of the non-homogeneous equation (3) are:
$$ \int\limits _ \Gamma f ( t) \psi _ {j} ( t) \ dt = 0,\ \ j = 1, \dots, \nu ^ \prime , $$
where $ \psi _ {1}, \dots, \psi _ {\nu ^ \prime } $ is a complete set of linearly independent solutions of the adjoint homogeneous equation $ K _ {0} ^ \prime \psi = 0 $.
Theorem 3. The index of $ K _ {0} $ (cf. Index of an operator) is equal to the index of the function $ G $ defined by equation (5), i.e.
$$ \tag{9 } \mathop{\rm ind} K _ {0} = \ { \frac{1}{2 \pi } } \left [ \mathop{\rm arg} \frac{a - b }{a + b } \right ] _ \Gamma . $$
These theorems remain valid in the case of the general singular integral equation (1), that is, in these theorems $ K _ {0} $, $ K _ {0} ^ \prime $ can be replaced by $ K $, $ K ^ \prime $, respectively. It is only necessary to bear in mind that, in the case of general singular integral equations, $ \nu $ and $ \nu ^ \prime $ are both non-zero in general, in contrast to the case of characteristic singular integral equations, when one of them must be zero.
Theorems 1–3 are called after F. Noether, who first proved them [9] in the case of a one-dimensional singular integral equation with Hilbert kernel:
$$ \tag{10 } a ( s) \phi ( s) + \frac{b ( s) }{2 \pi } \int\limits _ {- \pi } ^ \pi \phi ( t) \mathop{\rm cot} \ { \frac{t - s }{2} } dt + $$
$$ + \int\limits _ {- \pi } ^ \pi k ( s, t) \phi ( t) dt = f ( s),\ - \pi \leq s \leq \pi . $$
These theorems are analogous to the Fredholm theorems (see Fredholm equation) and differ from them only in that the numbers of linearly independent solutions of the homogeneous equation and its adjoint are in general distinct, that is, whereas the index of a Fredholm equation is always equal to zero, a singular integral equation can have non-zero index.
Like the Noether theorems, the formulas (6) and (8) remain valid in the case when $ \Gamma = \cup \Gamma _ {k} $ consists of a finite number of smooth mutually-disjoint closed contours. In this case the symbol $ [ \cdot ] _ \Gamma $ in (4) denotes the sum of the increments of the function between brackets after a circuit of each of the contours $ \Gamma _ {k} $.
The case when $ \Gamma $ is a finite union of smooth mutually-disjoint open contours requires special consideration. If $ \phi $ is an $ H $-function inside any closed part of every $ \Gamma _ {k} $ not containing the end points, and if close to either end $ c $ it can be written in the form $ \phi ( t) = \phi _ {*} ( t) | t - c | ^ {- \alpha } $, $ 0 \leq \alpha = \textrm{ const } < 1 $, where $ \phi _ {*} $ is an $ H $-function in a neighbourhood of $ c $ containing $ c $, then one says that $ \phi $ belongs to the class $ H ^ {*} $. If $ a, b \in H $, $ f, g \in H ^ {*} $ and in $ H ^ {*} $ one looks for solutions of the equations (3), (7), then one can define $ \kappa $ and $ \phi $ in such a way that (6) and (8) remain valid. Furthermore, if one defines in a corresponding way subclasses of $ H ^ {*} $ in which one looks for solutions of a given singular integral equation and its adjoint, then the Noether theorems also remain valid (see [1]).
The above results can be extended in various ways. It can be shown (see [1]) that under certain conditions they also remain valid in the case of a piecewise-smooth contour $ \Gamma $ (that is, when $ \Gamma $ is the union of a finite number of smooth open arcs, which are mutually-disjoint except for their end points). Singular integral equations can also be studied in the Lebesgue function spaces $ L _ {p} ( \Gamma ) $ and $ L _ {p} ( \Gamma , \rho ) $, where $ p > 1 $ and $ \rho $ is a certain weight (see [4]–[7]). [4]–[6] contain results which directly extend those stated above.
Let $ \Gamma $ be a simple rectifiable contour with equation $ t = t ( s) $, $ 0 \leq s \leq \gamma $, where $ s $ is the arc-length on $ \Gamma $ starting from some fixed point and $ \gamma $ is the length of $ \Gamma $. One says that a function $ f $ defined on $ \Gamma $ is almost-everywhere finite, measurable, integrable, etc., if the function $ f ( t ( s)) $ has the corresponding property on the interval $ [ 0, \gamma ] $. The Lebesgue integral of $ f $ on $ \Gamma $ is defined by
$$ \int\limits _ \Gamma f ( t) dt = \ \int\limits _ { 0 } ^ \gamma f ( t ( s)) t ^ \prime ( s) ds. $$
Let $ L _ {p} ( \Gamma ) $ denote the set of measurable functions on $ \Gamma $ such that $ | f | ^ {p} $ is integrable on $ \Gamma $. The function class $ L _ {p} ( \Gamma ) $, $ p \geq 1 $, becomes a Banach space if one introduces the norm by
$$ \| f \| = \ \left ( \int\limits _ \Gamma | f | ^ {p} ds \right ) ^ {1/p} . $$
If in equations (3), (7) the equalities hold almost-everywhere, with continuous coefficients $ a $ and $ b $ satisfying the normality condition and $ f, g \in L _ {p} ( \Gamma ) $, $ p > 1 $, then 1) and 2) remain valid upon replacing $ H $ by $ L _ {p} ( \Gamma ) $, $ p > 1 $. Furthermore, if the solutions of $ K \phi = f $, where $ K $ has the form (2), are sought in $ L _ {p} ( \Gamma ) $, $ p > 1 $, and the solutions of its homogeneous adjoint $ K ^ \prime \psi = 0 $ are sought in $ L _ {p ^ \prime } ( \Gamma ) $, where $ p ^ \prime = p/( p - 1) $, then the Noether theorems also remain valid and $ V $ may be any completely-continuous operator on $ L _ {p} ( \Gamma ) $.
When $ \Gamma $ is a finite union of open contours, or if $ \Gamma $ is closed but the coefficients of the singular integral equation are not continuous, then solutions of the equations can often be found in weighted function spaces $ L _ {p} ( \Gamma , \rho ) $, $ p > 1 $ ($ f \in L _ {p} ( \Gamma , \rho ) \iff \rho f \in L _ {p} ( \Gamma ) $). Under specific conditions on the weight function $ \rho $, results analogous to the above are valid.
The regularization problem.
One of the basic problems in the theory of singular integral equations is the regularization problem, that is, the problem of reducing a singular integral equation to a Fredholm equation.
Let $ E $ and $ E _ {1} $ be Banach spaces, which may coincide, and let $ A: E \rightarrow E _ {1} $ be a bounded linear operator. A bounded operator $ B $ is called a left regularizer of $ A $ if $ BA = I + V $, where $ I $, $ V $ are the identity and a completely-continuous operator on $ E $, respectively. If the equations $ A \phi = f $ and $ BA \phi = Bf $ are equivalent for each $ f \in E _ {1} $, then $ B $ is called a left equivalent regularizer of $ A $. A bounded operator $ B $ is called a right regularizer of $ A $ if $ AB = I _ {1} + V _ {1} $, where $ I _ {1} $, $ V _ {1} $ are the identity and a completely-continuous operator on $ E _ {1} $, respectively. If the equations $ A \phi = f $ and $ AB \psi = f $ are simultaneously solvable or unsolvable as $ f $ ranges over $ E _ {1} $, and in the case of solvability the relation $ \phi = B \psi $ holds between their solutions, then $ B $ is called a right equivalent regularizer of $ A $. If $ B $ is simultaneously a left and right regularizer of $ A $, then it is called a two-sided regularizer, or simply a regularizer of $ A $. One says that $ A $ admits left, right, two-sided, equivalent, regularization if it has a left, right, two-sided, or equivalent regularizer, respectively.
Let $ K $ be the operator defined by (2), where $ \Gamma $ is a closed simple smooth contour, $ a $ and $ b $ are $ H $-functions (or continuous functions) satisfying the normality condition and $ V $ is a completely-continuous operator on $ L _ {p} ( \Gamma ) $, $ p > 1 $. Then $ K $ has an uncountable set of regularizers on $ L _ {p} ( \Gamma ) $, one of which is, for example, the operator
$$ M = \ { \frac{a}{a ^ {2} - b ^ {2} } } I - { \frac{b}{a ^ {2} - b ^ {2} } } S. $$
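A brief sketch of why $ M $ regularizes $ K $ (added for illustration; it is not part of the original article): modulo completely-continuous terms one may use the standard facts that $ S ^ {2} = I $ on a closed smooth contour and that the commutator of $ S $ with multiplication by an $ H $-function is completely continuous. Writing $ a _ {*} = a / ( a ^ {2} - b ^ {2} ) $ and $ b _ {*} = b / ( a ^ {2} - b ^ {2} ) $ as before, this gives

$$ MK = ( a _ {*} I - b _ {*} S) ( aI + bS + V) = ( a _ {*} a - b _ {*} b) I + ( a _ {*} b - b _ {*} a) S + V _ {1} = \ I + V _ {1} , $$

where $ V _ {1} $ is completely continuous; a symmetric computation gives $ KM = I + V _ {2} $, so $ M $ is a two-sided regularizer.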
For $ K $ to admit left equivalent regularization, it is necessary and sufficient that its index $ \kappa $ is non-negative [7]. One can take $ M $ to be an equivalent left regularizer. If $ \kappa < 0 $, then $ K $ admits right equivalent regularization, which can be realized using $ M $ (see [1]).
Systems of singular integral equations.
If in (1) $ a $, $ b $ and $ k $ are square matrices of order $ n $, regarded as matrices of linear transformations of an unknown vector $ \phi = ( \phi _ {1}, \dots, \phi _ {n} ) $, and $ f = ( f _ {1}, \dots, f _ {n} ) $ is a known vector, then (1) is called a system of singular integral equations. It is said to be of normal type if the matrices $ A = a + b $ and $ B = a - b $ are non-singular on $ \Gamma $, that is, $ \mathop{\rm det} A \neq 0 $ and $ \mathop{\rm det} B \neq 0 $ for all $ t \in \Gamma $.
The Noether theorems remain valid for a system of singular integral equations in the class $ H $ (see [1], [3]), and can be extended to the case of Lebesgue function spaces (see [4], [5]). In contrast to the case of a single equation, a characteristic system of singular integral equations cannot, in general, be solved by quadratures, although there is a formula similar to (9) for the index (see [1]):
$$ \mathop{\rm ind} K = \ { \frac{1}{2 \pi } } [ \mathop{\rm arg} \mathop{\rm det} \ A ^ {- 1} B] _ \Gamma . $$
For a system of singular integral equations, regularization problems (see [3]) are similar to those for a single equation.
There have been several investigations of both one singular integral equation and a system of such equations when the normality condition is violated (see [11] and the bibliography contained therein).
Multi-dimensional singular integral equations.
These are equations of the form
$$ \tag{11 } a ( t) \phi ( t) + \int\limits _ \Gamma \frac{g ( t, \theta ) }{r ^ {m} } \phi ( \tau ) d \tau + ( V \phi ) ( t) = \ f ( t),\ \ t \in \Gamma , $$
where $ \Gamma $ is a domain in the Euclidean space $ E _ {m} $, $ m > 1 $. $ \Gamma $ may be bounded or unbounded, and can, in particular cases, coincide with $ E _ {m} $; $ t $ and $ \tau $ are points of $ E _ {m} $, $ r = | t - \tau | $, $ \theta = ( \tau - t)/r $, $ d \tau $ is the volume element in $ E _ {m} $, and $ V $ is a completely-continuous operator on the Banach function space in which the solution is sought. Further, $ a $ and $ g $ are given functions and the improper singular integral is understood in the principal value sense, that is,
$$ \tag{12 } \int\limits _ \Gamma \phi ( \tau ) \frac{g ( t, \theta ) }{r ^ {m} } \ d \tau = \ \lim\limits _ {\epsilon \rightarrow 0 } \ \int\limits _ {\Gamma \setminus \{ r < \epsilon \} } \phi ( \tau ) \frac{g ( t, \theta ) }{r ^ {m} } \ d \tau . $$
Here $ t $ is called the pole, $ g ( t, \theta ) $ the characteristic and $ \phi $ the density of the singular integral (12). As a rule, the limit in (12) does not exist when the following condition is violated:
$$ \tag{13 } \int\limits _ \sigma g ( t, \theta ) d \sigma = 0, $$
where $ \sigma $ is the unit sphere with centre at the origin. Thus it is assumed that (13) always holds.
In the theory of multi-dimensional singular integral equations, an important role is played by the notion of a symbol (cf. Symbol of an operator). It is defined in terms of the functions $ a $ and $ g $, and from a given symbol the original singular operator can be recovered up to a completely-continuous term. Composition of singular operators corresponds to multiplication of their symbols. It has been shown [7] that under certain restrictions (11) admits a regularization in the space $ L _ {p} $, $ p > 1 $, if and only if the absolute value of its symbol has a positive lower bound, and in this case the Fredholm theorems hold.
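For comparison (an aside added here, not in the original article), in the one-dimensional case of a closed contour the symbol of the characteristic operator $ K _ {0} = aI + bS $ is usually taken to be the pair of functions

$$ a ( t) + \theta b ( t) ,\ \ \theta = \pm 1 , $$

so the requirement that the symbol be bounded away from zero is exactly the normality condition $ a \pm b \neq 0 $ used earlier.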
Historical survey.
The study of one-dimensional singular integral equations originated in the works of D. Hilbert and H. Poincaré at almost the same time as the formulation of the theory of Fredholm equations (cf. Fredholm equation). A special case, a singular integral equation with Cauchy kernel, was considered much earlier in the doctoral thesis of Yu.V. Sokhotskii, published in St. Petersburg in 1873; however, this research remained unnoticed.
Basic results on the formulation of a general theory of the equations (1) and (10) were obtained at the beginning of the 1920s by Noether [9] and T. Carleman [10]. Noether first introduced the concept of an index and proved Theorems 1–3 above by applying the method of left regularization. This method was first described (in various special cases) by Poincaré and Hilbert, but its general form is due to Noether. A crucial point in the realization of the above method involves the application of a permutation (composition) formula for repeated singular integrals in the Cauchy principal value sense (the Poincaré–Bertrand formula). For certain special classes of equations (3), Carleman gave the basic idea of a method for reducing such an equation to the following boundary value problem in the theory of analytic functions (the linear conjugacy problem, see [1] and Riemann–Hilbert problem (analytic functions)):
$$ \Phi ^ {+} ( t) = G ( t) \Phi ^ {-} ( t) + g ( t),\ \ t \in \Gamma , $$
and found a way of constructing an explicit solution. Carleman and I.N. Vekua found a method of regularizing equation (1) involving a solution of the characteristic equation (3).
The great significance, both theoretical and practical, of singular integral equations became especially apparent towards the end of the 1930s in connection with the solution of certain very important problems in the mechanics of a solid medium (the theory of elasticity, hydro- and aeromechanics, and others) and theoretical physics. The theory of one-dimensional singular integral equations was significantly advanced in the 1940s and reached a final form (in a definite sense) in the works of Soviet mathematicians. A presentation of this theory in Hölder classes of functions is to be found in a monograph of one of its creators, N.I. Muskhelishvili (see [1]). This monograph also stimulated scientific investigations in certain other directions, for example in the theory of singular integral equations not satisfying the Hausdorff normality condition, singular integral equations with non-diagonal singularities (with displacements), Wiener–Hopf equations, multi-dimensional singular integral equations, etc.
The earliest studies of multi-dimensional singular integral equations were carried out in 1928 by F. Tricomi, who established a permutation formula for two-dimensional singular integrals and applied it to the solution of a class of singular integral equations. In this direction, the fundamental work was done in 1934 by G. Giraud, who proved the validity of the Fredholm theorems for certain classes of multi-dimensional singular integral equations on Lyapunov manifolds.
[1] N.I. Muskhelishvili, "Singular integral equations" , Wolters-Noordhoff (1972) (Translated from Russian) MR0355494 Zbl 0488.45002 Zbl 0174.16202 Zbl 0174.16201 Zbl 0103.07502 Zbl 0108.29203 Zbl 0051.33203 Zbl 0041.22601
[2] F.D. Gakhov, "Boundary value problems" , Pergamon (1966) (Translated from Russian) MR0198152 Zbl 0141.08001
[3] N.P. Vekua, "Systems of singular integral equations and some boundary value problems" , Moscow (1970) (In Russian)
[4] B.V. Khvedelidze, "Linear discontinuous boundary problems in the theory of functions, singular integral equations and some applications" Trudy Tbilis. Mat. Inst. Akad. Nauk. GruzSSR , 23 (1956) pp. 3–158 (In Russian) Zbl 0083.30002
[5] I.I. Danilyuk, "Nonregular boundary value problems on the plane" , Moscow (1975) (In Russian)
[6] I. [I.Ts. Gokhberg] Gohberg, N. Krupnik, "Einführung in die Theorie der eindimensionalen singulären Integraloperatoren" , Birkhäuser (1979) (Translated from Russian) MR0545507 Zbl 0413.47040
[7] S.G. Mikhlin, "Multidimensional singular integrals and integral equations" , Pergamon (1965) (Translated from Russian) MR0185399 Zbl 0129.07701
[8] A.V. Bitsadze, "Boundary value problems for second-order elliptic equations" , North-Holland (1968) (Translated from Russian) MR0226183 Zbl 0167.09401
[9] F. Noether, "Ueber eine Klasse singulärer Integralgleichungen" Math. Ann. , 82 (1921) pp. 42–63 Zbl 47.0369.02
[10] T. Carleman, "Sur la résolution de certaines équations intégrales" Arkiv. Mat. Astron. Fys. , 16 : 26 (1922) pp. 1–19
[11] S. Prössdorf, "Einige Klassen singulärer Gleichungen" , Birkhäuser (1974) MR0499984 Zbl 0302.45009 Zbl 0302.45008
For certain systems of singular integral equations explicit solution formulas can be obtained by using the state-space approach from systems theory (cf. [a1] and Integral equation of convolution type).
[a1] H. Bart, I. Gohberg, M.A. Kaashoek, "Minimal factorization of matrix and operator functions" , Birkhäuser (1979)
[a2] K. Clancey, I. Gohberg, "Factorization of matrix functions and singular integral operators" , Birkhäuser (1981) MR0657762 Zbl 0474.47023
Singular integral equation. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Singular_integral_equation&oldid=52327
This article was adapted from an original article by A.V. Bitsadze, B.V. Khvedelidze (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
Cognitive control is a broad concept that refers to guidance of cognitive processes in situations where the most natural, automatic, or available action is not necessarily the correct one. Such situations typically evoke a strong inclination to respond but require people to resist responding, or they evoke a strong inclination to carry out one type of action but require a different type of action. The sources of these inclinations that must be overridden are various and include overlearning (e.g., the overlearned tendency to read printed words in the Stroop task), priming by recent practice (e.g., the tendency to respond in the go/no-go task when the majority of the trials are go trials, or the tendency to continue sorting cards according to the previously correct dimension in the Wisconsin Card Sorting Test [WCST]; Grant & Berg, 1948) and perceptual salience (e.g., the tendency to respond to the numerous flanker stimuli as opposed to the single target stimulus in the flanker task). For the sake of inclusiveness, we also consider the results of studies of reward processing in this section, in which the response tendency to be overridden comes from the desire to have the reward immediately.
Soldiers should never be treated like children, because then they will act like them. However, there's a reason why the 1SG is known as the Mother of the Company and the Platoon Sergeant is known as a Platoon Daddy: they run the day-to-day operations of the household, get the kids to school so to speak, and focus on the minutiae of readiness and operational execution in all its glory. Officers forget they are the second link in the Chain of Command, and a well-operating duo of Team Leader and Squad Leader should be handling 85% of all Soldier issues, while the Platoon Sergeant handles the other 15% with the 1SG. Platoon Leaders and Commanders should always be present: training, leading by example, focusing on culture building, tracking and supporting NCOs. They should be focused on the big-business side of things, stepping in to administer punishment or award and reward performance. If an officer at any level is having to step into Soldiers' day-to-day lives, an NCO at some level is failing. Officers should be junior Officers and junior Enlisted right alongside their counterparts instead of eating their young and touting their "maturity" or status. If anything, Officers should be asking their NCOs where they should effect, assist, support or provide cover toward initiatives and plans that create consistency and controlled chaos for growth of individuals two levels up and one level down of operational capabilities at every echelon of command.
For proper brain function, our CNS (Central Nervous System) requires several amino acids. These derive from protein-rich foods. Consider amino acids to be protein building blocks. Many of them are dietary precursors to vital neurotransmitters in our brain. Epinephrine (adrenaline), serotonin, dopamine, and norepinephrine assist in enhancing mental performance. A few examples of amino acid nootropics are:
In contrast to the types of memory discussed in the previous section, which are long-lasting and formed as a result of learning, working memory is a temporary store of information. Working memory has been studied extensively by cognitive psychologists and cognitive neuroscientists because of its role in executive function. It has been likened to an internal scratch pad; by holding information in working memory, one keeps it available to consult and manipulate in the service of performing tasks as diverse as parsing a sentence and planning a route through the environment. Presumably for this reason, working memory ability correlates with measures of general intelligence (Friedman et al., 2006). The possibility of enhancing working memory ability is therefore of potential real-world interest.
(On a side note, I think I understand now why modafinil doesn't lead to a Beggars in Spain scenario; BiS includes massive IQ and motivation boosts as part of the Sleepless modification. Just adding 8 hours a day doesn't do the world-changing trick, no more than some researchers living to 90 and others to 60 has lead to the former taking over. If everyone were suddenly granted the ability to never need sleep, many of them would have no idea what to do with the extra 8 or 9 hours and might well be destroyed by the gift; it takes a lot of motivation to make good use of the time, and if one cannot, then it is a curse akin to the stories of immortals who yearn for death - they yearn because life is not a blessing to them, though that is a fact more about them than life.)
A study published in Neuropsychopharmacology in August 2002 reported that Bacopa monnieri decreases the rate of forgetting newly acquired information and improves memory consolidation and verbal learning rate. It also helps enhance nerve impulse transmission, which leads to increased alertness, and it is known to relieve the effects of anxiety and depression. These benefits are attributed to Bacopa monnieri activating choline acetyltransferase and inhibiting acetylcholinesterase, which raises the level of acetylcholine in the brain, a chemical associated with improved memory and attention.
Table 5 lists the results of 16 tasks from 13 articles on the effects of d-AMP or MPH on cognitive control. One of the simplest tasks used to study cognitive control is the go/no-go task. Subjects are instructed to press a button as quickly as possible for one stimulus or class of stimuli (go) and to refrain from pressing for another stimulus or class of stimuli (no go). De Wit et al. (2002) used a version of this task to measure the effects of d-AMP on subjects' ability to inhibit a response and found enhancement in the form of decreased false alarms (responses to no-go stimuli) and increased speed of correct go responses. They also found that subjects who made the most errors on placebo experienced the greatest enhancement from the drug.
Proteus Digital Health (Redwood City, Calif.) offers an FDA-approved microchip—an ingestible pill that tracks medication-taking behavior and how the body is responding to medicine. Through the company's Digital Health Feedback System, the sensor monitors blood flow, body temperature and other vital signs for people with heart problems, schizophrenia or Alzheimer's disease.
But like any other supplement, there are some safety concerns negative studies like Fish oil fails to hold off heart arrhythmia or other reports cast doubt on a protective effect against dementia or Fish Oil Use in Pregnancy Didn't Make Babies Smart (WSJ) (an early promise but one that faded a bit later) or …Supplementation with DHA compared with placebo did not slow the rate of cognitive and functional decline in patients with mild to moderate Alzheimer disease..
As with other nootropics, the way it works is still partially a mystery, but most research points to it acting as a weak dopamine reuptake inhibitor. Put simply, it increases your dopamine levels the same way cocaine does, but in a much less extreme fashion. The enhanced reward system it creates in the brain, however, makes it what Patel considers to be the most potent cognitive enhancer available; and he notes that some people go from sloth to superman within an hour or two of taking it.
One symptom of Alzheimer's disease is a reduced brain level of the neurotransmitter called acetylcholine. It is thought that an effective treatment for Alzheimer's disease might be to increase brain levels of acetylcholine. Another possible treatment would be to slow the death of neurons that contain acetylcholine. Two drugs, Tacrine and Donepezil, are both inhibitors of the enzyme (acetylcholinesterase) that breaks down acetylcholine. These drugs are approved in the US for treatment of Alzheimer's disease.
A synthetic derivative of Piracetam, aniracetam is believed to be the second most widely used nootropic in the Racetam family, popular for its stimulatory effects because it enters the bloodstream quickly. Initially developed for memory and learning, it is also claimed in many anecdotal reports to increase creativity. However, animal studies show no effect on the cognitive functioning of healthy adult mice.
Phenserine, as well as the drugs Aricept and Exelon, which are already on the market, work by increasing the level of acetylcholine, a neurotransmitter that is deficient in people with the disease. A neurotransmitter is a chemical that allows communication between nerve cells in the brain. In people with Alzheimer's disease, many brain cells have died, so the hope is to get the most out of those that remain by flooding the brain with acetylcholine.
The choline-based class of smart drugs plays important cognitive roles in memory, attention, and mood regulation. Acetylcholine (ACh) is one of the brain's primary neurotransmitters, and is also vital to the proper functioning of the peripheral nervous system. Studies with rats have shown that certain forms of learning and neural plasticity seem to be impossible in acetylcholine-depleted areas of the brain. This is particularly worth mentioning because (as noted above under the Racetams section) the Racetam class of smart drugs tends to deplete choline from the brain, so one of the classic "supplement stacks" – chemical supplements that are used together – is Piracetam combined with Choline Bitartrate. Choline can also be found in normal food sources, like egg yolks and soybeans.
If stimulants truly enhance cognition but do so to only a small degree, this raises the question of whether small effects are of practical use in the real world. Under some circumstances, the answer would undoubtedly be yes. Success in academic and occupational competitions often hinges on the difference between being at the top or merely near the top. A scholarship or a promotion that can go to only one person will not benefit the runner-up at all. Hence, even a small edge in the competition can be important.
Finally, a workforce high on stimulants wouldn't necessarily be more productive overall. "One thinks 'are these things dangerous?' – and that's important to consider in the short term," says Huberman. "But there's also a different question, which is: 'How do you feel the day afterwards?' Maybe you're hyper-focused for four hours, 12 hours, but then you're below baseline for 24 or 48."
Nootrobox co-founder Geoffrey Woo declines a caffeinated drink in favour of a capsule of his newest product when I meet him in a San Francisco coffee shop. The entire industry has a "wild west" aura about it, he tells me, and Nootrobox wants to fix it by pushing for "smarter regulation" so safe and effective drugs that are currently unclassified can be brought into the fold. Predictably, both companies stress the higher goal of pushing forward human cognition. "I am trying to make a smarter, better populace to solve all the problems we have created," says Nootroo founder Eric Matzner.
The abuse liability of caffeine has been evaluated.147,148 Tolerance development to the subjective effects of caffeine was shown in a study in which caffeine was administered at 300 mg twice each day for 18 days.148 Tolerance to the daytime alerting effects of caffeine, as measured by the MSLT, was shown over 2 days on which 250 g of caffeine was given twice each day48 and to the sleep-disruptive effects (but not REM percentage) over 7 days of 400 mg of caffeine given 3 times each day.7 In humans, placebo-controlled caffeine-discontinuation studies have shown physical dependence on caffeine, as evidenced by a withdrawal syndrome.147 The most frequently observed withdrawal symptom is headache, but daytime sleepiness and fatigue are also often reported. The withdrawal-syndrome severity is a function of the dose and duration of prior caffeine use…At higher doses, negative effects such as dysphoria, anxiety, and nervousness are experienced. The subjective-effect profile of caffeine is similar to that of amphetamine,147 with the exception that dysphoria/anxiety is more likely to occur with higher caffeine doses than with higher amphetamine doses. Caffeine can be discriminated from placebo by the majority of participants, and correct caffeine identification increases with dose.147 Caffeine is self-administered by about 50% of normal subjects who report moderate to heavy caffeine use. In post-hoc analyses of the subjective effects reported by caffeine choosers versus nonchoosers, the choosers report positive effects and the nonchoosers report negative effects. Interestingly, choosers also report negative effects such as headache and fatigue with placebo, and this suggests that caffeine-withdrawal syndrome, secondary to placebo choice, contributes to the likelihood of caffeine self-administration. This implies that physical dependence potentiates behavioral dependence to caffeine.
Fitzgerald 2012 and the general absence of successful experiments suggest not, as does the general historic failure of scores of IQ-related interventions in healthy young adults. Of the 10 studies listed in the original section dealing with iodine in children or adults, only 2 show any benefit; in lieu of a meta-analysis, a rule of thumb would be 20%, but both those studies used a package of dozens of nutrients - and not just iodine - so if the responsible substance were randomly picked, that suggests we ought to give it a chance of $20\% \times \frac{1}{\text{dozens}}$ of being iodine! I may be unduly optimistic if I give this as much as 10%.
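To make the arithmetic explicit (reading "dozens" as roughly 30 ingredients, an illustrative figure rather than one given above): $20\% \times \frac{1}{30} \approx 0.7\%$, so even a 10% estimate errs well on the generous side.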
For instance, they point to the U.S. Army's use of stimulants for soldiers to stave off sleep and to stay sharp. But the Army cares little about the long-term health effects of soldiers, who come home scarred physically or mentally, if they come home at all. It's a risk-benefit decision for the Army, and in a life-or-death situation, stimulants help.
The title question, whether prescription stimulants are smart pills, does not find a unanimous answer in the literature. The preponderance of evidence is consistent with enhanced consolidation of long-term declarative memory. For executive function, the overall pattern of evidence is much less clear. Over a third of the findings show no effect on the cognitive processes of healthy nonelderly adults. Of the rest, most show enhancement, although impairment has been reported (e.g., Rogers et al., 1999), and certain subsets of participants may experience impairment (e.g., higher performing participants and/or those homozygous for the met allele of the COMT gene performed worse on drug than placebo; Mattay et al., 2000, 2003). Whereas the overall trend is toward enhancement of executive function, the literature contains many exceptions to this trend. Furthermore, publication bias may lead to underreporting of these exceptions.
Another interpretation of the mixed results in the literature is that, in some cases at least, individual differences in response to stimulants have led to null results when some participants in the sample are in fact enhanced and others are not. This possibility is not inconsistent with the previously mentioned ones; both could be at work. Evidence has already been reviewed that ability level, personality, and COMT genotype modulate the effect of stimulants, although most studies in the literature have not broken their samples down along these dimensions. There may well be other as-yet-unexamined individual characteristics that determine drug response. The equivocal nature of the current literature may reflect a mixture of substantial cognitive-enhancement effects for some individuals, diluted by null effects or even counteracted by impairment in others.
The important factors seem to be: #1/MR6 (Creativity.self.rating, Time.Bitcoin, Time.Backups, Time.Blackmarkets, Gwern.net.linecount.log), #2/MR1 (Time.PDF, Time.Stats), #7/MR7 (Time.Writing, Time.Sysadmin, Time.Programming, Gwern.net.patches.log), and #8/MR8 (Time.States, Time.SRS, Time.Sysadmin, Time.Backups, Time.Blackmarkets). The rest seem to be time-wasting or reflect dual n-back/DNB usage (which is not relevant in the LLLT time period).
Sometimes called smart drugs, brain boosters, or memory-enhancing drugs, nootropics got their name from scientist Dr. Corneliu E. Giurgea, who coined the term and developed the compound piracetam as a brain enhancer, according to The Atlantic. The word is derived from the Greek noo, meaning mind, and trope, meaning a turning or change. In essence, all nootropics aim to change your mind by enhancing functions like memory or attention.
Brain-imaging studies are consistent with the existence of small effects that are not reliably captured by the behavioral paradigms of the literature reviewed here. Typically with executive function tasks, reduced activation of task-relevant areas is associated with better performance and is interpreted as an indication of higher neural efficiency (e.g., Haier, Siegel, Tang, Abel, & Buchsbaum, 1992). Several imaging studies showed effects of stimulants on task-related activation while failing to find effects on cognitive performance. Although changes in brain activation do not necessarily imply functional cognitive changes, they are certainly suggestive and may well be more sensitive than behavioral measures. Evidence of this comes from a study of COMT variation and executive function. Egan and colleagues (2001) found a genetic effect on executive function in an fMRI study with sample sizes as small as 11 but did not find behavioral effects in these samples. The genetic effect on behavior was demonstrated in a separate study with over a hundred participants. In sum, d-AMP and MPH measurably affect the activation of task-relevant brain regions when participants' task performance does not differ. This is consistent with the hypothesis (although by no means positive proof) that stimulants exert a true cognitive-enhancing effect that is simply too small to be detected in many studies.
This formula comes at a relatively high price; at the recommended dosage of two tablets per day with a meal, a bottle of 60 tablets provides a month's supply. Secure online purchase is available on the manufacturer's site as well as at several online retailers. Although no free trials or money-back guarantees are available at this time, the manufacturer provides free shipping if the order exceeds a certain amount. Different online retailers may offer advantages depending on the amount purchased, so some online research is advised before buying, to assess the market and find the best deal.
After my rudimentary stacking efforts flamed out in unspectacular fashion, I tried a few ready-made stacks—brand-name nootropic cocktails that offer to eliminate the guesswork for newbies. They were just as useful. And a lot more expensive. Goop's Braindust turned water into tea-flavored chalk. But it did make my face feel hot for 45 minutes. Then there were the two pills of Brain Force Plus, a supplement hawked relentlessly by Alex Jones of InfoWars infamy. The only result of those was the lingering guilt of knowing that I had willingly put $19.95 in the jorts pocket of a dipshit conspiracy theorist.
Many of these supplements include exotic-sounding ingredients. Ginseng root and an herb called bacopa are two that have shown some promising memory and attention benefits, says Dr. Guillaume Fond, a psychiatrist with France's Aix-Marseille University Medical School who has studied smart drugs and cognitive enhancement. "However, data are still lacking to definitely confirm their efficacy," he adds.
At this point I began to get bored with it and the lack of apparent effects, so I began a pilot trial: I'd use the LED set for 10 minutes every few days before 2PM, record, and in a few months look for a correlation with my daily self-ratings of mood/productivity (for 2.5 years I've asked myself at the end of each day whether I did more, the usual, or less work done that day than average, so 2=below-average, 3=average, 4=above-average; it's ad hoc, but in some factor analyses I've been playing with, it seems to load on a lot of other variables I've measured, so I think it's meaningful).
Two additional studies assessed the effects of d-AMP on visual–motor sequence learning, a form of nondeclarative, procedural learning, and found no effect (Kumari et al., 1997; Makris, Rush, Frederich, Taylor, & Kelly, 2007). In a related experimental paradigm, Ward, Kelly, Foltin, and Fischman (1997) assessed the effect of d-AMP on the learning of motor sequences from immediate feedback and also failed to find an effect.
These pills don't work. The reality is that MOST of these products don't work effectively. Maybe we're cynical, but if you simply review the published studies on memory pills, you can quickly eliminate many of the products that don't have "the right stuff." The active ingredients in brain and memory health pills are expensive and most companies sell a watered down version that is not effective for memory and focus. The more brands we reviewed, the more we realized that many of these marketers are slapping slick labels on low-grade ingredients.
Ngo has experimented with piracetam himself ("The first time I tried it, I thought, 'Wow, this is pretty strong for a supplement.' I had a little bit of reflux, heartburn, but in general it was a cognitive enhancer. . . . I found it helpful") and the neurotransmitter DMEA ("You have an idea, it helps you finish the thought. It's for when people have difficulty finishing that last connection in the brain").
It may also be necessary to ask not just whether a drug enhances cognition, but in whom. Researchers at the University of Sussex have found that nicotine improved performance on memory tests in young adults who carried one variant of a particular gene but not in those with a different version. In addition, there are already hints that the smarter you are, the less smart drugs will do for you. One study found that modafinil improved performance in a group of students whose mean IQ was 106, but not in a group with an average of 115.
Many studies suggest that Creatine helps in treating cognitive decline in individuals when combined with other therapies. It also helps people suffering from Parkinson's and Huntington's disease. Though there are minimal side effects associated with creatine, pretty much like any nootropic, it is not entirely free of side-effects. An overdose of creatine can lead to gastrointestinal issues, weight gain, stress, and anxiety.
Smart Pill is formulated with herbs, amino acids, vitamins and co-factors to provide nourishment for the brain, which may enhance memory, cognitive function, and clarity. It comes in a natural base containing a potent standardized extract of 24% flavonoid glycosides, in a fast-acting formula - a unique blend of essential nutrients, herbs and co-factors.
This mental stimulation is what increases focus and attention span in the user. The FDA-approved indications for Modafinil include extreme sleepiness due to shift work disorder. It can also be prescribed for narcolepsy and obstructive sleep apnea. Modafinil is not FDA-approved for the treatment of ADHD. Yet many medical professionals feel it is a suitable Adderall alternative.
This doesn't fit the U-curve so well: while 60mg is substantially negative as one would extrapolate from 30mg being ~0, 48mg is actually better than 15mg. But we bought the estimates of 48mg/60mg at a steep price - we ignore the influence of magnesium which we know influences the data a great deal. And the higher doses were added towards the end, so may be influenced by the magnesium starting/stopping. Another fix for the missingness is to impute the missing data. In this case, we might argue that the placebo days of the magnesium experiment were identical to taking no magnesium at all and so we can classify each NA as a placebo day, and rerun the desired analysis:
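The code that presumably followed this colon is not included here. As a rough illustration of the step being described (the original analysis was done in R; the file and column names below are placeholders, not the actual ones), a Python sketch:

import pandas as pd
import statsmodels.formula.api as smf

# Placeholder file and column names - illustrative only.
df = pd.read_csv("daily-log.csv")

# Treat days where the magnesium column is missing (NA) as placebo days,
# i.e. "no magnesium at all", as argued above.
df["Magnesium"] = df["Magnesium"].fillna(0)

# Rerun the desired analysis on the now-complete data: regress the daily
# mood/productivity rating (MP) on the treatment variables.
model = smf.ols("MP ~ LLLT + Magnesium", data=df).fit()
print(model.summary())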
Regarding other methods of cognitive enhancement, little systematic research has been done on their prevalence among healthy people for the purpose of cognitive enhancement. One exploratory survey found evidence of modafinil use by people seeking cognitive enhancement (Maher, 2008), and anecdotal reports of this can be found online (e.g., Arrington, 2008; Madrigal, 2008). Whereas TMS requires expensive equipment, tDCS can be implemented with inexpensive and widely available materials, and online chatter indicates that some are experimenting with this method.
Only two of the eight experiments reviewed in this section found that stimulants enhanced performance, on a nonverbal fluency task in one case and in Raven's Progressive Matrices in the other. The small number of studies of any given type makes it difficult to draw general conclusions about the underlying executive function systems that might be influenced.
If you could take a drug to boost your brainpower, would you? This question, faced by Bradley Cooper's character in the big-budget movie Limitless, is now facing students who are frantically revising for exams. Although they are nowhere near the strength of the drug shown in the film, mind-enhancing drugs are already on the pharmacy shelves, and many people are finding the promise of sharper thinking through chemistry highly seductive.
That said, there are plenty of studies out there that point to its benefits. One study, published in the British Journal of Pharmacology, suggests brain function in elderly patients can be greatly improved after regular dosing with Piracetam. Another study, published in the journal Psychopharmacology, found that Piracetam improved memory in most adult volunteers. And another, published in the Journal of Clinical Psychopharmacology, suggests it can help students, especially dyslexic students, improve their verbal learning skills, like reading ability and reading comprehension. Basically, researchers know it has an effect, but they don't know what or how, and pinning it down requires additional research.
This tendency is exacerbated by general inefficiencies in the nootropics market - they are manufactured for vastly less than they sell for, although the margins aren't as high as they are in other supplement markets, and not nearly as comical as illegal recreational drugs. (Global Price Fixing: Our Customers are the Enemy (Connor 2001) briefly covers the vitamin cartel that operated for most of the 20th century, forcing food-grade vitamins prices up to well over 100x the manufacturing cost.) For example, the notorious Timothy Ferriss (of The Four-hour Work Week) advises imitators to find a niche market with very high margins which they can insert themselves into as middlemen and reap the profits; one of his first businesses specialized in… nootropics & bodybuilding. Or, when Smart Powders - usually one of the cheapest suppliers - was dumping its piracetam in a fire sale of half-off after the FDA warning, its owner mentioned on forums that the piracetam was still profitable (and that he didn't really care because selling to bodybuilders was so lucrative); this was because while SP was selling 2kg of piracetam for ~$90, Chinese suppliers were offering piracetam on AliBaba for $30 a kilogram or a third of that in bulk. (Of course, you need to order in quantities like 30kg - this is more or less the only problem the middlemen retailers solve.) It goes without saying that premixed pills or products are even more expensive than the powders.
Actually, researchers are studying substances that may improve mental abilities. These substances are called "cognitive enhancers" or "smart drugs" or "nootropics." ("Nootropic" comes from Greek - "noos" = mind and "tropos" = changed, toward, turn). The supposed effects of cognitive enhancement can be several things. For example, it could mean improvement of memory, learning, attention, concentration, problem solving, reasoning, social skills, decision making and planning.
A LessWronger found that it worked well for him as far as motivation and getting things done went, as did another LessWronger who sells it online (terming it a reasonable productivity enhancer) as did one of his customers, a pickup artist oddly enough. The former was curious whether it would work for me too and sent me Speciosa Pro's Starter Pack: Test Drive (a sampler of 14 packets of powder and a cute little wooden spoon). In SE Asia, kratom's apparently chewed, but the powders are brewed as a tea.
Before you try nootropics, I suggest you start with the basics: get rid of the things in your diet and life that reduce cognitive performance first. That is easiest. Then, add in energizers like Brain Octane and clean up your diet. Then, go for the herbals and the natural nootropics. Use the pharmaceuticals selectively only after you've figured out your basics. | CommonCrawl |
December 2017, 11(6): 1027-1046. doi: 10.3934/ipi.2017047
Some remarks on the small electromagnetic inhomogeneities reconstruction problem
Batoul Abdelaziz , Abdellatif El Badia , and Ahmad El Hajj
Sorbonne University, Université de Technologie de Compiègne, Laboratoire de Mathématiques Appliquées de Compiègne LMAC, 60205 Compiègne Cedex, France
* Corresponding author: Abdellatif El Badia
Received October 2016 Revised July 2017 Published September 2017
This work considers the problem of recovering small electromagnetic inhomogeneities in a bounded domain $\Omega \subset \mathbb{R}^3$ from a single set of Cauchy data at a fixed frequency. This problem has been considered by several authors, in particular in [4]. In this paper, we revisit this work with the objective of providing another identification method and establishing stability results from a single set of Cauchy data at a fixed frequency. Our approach is based on the asymptotic expansion of the boundary condition derived in [4] and on an extension of the direct algebraic algorithm proposed in [1].
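For orientation, and only as a generic sketch of the kind of model problem meant here (not the paper's exact formulation or notation): the field $u_\alpha$ satisfies a Helmholtz-type equation in $\Omega$, perturbed by a finite number of well-separated inclusions of small diameter $\alpha$, say $\Delta u_\alpha + \omega^{2} n_\alpha(x)\, u_\alpha = 0$ in $\Omega$ with $n_\alpha$ equal to the background coefficient outside the inclusions, and the data available for the reconstruction are a single pair of Cauchy data $\big(u_\alpha|_{\partial \Omega},\ \partial_\nu u_\alpha|_{\partial \Omega}\big)$ at one fixed frequency $\omega$.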
Keywords: Inverse problem, small electromagnetic inhomogeneities, algebraic method, boundary asymptotic expansion, Hölder Stability, Helmholtz equation, Maxwell equation.
Mathematics Subject Classification: Primary: 49N45, 35J05, 35Q61, 34E05, 58K25.
Citation: Batoul Abdelaziz, Abdellatif El Badia, Ahmad El Hajj. Some remarks on the small electromagnetic inhomogeneities reconstruction problem. Inverse Problems & Imaging, 2017, 11 (6) : 1027-1046. doi: 10.3934/ipi.2017047
B. Abdelaziz, A. El Badia and A. El Hajj, Direct algorithm for multipolar sources reconstruction, Journal of Mathematical Analysis and Applications, 428 (2015), 306-336. doi: 10.1016/j.jmaa.2015.03.013. Google Scholar
H. Ammari, M. S Vogelius and D. Volkov, Asymptotic formulas for perturbations in the electromagnetic fields due to the presence of inhomogeneities of small diameter Ⅱ, Journal de Mathématiques Pures et Appliquées, 80 (2001), 769-814. doi: 10.1016/S0021-7824(01)01217-X. Google Scholar
H. Ammari and H. Kang, A new method for reconstructing electromagnetic inhomogeneities of small volume, Inverse problems, 19 (2003), 63-71. doi: 10.1088/0266-5611/19/1/304. Google Scholar
H. Ammari and H. Kang, Boundary layer techniques for solving the Helmholtz equation in the presence of small inhomogeneities, Journal of Mathematical Analysis and Applications, 296 (2004), 190-208. doi: 10.1016/j.jmaa.2004.04.003. Google Scholar
H. Ammari, H. Kang, E. Kim, M. Lim and K. Louati, A direct algorithm for ultrasound imaging of internal corrosion, SIAM Journal on Numerical Analysis, 49 (2011), 1177-1193. doi: 10.1137/100784710. Google Scholar
M. Brühl, M. Hanke and M. S Vogelius, A direct impedance tomography algorithm for locating small inhomogeneities, Numerische Mathematik, 93 (2003), 635-654. doi: 10.1007/s002110200409. Google Scholar
D. J. Cedio-Fengya, S. Moskow and M. S. Vogelius, Identification of conductivity imperfections of small diameter by boundary measurements. Continuous dependence and computational reconstruction, Inverse Problems, 14 (1998), 553-595. doi: 10.1088/0266-5611/14/3/011. Google Scholar
M. Cheney, D. Isaacson and J. C Newell, Electrical impedance tomography, SIAM Review, 41 (1999), 85-101. doi: 10.1137/S0036144598333613. Google Scholar
D. Colton and A. Kirsch, A simple method for solving inverse scattering problems in the resonance region, Inverse Problems, 12 (1996), 383-393. doi: 10.1088/0266-5611/12/4/003. Google Scholar
A. El Badia and T. Ha-Duong, An inverse source problem in potential analysis, Inverse Problems, 16 (2000), 651-663. doi: 10.1088/0266-5611/16/3/308. Google Scholar
A. El Badia and T. Nara, Inverse dipole source problem for time-harmonic Maxwell equations: algebraic algorithm and Hölder stability, Inverse Problems, 29 (2013), 015007, 19pp. doi: 10.1088/0266-5611/29/1/015007. Google Scholar
A. El Badia and A. El Hajj, Stability estimates for an inverse source problem of Helmholtz's equation from single Cauchy data at a fixed frequency, Inverse Problems, 29 (2013), 125008, 20pp. doi: 10.1088/0266-5611/29/12/125008. Google Scholar
A. Friedman and M. Vogelius, Identification of small inhomogeneities of extreme conductivity by boundary measurements: a theorem on continuous dependence, Archive for Rational Mechanics and Analysis, 105 (1989), 299-326. doi: 10.1007/BF00281494. Google Scholar
P. C. Hansen, Rank-deficient and Discrete Ill-Posed Problems, Philadelphia, PA, 1998. doi: 10.1137/1.9780898719697. Google Scholar
H. Kang and H. Lee, Identification of simple poles via boundary measurements and an application of EIT, Inverse Problems, 20 (2004), 1853-1863. doi: 10.1088/0266-5611/20/6/010. Google Scholar
A. Kirsch, The MUSIC-algorithm and the factorization method in inverse scattering theory for inhomogeneous media, Inverse Problems, 18 (2002), 1025-1040. doi: 10.1088/0266-5611/18/4/306. Google Scholar
O. Kwon, J. K. Seo and J. R. Yoon, A real time algorithm for the location search of discontinuous conductivities with one measurement, Communications on Pure and Applied Mathematics, 55 (2002), 1-29. doi: 10.1002/cpa.3009. Google Scholar
T. D. Mast, A. I. Nachman and R. C. Waag, Focusing and imaging using eigenfunctions of the scattering operator, The Journal of the Acoustical Society of America, 102 (1997), 715-725. doi: 10.1121/1.419898. Google Scholar
M. S. Vogelius and D. Volkov, Asymptotic formulas for perturbations in the electromagnetic fields due to the presence of inhomogeneities of small diameter, ESAIM: Mathematical Modelling and Numerical Analysis, 34 (2000), 723-748. doi: 10.1051/m2an:2000101. Google Scholar
D. Volkov, An Inverse Problem for the Time Harmonic Maxwell's Equations, Ph. D thesis, Rutgers The State University of New Jersey -New Brunswick, 2001. Google Scholar
Distribution of sum of random variables
Let $X_1, X_2, \dots$ be independent exponential random variables with mean $1/\mu$ and let $N$ be a discrete random variable with $P(N = k) = (1 - p)p^{k-1}$ for $k = 1, 2, \dots$ where $0 \leq p < 1$ (i.e. $N$ is a shifted geometric random variable). Show that $S$ defined as $S =\sum_{n=1}^{N}X_n$ is again exponentially distributed with parameter $(1 - p)\mu$.
My approach:
$S = \sum_{n=1}^{N}\sum_{k=1}^{\infty}\frac{1}{\mu}e^{-\frac{n}{\mu}}(1-p)p^{k-1}$
How to solve this? Is it the right approach to solve this problem?
probability probability-distributions
marcella
$\begingroup$ $N$ and $X_i$ should be independent, right? $\endgroup$ – Zhanxiong Aug 5 '15 at 20:34
$\begingroup$ Yes, that's correct. $\endgroup$ – marcella Aug 5 '15 at 20:35
Not quite, as $S$ will be a convolution of distribution functions. This is much easier to do with either characteristic or moment generating functions. Specifically, let
$M_X(t)=E[e^{tX}],$
be the moment generating function of $X$. For an exponential with parameter $\lambda$, it's not hard to show that:
$$M(t)=\frac{\lambda}{\lambda-t}, \ t<\lambda.$$
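(For completeness, the one-line computation behind this claim, valid for $t < \lambda$:)

$$M(t)=\int_0^\infty e^{tx}\,\lambda e^{-\lambda x}\,dx=\lambda\int_0^\infty e^{-(\lambda-t)x}\,dx=\frac{\lambda}{\lambda-t}.$$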
Then by independence:
\begin{align*} M_S(t)=\sum_{N\geq 1} M_X(t)^NP(N)&=\sum_{N\geq 1}\left(\frac{\lambda}{\lambda-t}\right)^N(1-p)p^{N-1}\\ &=(1-p)\left(\frac{\lambda}{\lambda-t}\right)\frac{1}{1-p\lambda/(\lambda-t)}\\ &=(1-p)\frac{\lambda}{\lambda-t-p\lambda}\\ &=\frac{\nu}{\nu-t}, \end{align*}
where $\nu:=(1-p)\lambda$, which we recognize to be the MGF of an exponential with parameter $\nu$.
Alex R.
I would calculate the characteristic function :
$$E[ e^{itS} ] = E\left[ \prod_{i=1}^{N} e^{itX_i} \right]$$
$$= \sum_{k=1}^\infty (1-p)p^{k-1} \prod_{i=1}^k \int_{\mathbb{R}^+} e^{itx} \mu e^{-\mu x} dx$$
$$= \sum_{k=1}^\infty (1-p)p^{k-1} \prod_{i=1}^k \frac{\mu}{\mu-it}$$
$$ =\frac{(1-p)}{p}\sum_{k=1}^\infty \left( \frac{ \mu p }{\mu - it} \right)^k$$
$$ = \frac{(1-p)}{p} \left( \frac{ 1 }{ 1- \frac{ \mu p }{\mu - it} } - 1 \right)$$
$$= \frac{(1-p)}{p} \frac{\mu - it - (\mu - it - \mu p)}{ \mu - it - \mu p}$$
$$ = \frac{(1-p)\mu }{ (1-p) \mu - it}$$
And this is exactly the characteristic function of an exponential variable with parameter $(1-p)\mu$.
Tryss
Your approach is in fact OK if you compute more carefully (although characteristic function method is always preferable).
Given $N = n$, $S$ has Gamma distribution $\Gamma(n, \mu)$ (see here), i.e., the conditional density of $S$ given $N = n$ is $$f_{S|N = n}(s) = \frac{\mu^n}{\Gamma(n)}s^{n - 1}e^{-\mu s}, \quad s > 0. $$ Therefore the marginal cdf of $S$ can be computed as \begin{align*} & P(S \leq x) = \sum_{n = 1}^\infty P(S \leq x|N = n)P(N = n) \\ = & \sum_{n = 1}^\infty\left(\int_0^x \frac{\mu^n}{\Gamma(n)}s^{n - 1}e^{-\mu s} ds\right) \times (1 - p)p^{n - 1} \\ = & \mu(1 - p) \int_0^x \left(\sum_{n = 1}^\infty\frac{(\mu p s)^{n - 1}}{(n - 1)!}\right)e^{-\mu s}ds \\ = & \mu(1 - p)\int_0^x e^{\mu p s}e^{-\mu s} ds \\ = & \mu(1 - p) \int_0^x e^{-\mu(1 - p)s} ds \\ = & 1 - e^{-\mu(1 - p)x}, \end{align*} which shows that $S \sim \exp(\mu(1 - p))$. The interchange of $\int$ and $\sum$ follows from the Fubini's theorem.
Zhanxiong
$\begingroup$ Thank you very much for your elegant solution. $\endgroup$ – marcella Aug 5 '15 at 21:17
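As a quick numerical sanity check on the result derived in the answers above (not part of the original thread; the parameter values are arbitrary), a small Monte Carlo sketch:

import numpy as np

rng = np.random.default_rng(0)
mu, p, trials = 2.0, 0.4, 100_000

# N has the shifted geometric law P(N = k) = (1 - p) * p**(k - 1), k = 1, 2, ...
# numpy's geometric(q) already has support {1, 2, ...} with success probability q,
# so drawing with q = 1 - p gives exactly this distribution.
N = rng.geometric(1 - p, size=trials)

# S is the sum of N independent Exponential(rate = mu) variables.
S = np.array([rng.exponential(scale=1 / mu, size=n).sum() for n in N])

print("sample mean of S :", S.mean())
print("theoretical mean :", 1 / ((1 - p) * mu))  # mean of Exp((1 - p) * mu)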
Lecture Series on Sheaves and Motives – Part III
Posted by Jeffrey Morton under category theory, cohomology, geometry, homotopy theory, localization, motives, simplicial sets, spans
This is the 100th entry on this blog! It's taken a while, but we've arrived at a meaningless but convenient milestone. This post constitutes Part III of the posts on the topics course which I shared with Susama Agarwala. In the first, I summarized the core idea in the series of lectures I did, which introduced toposes and sheaves, and explained how, at least for appropriate sites, sheaves can be thought of as generalized spaces. In the second, I described the guest lecture by John Huerta which described how supermanifolds can be seen as an example of that notion.
In this post, I'll describe the machinery I set up as part of the context for Susama's talks. The connections are a bit tangential, but it gives some helpful context for what's to come. Namely, my last couple of lectures were on sheaves with structure, and derived categories. In algebraic geometry and elsewhere, derived categories are a common tool for studying spaces. They have a cohomological flavour, because they involve sheaves of complexes (or complexes of sheaves) of abelian groups. Having talked about the background of sheaves in Part I, let's consider how these categories arise.
Structured Sheaves and Internal Constructions in Toposes
The definition of a (pre)sheaf as a functor valued in $\mathbf{Sets}$ is the basic one, but there are parallel notions for presheaves valued in categories other than $\mathbf{Sets}$ – for instance, in Abelian groups, rings, simplicial sets, complexes etc. Abelian groups are particularly important for geometry/cohomology.
But for the most part, as long as the target category can be defined in terms of sets and structure maps (such as the multiplication map for groups, face maps for simplicial sets, or boundary maps in complexes), we can just think of these in terms of objects "internal to a category of sheaves". That is, we have a definition of "abelian group object" in any reasonably nice category – in particular, any topos. Then the category of "abelian group objects in " is equivalent to a category of "abelian-group-valued sheaves on ", denoted . (As usual, I'll omit the Grothendieck topology in the notation from now on, though it's important that it is still there.)
Sheaves of abelian groups are supposed to generalize the prototypical example, namely sheaves of functions valued in abelian groups, (indeed, rings) such as , , or .
To begin with, we look at the category , which amounts to the same as the category of abelian group objects in . This inherits several properties from itself. In particular, it's an abelian category: this gives us that there is a direct sum for objects, a zero object, exact sequences split, all morphisms have kernels and cokernels, and so forth. These useful properties all hold because at each , the direct sum of sheaves of abelian group just gives , and all the properties hold locally at each .
So, sheaves of abelian groups can be seen as abelian groups in a topos of sheaves . In the same way, other kinds of structures can be built up inside the topos of sheaves, and there are corresponding "external" point of view. One good example would be simplicial objects: one can talk about the simplicial objects in , or sheaves of simplicial sets, . (Though it's worth noting that since simplicial sets model infinity-groupoids, there are more sophisticated forms of the sheaf condition which can be applied here. But for now, this isn't what we need.)
Recall that simplicial objects in a category are functors – that is, -valued presheaves on , the simplex category. This has nonnegative integers as its objects, and the morphisms from to are the order-preserving functions from to . If , we get "simplicial sets", where is the "set of -dimensional simplices". The various morphisms in turn into (composites of) the face and degeneracy maps. Simplicial sets are useful because they are a good model for "spaces".
Just as with abelian groups, simplicial objects in can also be seen as sheaves on valued in the category of simplicial sets, i.e. objects of . These things are called, naturally, "simplicial sheaves", and there is a rather extensive body of work on them. (See, for instance, the canonical book by Goerss and Jardine.)
This correspondence is just because there is a fairly obvious bunch of isomorphisms turning functors with two inputs into functors with one input returning another functor with one input:
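(The displayed equivalences seem to have been lost in formatting; schematically, writing $\mathcal{T}$ for the site and square brackets for functor categories - placeholder notation, not necessarily the original - they read $[\Delta^{op}, [\mathcal{T}^{op}, \mathbf{Sets}]] \cong [\Delta^{op} \times \mathcal{T}^{op}, \mathbf{Sets}] \cong [\mathcal{T}^{op}, [\Delta^{op}, \mathbf{Sets}]]$.)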
(These are all presheaf categories – if we put a trivial topology on , we can refine this to consider only those functors which are sheaves in every position, where we use a certain product topology on .)
Another relevant example would be complexes. This word is a bit overloaded, but here I'm referring to the sort of complexes appearing in cohomology, such as the de Rahm complex, where the terms of the complex are the sheaves of differential forms on a space, linked by the exterior derivative. A complex is a sequence of Abelian groups with boundary maps (or just for short), like so:
with the property that $d \circ d = 0$. Morphisms between these are sequences of morphisms between the terms of the complexes which commute with all the boundary maps. These all assemble into a category of complexes. We also have the (full) subcategories of complexes where all the negative (respectively, positive) terms are trivial.
One can generalize this to replace $\mathbf{Ab}$ by any category enriched in abelian groups, which is what we need to make sense of the requirement that a morphism is zero. In particular, one can generalize it to sheaves of abelian groups. This is an example where the above discussion about internalization can be extended to more than one structure at a time: "sheaves-of-(complexes-of-abelian-groups)" is equivalent to "complexes-of-(sheaves-of-abelian-groups)".
This brings us to the next point, which is that, within , the last two examples, simplicial objects and complexes, are secretly the same thing.
Dold-Puppe Correspondence
The fact I just alluded to is a special case of the Dold-Puppe correspondence, which says:
Theorem: In any abelian category , the category of simplicial objects is equivalent to the category of positive chain complexes .
The better-known name "Dold-Kan Theorem" refers to the case where . If is a category of -valued sheaves, the Dold-Puppe correspondence amounts to using Dold-Kan at each .
The point is that complexes have only coboundary maps, rather than a plethora of many different face and boundary maps, so we gain some convenience when we're looking at, for instance, abelian groups in our category of spaces, by passing to this equivalent description.
The correspondence works by way of two maps (for more details, see the book by Goerss and Jardine linked above, or see the summary here). The easy direction is the Moore complex functor, . On objects, it gives the intersection of all the kernels of the face maps:
The boundary map from this is then just . This ends up satisfying the "boundary-squared is zero" condition because of the identities for the face maps.
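(For reference, one common convention for the formulas that seem to have dropped out here - other sources use the complementary convention with $(-1)^n d_n$ as the boundary: $N(A)_n = \bigcap_{i=1}^{n} \ker\big(d_i : A_n \to A_{n-1}\big)$, with boundary $\partial = d_0$ restricted to this intersection.)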
The other direction is a little more complicated, so for current purposes, I'll leave you to follow the references above, except to say that the functor from complexes to simplicial objects in is defined so as to be adjoint to . Indeed, and together form an adjoint equivalence of the categories.
Chain Homotopies and Quasi-Isomorphisms
One source of complexes in mathematics is in cohomology theories. So, for example, there is de Rham cohomology, where one starts with the complex whose terms are the spaces of smooth differential forms on some smooth manifold, with the exterior derivatives as the coboundary maps. But no matter which complex you start with, there is a sequence of cohomology groups, because we have a sequence of cohomology functors:
given by the quotients $H^n(A^\bullet) = \ker\big(d^n : A^n \to A^{n+1}\big)\,/\,\mathrm{im}\big(d^{n-1} : A^{n-1} \to A^n\big)$.
That is, it's the cocycles (things whose coboundary is zero), up to equivalence where cocycles are considered equivalent if their difference is a coboundary (i.e. something which is itself the coboundary of something else). In fact, these assemble into a functor , since there are natural transformations between these functors
which just come from the restrictions of the to the kernel . (In fact, this makes the maps trivial – but the main point is that this restriction is well-defined on equivalence classes, and so we get an actual complex again.) The fact that we get a functor means that any chain map gives a corresponding .
Now, the original motivation of cohomology for a space, like the de Rham cohomology of a manifold, is to measure something about the topology of that space. If it is trivial (say, a contractible space), then its cohomology groups are all trivial. In the general setting, we say that a complex is acyclic if all of its cohomology groups are trivial. But of course, this doesn't mean that the chain itself is zero.
More generally, just because two complexes have isomorphic cohomology doesn't mean they are themselves isomorphic, but we say that a chain map is a quasi-isomorphism if the map it induces on cohomology is an isomorphism. The idea is that, as far as we can tell from the information that cohomology detects, it might as well be an isomorphism.
Now, for spaces, as represented by simplicial sets, we have a similar notion: a map between spaces is a quasi-isomorphism if it induces an isomorphism on cohomology. Then the key thing is the Whitehead Theorem (viz), which in this language says:
Theorem: If is a quasi-isomorphism, it is a homotopy equivalence.
That is, it has a homotopy inverse , which means there is a homotopy .
What about for complexes? We said that in an abelian category, simplicial objects and complexes are equivalent constructions by the Dold-Puppe correspondence. However, the question of what is homotopy equivalent to what is a bit more complicated in the world of complexes. The convenience we gain when passing from simplicial objects to the simpler structure of complexes must be paid for it with a little extra complexity in describing what corresponds to homotopy equivalences.
The usual notion of a chain homotopy between two maps $f, g : A^\bullet \to B^\bullet$ is a collection of maps which shift degrees, $h^n : A^n \to B^{n-1}$, such that $d_B \circ h + h \circ d_A = f - g$. That is, the coboundary of $h$ is the difference between $f$ and $g$. (The "co" version of the usual intuition of a homotopy, whose ingoing and outgoing boundaries are the things which are supposed to be homotopic).
The Whitehead theorem doesn't work for chain complexes: the usual "naive" notion of chain homotopy isn't quite good enough to correspond to the notion of homotopy in spaces. (There is some discussion of this in the nLab article on the subject. That is the reason for…
Derived Categories
Taking "derived categories" for some abelian category can be thought of as analogous, for complexes, to finding the homotopy category for simplicial objects. It compensates for the fact that taking a quotient by chain homotopy doesn't give the same "homotopy classes" of maps of complexes as the corresponding operation over in spaces.
That is, simplicial sets, as a model category, know everything about the homotopy type of spaces: so taking simplicial objects in is like internalizing the homotopy theory of spaces in a category . So, if what we're interested in are the homotopical properties of spaces described as simplicial sets, we want to "mod out" by homotopy equivalences. However, we have two notions which are easy to describe in the world of complexes, which between them capture the notion "homotopy" in simplicial sets. There are chain homotopies and quasi-isomorphisms. So, naturally, we mod out by both notions.
So, suppose we have an abelian category . In the background, keep in mind the typical example where , and even where for some reasonably nice space , if it helps to picture things. Then the derived category of is built up in a few steps:
Take the category of complexes. (This stands in for "spaces in " as above, although we've dropped the " ", so the correct analogy is really with spectra. This is a bit too far afield to get into here, though, so for now let's just ignore it.)
Take morphisms only up to homotopy equivalence. That is, define the equivalence relation with whenever there is a homotopy with . Then is the quotient by this relation.
Localize at quasi-isomorphisms. That is, formally throw in inverses for all quasi-isomorphisms , to turn them into actual isomorphisms. The result is .
(Since we have direct sums of complexes (componentwise), it's also possible to think of the last step as defining , where is the category of acyclic complexes – the ones whose cohomology complexes are zero.)
Explicitly, the morphisms of can be thought of as "zig-zags" in ,
where all the left-pointing arrows are quasi-isomorphisms. (The left-pointing arrows are standing in for their new inverses in , pointing right.) This relates to the notion of a category of spans: in a reasonably nice category, we can always compose these zig-zags to get one of length two, with one leftward and one rightward arrow. In general, though, this might not happen.
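(Schematically, the missing zig-zag picture, with placeholder objects and $\simeq$ marking quasi-isomorphisms, is $A^\bullet \xleftarrow{\ \simeq\ } C_1^\bullet \longrightarrow C_2^\bullet \xleftarrow{\ \simeq\ } \cdots \longrightarrow B^\bullet$, which in nice cases composes down to a single roof $A^\bullet \xleftarrow{\ \simeq\ } C^\bullet \longrightarrow B^\bullet$.)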
Now, the point here is that this is a way of extracting "homotopical" or "cohomological" information about , and hence about if or something similar. In the next post, I'll talk about Susama's series of lectures, on the subject of motives. This uses some of the same technology described above, in the specific context of schemes (which introduces some extra considerations specific to that world). It's aim is to produce a category (and a functor into it) which captures all the cohomological information about spaces – in some sense a universal cohomology theory from which any other can be found.
Talk by John Huerta – The Functor of Points approach to Supermanifolds
Posted by Jeffrey Morton under sheaves, Supergeometry, toposes
John Huerta visited here for about a week earlier this month, and gave a couple of talks. The one I want to write about here was a guest lecture in the topics course Susama Agarwala and I were teaching this past semester. The course was about topics in category theory of interest to geometry, and in the case of this lecture, "geometry" means supergeometry. It follows the approach I mentioned in the previous post about looking at sheaves as a kind of generalized space. The talk was an introduction to a program of seeing supermanifolds as a kind of sheaf on the site of "super-points". This approach was first proposed by Albert Schwartz, though see, for instance, this review by Christophe Sachse for more about this approach, and this paper (comparing the situation for real and complex (super)manifolds) for more recent work.
It's amazing how many geometrical techniques can be applied in quite general algebras once they're formulated correctly. It's perhaps less amazing for supermanifolds, in which commutativity fails in about the mildest possible way. Essentially, the algebras in question split into bosonic and fermionic parts. Everything in the bosonic part commutes with everything, and the fermionic part commutes "up to a negative sign" within itself.
Supermanifolds
Supermanifolds are geometric objects, which were introduced as a setting on which "supersymmetric" quantum field theories could be defined. Whether or not "real" physics has this symmetry (the evidence is still pending, though ), these are quite nicely behaved theories. (Throwing in extra symmetry assumptions tends to make things nicer, and supersymmetry is in some sense the maximum extra symmetry we might reasonably hope for in a QFT).
Roughly, the idea is that supermanifolds are spaces like manifolds, but with some non-commuting coordinates. Supermanifolds are therefore in some sense "noncommutative spaces". Noncommutative algebraic or differential geometry starts with various dualities to the effect that some category of spaces is equivalent to the opposite of a corresponding category of algebras – for instance, a manifold $M$ corresponds to the algebra $C^\infty(M)$ of smooth functions on it. So a generalized category of "spaces" can be found by dropping the "commutative" requirement from that statement. The category of supermanifolds only weakens the condition slightly: the algebras are $\mathbb{Z}_2$-graded, and are "supercommutative", i.e. commute up to a sign which depends on the grading.
Now, the conventional definition of supermanifolds, as with schemes, is to say that they are spaces equipped with a "structure sheaf" which defines an appropriate class of functions. For ordinary (real) manifolds, this would be the sheaf assigning to an open set $U$ the ring of all the smooth real-valued functions on it. The existence of an atlas of charts for the manifold amounts to saying that the structure sheaf locally looks like $C^\infty(U)$ for some open set $U \subseteq \mathbb{R}^n$. (For fixed dimension $n$.)
For supermanifolds, the condition on the local rings says that, for fixed dimension $p|q$, a $p|q$-dimensional supermanifold has a structure sheaf whose local rings look like
In this, is as above, and the notation
refers to the exterior algebra, which we can think of as polynomials in the , with the wedge product, which satisfies . The idea is that one is supposed to think of this as the algebra of smooth functions on a space with ordinary dimensions, and "anti-commuting" dimensions with coordinates . The commuting variables, say , are called "bosonic" or "even", and the anticommuting ones are "fermionic" or "odd". (The term "fermionic" is related to the fact that, in quantum mechanics, when building a Hilbert space for a bunch of identical fermions, one takes the antisymmetric part of the tensor product of their individual Hilbert spaces, so that, for instance, ).
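(The displayed local model seems to have been lost to formatting; the standard expression it describes, with $U \subseteq \mathbb{R}^p$ a coordinate patch and $\theta_1, \dots, \theta_q$ the odd generators - placeholder names - is $C^\infty(U) \otimes \Lambda^\bullet[\theta_1, \dots, \theta_q]$, with $\theta_i \theta_j = -\theta_j \theta_i$ and in particular $\theta_i^2 = 0$.)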
The structure sheaf picture can therefore be thought of as giving an atlas of charts, so that the neighborhoods locally look like "super-domains", the super-geometry equivalent of open sets .
In fact, there's a long-known theorem of Batchelor which says that any real supermanifold is given exactly by the algebra of "global sections", which looks like . That is, sections in the local rings ("functions on" open neighborhoods of ) always glue together to give a section in .
Another way to put this is that every supermanifold can be seen as just bundle of exterior algebras. That is, a bundle over a base manifold , whose fibres are the "super-points" corresponding to . The base space is called the "reduced" manifold. Any such bundle gives back a supermanifold, where the algebras in the structure sheaf are the algebras of sections of the bundle.
One shouldn't be too complacent about saying they are exactly the same, though: this correspondence isn't functorial. That is, the maps between supermanifolds are not just bundle maps. (Also, Batchelor's theorem works only for real, not for complex, supermanifolds, where only the local neighborhoods necessarily look like such bundles).
Why, by the way, say that is a super "point", when is a whole vector space? Since the fermionic variables are anticommuting, no term can have more than one of each , so this is a finite-dimensional algebra. This is unlike , which suggests that the noncommutative directions are quite different. Any element of is nilpotent, so if we think of a Taylor series for some function – a power series in the – we note that no term has a coefficient for greater than 1, or of degree higher than in all the – so one imagines that only infinitesimal behaviour in these directions exists at all. Thus, a supermanifold is like an ordinary -dimensional manifold , built from the ordinary domains , equipped with a bundle whose fibres are a sort of "infinitesimal fuzz" about each point of the "even part" of the supermanifold, described by the .
But this intuition is a bit vague. We can sharpen it a bit using the functor of points approach…
Supermanifolds as Manifold-Valued Sheaves
As with schemes, there is also a point of view that sees supermanifolds as "ordinary" manifolds, constructed in the topos of sheaves over a certain site. The basic insight behind the picture of these spaces, as in the previous post, is based on the fact that the Yoneda lemma lets us think of sheaves as describing all the "probes" of a generalized space (actually an algebra in this case). The "probes" are the objects of a certain category, and are called "superpoints".
This category is just the opposite of the category of Grassmann algebras (i.e. exterior algebras) – that is, polynomial algebras in noncommuting variables, like $\Lambda_q = \Lambda[\theta_1, \dots, \theta_q]$. These objects naturally come with a $\mathbb{Z}_2$-grading, whose two parts are spanned, respectively, by the monomials with even and odd degree: $\Lambda_q = (\Lambda_q)_0 \oplus (\Lambda_q)_1$
This is a $\mathbb{Z}_2$-grading since the even ones commute with anything, and the odd ones anti-commute with each other. So if $x$ and $y$ are homogeneous (live entirely in one grade or the other), then $xy = (-1)^{|x||y|} yx$.
The $\Lambda_q$ should be thought of as the $0|q$-dimensional supermanifold: it looks like a point, with a $q$-dimensional fermionic tangent space (the "infinitesimal fuzz" noted above) attached. The morphisms in this category from $\Lambda_q$ to $\Lambda_r$ are just the grade-preserving algebra homomorphisms from $\Lambda_r$ to $\Lambda_q$. There are quite a few of these: these objects are not terminal objects like the actual point. But this makes them good probes. This gets to be a site with the trivial topology, so that all presheaves are sheaves.
Then, as usual, a presheaf on this category is to be understood as giving, for each object , the collection of maps from to a space . The case gives the set of points of , and the various other algebras give sets of " -points". This term is based on the analogy that a point of a topological space (or indeed element of a set) is just the same as a map from the terminal object , the one point space (or one element set). Then an " -point" of a space is just a map from another object . If is not terminal, this is close to the notion of a "subspace" (though a subspace, strictly, would be a monomorphism from ). These are maps from in , or as algebra maps, consists of all the maps .
What's more, since this is a functor, we have to have a system of maps between the . For any algebra maps , we should get corresponding maps . These are really algebra maps , of which there are plenty, all determined by the images of the generators .
Now, really, a sheaf on is actually just what we might call a "super-set", with sets for each . To make super-manifolds, one wants to say they are "manifold-valued sheaves". Since manifolds themselves don't form a topos, one needs to be a bit careful about defining the extra structure which makes a set a manifold.
Thus, a supermanifold is a manifold constructed in the topos . That is, must also be equipped with a topology and a collection of charts defining the manifold structure. These are all construed internally using objects and morphisms in the category of sheaves, where charts are based on super-domains, namely those algebras which look like , for an open subset of .
The reduced manifold which appears in Batchelor's theorem is the manifold of ordinary points . That is, it is all the -points, where is playing the role of functions on the zero-dimensional domain with just one point. All the extra structure in an atlas of charts for all of to make it a supermanifold amounts to putting the structure of ordinary manifolds on the – but in compatible ways.
(Alternatively, we could have described as sheaves in , where is a site of "superdomains", and put all the structure defining a manifold into . But working over super-points is preferable for the moment, since it makes it clear that manifolds and supermanifolds are just manifestations of the same basic definition, but realized in two different toposes.)
The fact that the manifold structure on the must be put on them compatibly means there is a relatively nice way to picture all these spaces.
Values of the Functor of Points as Bundles
The main idea which I find helps to understand the functor of points is that, for every superpoint (i.e. for every Grassmann algebra ), one gets a manifold . (Note the convention that is the odd dimension of , and is the odd dimension of the probe superpoint).
Just as every supermanifold is a bundle of superpoints, every manifold is a perfectly conventional vector bundle over the conventional manifold of ordinary points. So for each , we get a bundle, .
Now this manifold, , consists exactly of all the "points" of – this tells us immediately that is not a category of concrete sheaves (in the sense I explained in the previous post). Put another way, it's not a concrete category – that would mean that there is an underlying set functor, which gives a set for each object, and that morphisms are determined by what they do to underlying sets. Non-concrete categories are, by nature, trickier to understand.
However, the functor of points gives a way to turn the non-concrete into a tower of concrete manifolds , and the morphisms between various amount to compatible towers of maps between the various for each . The fact that the compatibility is controlled by algebra maps explains why this is the same as maps between these bundles of superpoints.
Specifically, then, we have
This splits into maps of the even parts, and of the odd parts, where the Grassmann algebra has even and odd parts: , as above. Similarly, splits into odd and even parts, and since the functions on are entirely even, this is:
Now, the duality of "hom" and tensor means that , and algebra maps preserve the grading. So we just have tensor products of these with the even and odd parts, respectively, of the probe superpoint. Since the even part includes the multiples of the constants, part of this just gives a copy of itself. The remaining part of is nilpotent (since it's made of even-degree polynomials in the nilpotent ), so what we end up with, looking at the bundle over an open neighborhood , is:
The projection map is the obvious projection onto the first factor. These assemble into a bundle over .
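To record in symbols what this local computation gives (my own reconstruction and notation, since the displayed formulas have not survived here, and conventions may differ slightly): for a supermanifold of dimension $p|q$ probed by the superpoint with algebra $\Lambda_r$, over a chart $U \subset \mathbb{R}^p$ one expects

$M(\Lambda_r)\big|_U \;\cong\; U \times \big(\Lambda_r^{\mathrm{even,\,nil}}\big)^{p} \times \big(\Lambda_r^{\mathrm{odd}}\big)^{q},$

with the bundle projection onto the reduced manifold given by forgetting the nilpotent factors.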
We should think of these bundles as "shifting up" the nilpotent part of (which are invisible at the level of ordinary points in ) by the algebra . Writing them this way makes it clear that this is functorial in the superpoints : given choices and , and any morphism between the corresponding and , it's easy to see how we get maps between these bundles.
Now, maps between supermanifolds are the same thing as natural transformations between the functors of points. These include maps of the base manifolds, along with maps between the total spaces of all these bundles. More, this tower of maps must commute with all those bundle maps coming from algebra maps . (In particular, since , the ordinary point, is one of these, they have to commute with the projection to .) These conditions may be quite restrictive, but they leave us with, at least, a quite concrete image of what maps of supermanifolds look like.
Super-Poincaré Group
One of the main settings where super-geometry appears is in so-called "supersymmetric" field theories, which is a concept that makes sense when fields live on supermanifolds. Supersymmetry, and symmetries associated to super-Lie groups, is exactly the kind of thing that John has worked on. A super-Lie group, of course, is a supermanifold that has the structure of a group (i.e. it's a Lie group in the topos of presheaves over the site of super-points – so the discussion above means it can be thought of as a big tower of Lie groups, all bundles over a Lie group ).
In fact, John has mostly worked with super-Lie algebras (and the connection between these and division algebras, though that's another story). These are -graded algebras with a Lie bracket whose commutation properties are the graded version of those for an ordinary Lie algebra. But part of the value of the framework above is that we can simply borrow results from Lie theory for manifolds, import them into the new topos , and know at once that super-Lie algebras integrate up to super-Lie groups in just the same way that happens in the old topos (of sets).
Supersymmetry refers to a particular example, namely the "super-Poincaré group". Just as the Poincaré group is the symmetry group of Minkowski space, a 4-manifold with a certain metric on it, the super-Poincaré group has the same relation to a certain supermanifold. (There are actually a few different versions, depending on the odd dimension.) The algebra is generated by infinitesimal translations and boosts, plus some "translations" in fermionic directions, which generate the odd part of the algebra.
Now, symmetry in a quantum theory means that this algebra (or, on integration, the corresponding group) acts on the Hilbert space of possible states of the theory: that is, the space of states is actually a representation of this algebra. In fact, to make sense of this, we need a super-Hilbert space (i.e. a graded one). The even generators of the algebra then produce grade-preserving self-maps of , and the odd generators produce grade-reversing ones. (This fact that there are symmetries which flip the "bosonic" and "fermionic" parts of the total is why supersymmetric theories have "superpartners" for each particle, with the opposite parity, since particles are labelled by irreducible representations of the Poincaré group and the gauge group).
To date, so far as I know, there's no conclusive empirical evidence that real quantum field theories actually exhibit supersymmetry, such as detecting actual super-partners for known particles. Even if not, however, it still has some use as a way of developing toy models of quite complicated theories which are more tractable than one might expect, precisely because they have lots of symmetry. It's somewhat like how it's much easier to study computationally difficult theories like gravity by assuming, for instance, spherical symmetry as an extra assumption. In any case, from a mathematician's point of view, this sort of symmetry is just a particularly simple case of symmetries for theories which live on noncommutative backgrounds, which is quite an interesting topic in its own right. As usual, physics generates lots of math which remains both true and interesting whether or not it applies in the way it was originally suggested.
In any case, what the functor-of-points viewpoint suggests is that ordinary and super- symmetries are just two special cases of "symmetries of a field theory" in two different toposes. Understanding these and other examples from this point of view seems to give a different understanding of what "symmetry", one of the most fundamental yet slippery concepts in mathematics and science, actually means.
Lecture Series on Sheaves and Motives – Part I (Sheaves as Spaces)
Posted by Jeffrey Morton under category theory, geometry, sheaves, smooth spaces, Supergeometry, toposes
This semester, Susama Agarwala and I have been sharing a lecture series for graduate students. (A caveat: there are lecture notes there, by student request, but they're rough notes, and contain some mistakes and omissions, and represent a very selective view of the subject.) Being a "topics" course, it consists of a few different sections, loosely related, which revolve around the theme of categorical tools which are useful for geometry (and topology).
What this has amounted to is: I gave a half-semester worth of courses on toposes, sheaves, and the basics of derived categories. Susama is now giving the second half, which is about motives. This post will talk about the part of the course I gave. Though this was a whole series of lectures which introduced all these topics more or less carefully, I want to focus here on the part of the lecture which built up to a discussion of sheaves as spaces. Nothing here, or in the two posts to follow, is particularly new, but they do amount to a nice set of snapshots of some related ideas.
Coming up soon: John Huerta is currently visiting Hamburg, and on July 8, he gave a guest-lecture which uses some of this machinery to talk about supermanifolds, which will be the subject of the next post in this series. In a later post, I'll talk about Susama's lectures about motives and how this relates to the discussion here (loosely).
Grothendieck Toposes
The first half of our course was about various aspects of Grothendieck toposes. In the first lecture, I talked about "Elementary" (or Lawvere-Tierney) toposes. One way to look at these is to say that they are categories which have all the properties of the category of Sets which make it useful for doing most of ordinary mathematics. Thus, a topos in this sense is a category with a bunch of properties – there are various equivalent definitions, but for example, toposes have all finite limits (in particular, products), and all colimits.
More particularly, they have "power objects". That is, if and are objects of , then there is an object , with an "evaluation map" , which makes it possible to think of as the object of "morphisms from A to B".
The other main thing a topos has is a "subobject classifier". Now, a subobject of is an equivalence class of monomorphisms into – think of sets, where this amounts to specifying the image, and the monomorphisms are the various inclusions which pick out the same subset as their image. A classifier for subobjects should be thought of as something like the two-element set , whose elements we can call "true" and "false". Then every subset of corresponds to a characteristic function . In general, a subobject classifier is an object together with a map from the terminal object, , such that every inclusion of a subobject is a pullback of along a characteristic function.
Now, elementary toposes were invented chronologically later than Grothendieck toposes, which are a special class of example. These are categories of sheaves on (Grothendieck) sites. A site is a category together with a "topology" , which is a rule which, for each , picks out , a set of collections of maps into , called sieves for . These collections have to satisfy certain conditions, but the idea can be understood in terms of the basic example, . Given a topological space, is the category whose objects are the open sets , and the morphisms are all the inclusions. Then the condition is that each collection in is an open cover of – that is, a bunch of inclusions of open sets, which together cover all of in the usual sense.
(This is a little special to , where every map is an inclusion – in a general site, the covering collections need to be closed under composition with any other morphism (like an ideal in a ring). So for instance, in , the category of topological spaces, the usual choice of consists of all collections of maps which are jointly surjective.)
The point is that a presheaf on is just a functor . That is, it's a way of assigning a set to each . So, for instance, for either of the cases we just mentioned, one has , which assigns to each open set the set of all bounded functions on , and to every inclusion the restriction map. Or, again, one has , which assigns the set of all continuous functions.
These two examples illustrate the condition which distinguishes those presheaves which are sheaves – namely, those which satisfy some "gluing" conditions. Thus, suppose we're given an open cover , and a choice of one element from each , which form a "matching family" in the sense that they agree when restricted to any overlaps. Then the sheaf condition says that there's a unique "amalgamation" of this family – that is, one element which restricts to all the under the maps .
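A toy illustration of the gluing condition (a hypothetical finite example, not one from the lectures): cover a five-element "space" by two overlapping subsets, and take the presheaf assigning to each subset all functions on it; a matching family glues to a unique amalgamation.

U1 = {0, 1, 2}
U2 = {2, 3, 4}
f1 = {0: 'a', 1: 'b', 2: 'c'}     # a section over U1
f2 = {2: 'c', 3: 'd', 4: 'e'}     # a section over U2

def restrict(f, V):
    return {x: f[x] for x in V}

# Matching family: the sections agree on the overlap...
assert restrict(f1, U1 & U2) == restrict(f2, U1 & U2)

# ...so there is a unique amalgamation over the union,
glued = {**f1, **f2}
# which restricts back to the sections we started with.
assert restrict(glued, U1) == f1 and restrict(glued, U2) == f2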
Sheaves as Generalized Spaces
There are various ways of looking at sheaves, but for the purposes of the course on categorical methods in geometry, I decided to emphasize the point of view that they are a sort of generalized spaces.
The intuition here is that all the objects and morphisms in a site have corresponding objects and morphisms in . Namely, the objects appear as the representable presheaves, , and the morphisms show up as the induced natural transformations between these functors. This map is called the Yoneda embedding. If is at all well-behaved (as it is in all the examples we're interested in here), these presheaves will always be sheaves: the image of lands in .
In this case, the Yoneda embedding embeds as a sub-category of . What's more, it's a full subcategory: all the natural transformations between representable presheaves come from the morphisms of -objects in a unique way. So is, in this sense, a generalization of itself.
More precisely, it's the Yoneda lemma which makes sense of all this. The idea is to start with the way ordinary -objects (from now on, just call them "spaces") become presheaves: they become functors which assign to each the set of all maps into . So the idea is to turn this around, and declare that even non-representable sheaves should have the same interpretation. The Yoneda Lemma makes this a sensible interpretation: it says that, for any presheaf , and any , the set is naturally isomorphic to : that is, literally is the collection of morphisms from (or rather, its image under the Yoneda embedding) to a "generalized space" . (See also Tom Leinster's nice discussion of the Yoneda Lemma if this isn't familiar.) We describe as a "probe" object: one probes the space by mapping into it in various ways. Knowing the results for all tells you all about the "space" . (Thus, for instance, one can get all the information about the homotopy type of a space if you know all the maps into it from spheres of all dimensions up to homotopy. So spheres are acting as "probes" to reveal things about the space.)
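On a very small site one can even check the Yoneda statement by brute force. Here is a toy sketch (a hypothetical example of my own: the poset of subsets of a two-point set, with the presheaf of {0,1}-valued functions): enumerating all natural transformations out of the representable presheaf of a subset U gives exactly one for each element of F(U).

from itertools import product

objs = [frozenset(s) for s in ([], [0], [1], [0, 1])]

def leq(V, U):
    # there is a (unique) morphism V -> U exactly when V is a subset of U;
    # the representable presheaf h_U therefore has one "probe" from each V <= U
    return V <= U

def F_of(V):
    # F(V): all functions V -> {0, 1}, written as dicts
    return [dict(zip(sorted(V), vals)) for vals in product([0, 1], repeat=len(V))]

def F_restrict(f, W):
    return {x: f[x] for x in W}

def nat_transformations(U):
    # Enumerate natural transformations h_U -> F by brute force: choose an
    # element eta_V of F(V) for each V <= U, compatibly with restriction.
    Vs = [V for V in objs if leq(V, U)]
    for choice in product(*[F_of(V) for V in Vs]):
        eta = dict(zip(Vs, choice))
        if all(F_restrict(eta[V], W) == eta[W]
               for V in Vs for W in Vs if leq(W, V)):
            yield eta

U = frozenset([0, 1])
# The Yoneda lemma predicts exactly |F(U)| natural transformations h_U -> F,
# each determined by the single element eta_U.
assert len(list(nat_transformations(U))) == len(F_of(U))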
Furthermore, since is a topos, it is often a nicer category than the one you start with. It has limits and colimits, for instance, which the original category might not have. For example, if the kind of spaces you want to generalize are manifolds, one doesn't have colimits, such as the space you get by gluing together two lines at a point. The sheaf category does. Likewise, the sheaf category has exponentials, and manifolds don't (at least not without the more involved definitions needed to allow infinite-dimensional manifolds).
These last remarks about manifolds suggest the motivation for the first example…
Diffeological Spaces
The lecture I gave about sheaves as spaces used this paper by John Baez and Alex Hoffnung about "smooth spaces" (they treat Souriau's diffeological spaces, and the different but related Chen spaces in the same framework) to illustrate the point. In that case, the objects of the sites are open (or, for Chen spaces, convex) subsets of , for all choices of ; the maps are the smooth maps in the usual sense (i.e. the sense to be generalized), and the covers are jointly surjective collections of maps.
Now, that example is a somewhat special situation: they talk about concrete sheaves, on concrete sites, and the resulting categories are only quasitoposes – a slightly weaker condition than being a topos, but one still gets a useful collection of spaces, which among other things include all manifolds. The "concreteness" condition is that the site has a terminal object to play the role of "the point". Being a concrete sheaf then means that all the "generalized spaces" have an underlying set of points (namely, the set of maps from the point object), and that all morphisms between the spaces are completely determined by what they do to the underlying set of points. This means that the "spaces" really are just sets with some structure.
Now, if the site happens to be , then we have a slightly different intuition: the "generalized" spaces are something like generalized bundles over , and the "probes" are now sections of such a bundle. A simple example would be an actual sheaf of functions: these are sections of a trivial bundle, since, say, -valued functions are sections of the bundle . Given a nontrivial bundle , there is a sheaf of sections – on each , one gets to be all the maps which are one-sided inverses of the projection . For a generic sheaf, we can imagine a sort of "generalized bundle" over .
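Here is a toy version of that picture (a made-up discrete example): a "Mobius-like" double cover of a four-vertex cycle, where sections exist over any proper arc but there is no global section – exactly the sort of behaviour the sheaf of sections of a nontrivial bundle exhibits.

from itertools import product

V = [0, 1, 2, 3]
# Edges of a 4-cycle; the last edge is "twisted", so the fibre {+1, -1}
# gets flipped when transported across it (a discrete Mobius double cover).
E = [(0, 1, +1), (1, 2, +1), (2, 3, +1), (3, 0, -1)]

def sections(S):
    # Sections over the "open set" S: assignments of a sheet to each vertex
    # of S, compatible along every edge lying inside S.
    S = set(S)
    out = []
    for vals in product([+1, -1], repeat=len(S)):
        s = dict(zip(sorted(S), vals))
        if all(s[v] == t * s[u] for (u, v, t) in E if u in S and v in S):
            out.append(s)
    return out

print(len(sections([0, 1, 2])))   # sections exist over an arc (2 of them)
print(len(sections(V)))           # but there is no global section (0)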
Another example of the fact that sheaves can be seen as spaces is the category of schemes: these are often described as topological spaces which are themselves equipped with a sheaf of rings. "Scheme" is to algebraic geometry what "manifold" is to differential geometry: a kind of space which looks locally like something classical and familiar. Schemes, in some neighborhood of each point, must resemble varieties – i.e. the locus of zeroes of some algebraic function on $\mathbb{k}^n$. For varieties, the rings attached to neighborhoods are rings of algebraic functions on this locus, which will be a quotient of the ring of polynomials.
But another way to think of schemes is as concrete sheaves on a site whose objects are varieties and whose morphisms are algebraic maps. This is dual to the other point of view, just as thinking of diffeological spaces as sheaves is dual to a viewpoint in which they're seen as topological spaces equipped with a notion of "smooth function".
(Some general discussion of this in a talk by Victor Piercey)
These two viewpoints (defining the structure of a space by a class of maps into it, or by a class of maps out of it) in principle give different definitions. To move between them, you really need everything to be concrete: the space has an underlying set, the set of probes is a collection of real set-functions. Likewise, for something like a scheme, you'd need the ring for any open set to be a ring of actual set-functions. In this case, one can move between the two descriptions of the space as long as there is a pre-existing concept of the right kind of function on the "probe" spaces. Given a smooth space, say, one can define a sheaf of smooth functions on each open set by taking those whose composites with every probe are smooth. Conversely, given something like a scheme, where the structure sheaf is of function rings on each open subspace (i.e. the sheaf is representable), one can define the probes from varieties to be those which give algebraic functions when composed with every function in these rings. Neither of these will work in general: the two approaches define different categories of spaces (in the smooth context, see Andrew Stacey's comparison of various categories of smooth spaces, defined either by specifying the smooth maps in, or out, or both). But for very concrete situations, they fit together neatly.
The concrete case is therefore nice for getting an intuition for what it means to think of sheaves as spaces. For sheaves which aren't concrete, morphisms aren't determined by what they do to the underlying points, i.e. the forgetful "underlying set" functor isn't faithful. Here, we might think of a "generalized space" which looks like two copies of the same topological space: the sheaf gives two different elements of for each map of underlying sets. We could think of such a generalized space as built from sets equipped with extra "stuff" (say, a set consisting of pairs – so it consists of a "blue" copy of X and a "green" copy of X, but the underlying set functor ignores the colouring).
Still, useful as they may be to get a first handle on this concept of sheaf as generalized space, one shouldn't rely on these intuitions too much: if doesn't even have a "point" object, there is no underlying set functor at all. Eventually, one simply has to get used to the idea of defining a space by the information revealed by probes.
In the next post, I'll talk more about this in the context of John Huerta's guest lecture, applying this idea to the category of supermanifolds, which can be seen as manifolds built internal to the topos of (pre)sheaves on a site whose objects are called "super-points".
Seminar on the Cobordism Hypothesis and (Infinity,n)-Categories
Posted by Jeffrey Morton under category theory, groupoids, higher dimensional algebra, homotopy theory, tqft
Well, it's been a while, but it's now a new semester here in Hamburg, and I wanted to go back and look at some of what we talked about in last semester's research seminar. This semester, Susama Agarwala and I are sharing the teaching in a topics class on "Category Theory for Geometry", in which I'll be talking about categories of sheaves, and building up the technology for Susama to talk about Voevodsky's theory of motives (enough to give a starting point to read something like this).
As for last semester's seminar, one of the two main threads, the one which Alessandro Valentino and I helped to organize, was a look at some of the material needed to approach Jacob Lurie's paper on the classification of topological quantum field theories. The idea was for the research seminar to present the basic tools that are used in that paper to a larger audience, mostly of graduate students – enough to give a fairly precise statement, and develop the tools needed to follow the proof. (By the way, for a nice and lengthier discussion by Chris Schommer-Pries about this subject, which includes more details on much of what's in this post, check out this video.)
So: the key result is a slightly generalized form of the Cobordism Hypothesis.
Cobordism Hypothesis
The sorts of theories which the paper classifies are those which "extend down to a point". So what does this mean? A topological field theory can be seen as a sort of "quantum field theory up to homotopy", which abstracts away any geometric information about the underlying space where the fields live – their local degrees of freedom. We do this by looking only at the classes of fields up to the diffeomorphism symmetries of the space. The local, geometric, information gets thrown away by taking this quotient of the space of solutions.
In spite of reducing the space of fields this way, we want to capture the intuition that the theory is still somehow "local", in that we can cut up spaces into parts and make sense of the theory on those parts separately, and determine what it does on a larger space by gluing pieces together, rather than somehow having to take account of the entire space at once, indissolubly. This reasoning should apply to the highest-dimensional space, but also to boundaries, and to any figures we draw on boundaries when cutting them up in turn.
Carrying this on to the logical end point, this means that a topological quantum field theory in the fully extended sense should assign some sort of data to every geometric entity from a zero-dimensional point up to an -dimensional cobordism. This is all expressed by saying it's an -functor:
Well, once we know what this means, we'll know (in principle) what a TQFT is. It's less important, for the purposes of Lurie's paper, what is than what is. The reason is that we want to classify these field theories (i.e. functors). It will turn out that has the sort of structure that makes it easy to classify the functors out of it into any target -category . A guess about what kind of structure is actually there was expressed by Baez and Dolan as the Cobordism Hypothesis. It's been slightly rephrased from the original form to get a form which has a proof. The version Lurie proves says:
The -category is equivalent to the free symmetric monoidal -category generated by one fully-dualizable object.
The basic point is that, since is a free structure, the classification means that the extended TQFT's amount precisely to the choice of a fully-dualizable object of (which includes a choice of a bunch of morphisms exhibiting the "dualizability"). However, to make sense of this, we need to have a suitable idea of an -category, and know what a fully dualizable object is. Let's begin with the first.
-Categories
In one sense, the Cobordism Hypothesis, which was originally made about -categories at a time when these were only beginning to be defined, could be taken as a criterion for an acceptable definition. That is, it expressed an intuition which was important enough that any definition which wouldn't allow one to prove the Cobordism Hypothesis in some form ought to be rejected. To really make it work, one had to bring in the "infinity" part of -categories. The point here is that we are talking about category-like structures which have morphisms between objects, 2-morphisms between morphisms, and so on, with -morphisms between -morphisms for every possible degree. The inspiration for this comes from homotopy theory, where one has maps, homotopies of maps, homotopies of homotopies, etc.
Nowadays, there are several possible concrete models for -categories (see this survey article by Julie Bergner for a summary of four of them). They are all equivalent definitions, in a suitable up-to-homotopy way, but for purposes of the proof, Lurie is taking the definition that an -category is an n-fold complete Segal space. One theme that shows up in all the definitions is that of simplicial methods. (In our seminar, we started with a series of two talks introducing the notions of simplicial sets, simplicial objects in a category, and Kan complexes. If you don't already know this, essentially everything we need is nicely explained in here.)
One of the underlying ideas is that a category can be associated with a simplicial set, its nerve , where the set of -dimensional simplexes is just the set of composable -tuples of morphisms in . If is a groupoid (everything is invertible), then the simplicial set is a Kan complex – it satisfies some filling conditions, which ensure that any morphism has an inverse. Not every Kan complex is the nerve of a groupoid, but one can think of them as weak versions of groupoids – -groupoids, or -categories – where the higher morphisms may not be completely trivial (as with a groupoid), but where at least they're all invertible. This leads to another desirable feature in any definition of -category, which is the Homotopy Hypothesis: that the -category of -categories, also called -groupoids, should be equivalent (in the same weak sense) to a category of Hausdorff spaces with some other nice properties, which we call for short. This is true of Kan complexes.
Thus, up to homotopy, specifying an -groupoid is the same as specifying a space.
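As a small illustration of the nerve construction (a quick Python sketch for the poset category 0 ≤ 1 ≤ 2; the indexing conventions for the face maps are one common choice, not necessarily the ones used in the seminar):

objs = [0, 1, 2]
# Morphisms of the poset category, as (source, target) pairs;
# in a poset there is at most one morphism between any two objects.
mors = [(a, b) for a in objs for b in objs if a <= b]

def compose(g, f):
    # g after f, defined when the target of f is the source of g
    assert f[1] == g[0]
    return (f[0], g[1])

def nerve(k):
    # k-simplices of the nerve: strings of k composable morphisms
    if k == 0:
        return [(o,) for o in objs]
    simplices = [(f,) for f in mors]
    for _ in range(k - 1):
        simplices = [s + (g,) for s in simplices for g in mors if s[-1][1] == g[0]]
    return simplices

def face(i, simplex):
    # outer faces drop an end morphism; inner faces compose two adjacent ones
    if i == 0:
        return simplex[1:]
    if i == len(simplex):
        return simplex[:-1]
    return simplex[:i - 1] + (compose(simplex[i], simplex[i - 1]),) + simplex[i + 1:]

print(len(nerve(1)), len(nerve(2)))   # 6 morphisms, 10 composable pairs
print(face(1, ((0, 1), (1, 2))))      # the inner face composes the pair: ((0, 2),)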
The data which defines a Segal space (which was however first explicitly defined by Charles Rezk) is a simplicial space : for each , there are spaces , thought of as the space of composable -tuples of morphisms. To keep things tame, we suppose that , the space of objects, is discrete – that is, we have only a set of objects. Being a simplicial space means that the come equipped with a collection of face maps , which we should think of as compositions: to get from an -tuple to an -tuple of morphisms, one can compose two morphisms together at any of positions in the tuple.
One condition which a simplicial space has to satisfy to be a Segal space – the "weakening" which makes a Segal space a weaker notion than just a category – lies in the fact that the cannot be arbitrary, but must be homotopy equivalent to the "actual" space of -tuples, which is a strict pullback . That is, in a Segal space, the pullback which defines these tuples for a category is weakened to be a homotopy pullback. Combining this with the various face maps, we therefore get a weakened notion of composition: . Because we start by replacing the space of -tuples with the homotopy-equivalent , the composition rule will only satisfy all the relations which define composition (associativity, for instance) up to homotopy.
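In symbols (my reconstruction of the standard form of this condition, since the displayed formulas are missing above): the Segal maps

$X_n \;\longrightarrow\; \underbrace{X_1 \times_{X_0} \cdots \times_{X_0} X_1}_{n \text{ factors}}$

are required to be weak homotopy equivalences, so that composition $X_1 \times_{X_0} X_1 \xleftarrow{\;\simeq\;} X_2 \xrightarrow{\;d_1\;} X_1$ is only defined after choosing a homotopy inverse, and is therefore only associative up to homotopy.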
To be complete, the Segal space must have a notion of equivalence for which agrees with that for Kan complexes seen as -groupoids. In particular, there is a sub-simplicial object , which we understand to consist of the spaces of invertible -morphisms. Since there should be nothing interesting happening above the top dimension, we ask that, for these spaces, the face and degeneracy maps are all homotopy equivalences: up to homotopy, the space of invertible higher morphisms has no new information.
Then, an -fold complete Segal space is defined recursively, just as one might define -categories (without the infinitely many layers of invertible morphisms "at the top"). In that case, we might say that a double category is just a category internal to : it has a category of objects, and a category of morphisms, and the various maps and operations, such as composition, which make up the definition of a category are all defined as functors. That turns out to be the same as a structure with objects, horizontal and vertical morphisms, and square-shaped 2-cells. If we insist that the category of objects is discrete (i.e. really just a set, with no interesting morphisms), then the result amounts to a 2-category. Then we can define a 3-category to be a category internal to (whose 2-category of objects is discrete), and so on. This approach really defines an -fold category (see e.g. Chapter 5 of Cheng and Lauda to see a variation of this approach, due to Tamsamani and Simpson), but imposing the condition that the objects really amount to a set at each step gives exactly the usual intuition of a (strict!) -category.
This is exactly the approach we take with -fold complete Segal spaces, except that some degree of weakness is automatic. Since a C.S.S. is a simplicial object with some properties (we separately define objects of -tuples of morphisms for every , and all the various composition operations), the same recursive approach leads to a definition of an " -fold complete Segal space" as simply a simplicial object in -fold C.S.S.'s (with the same properties), such that the objects form a set. In principle, this gives a big class of "spaces of morphisms" one needs to define – one for every -fold product of simplexes of any dimension – but all those requirements that any space of objects "is just a set" (i.e. is homotopy-equivalent to a discrete set of points) simplifies things a bit.
Cobordism Category as -Category
So how should we think of cobordisms as forming an -category? There are a few stages in making a precise definition, but the basic idea is simple enough. One starts with manifolds and cobordisms embedded in some fixed finite-dimensional vector space , and then takes a limit over all . In each , the coordinates of the factor give ways of cutting the cobordism into pieces, and gluing them back together defines composition in a different direction. Now, this won't actually produce a complete Segal space: one has to take a certain kind of completion. But the idea is intuitive enough.
We want to define an -fold C.S.S. of cobordisms (and cobordisms between cobordisms, and so on, up to -morphisms). To start with, think of the case : then the space of objects of consists of all embeddings of a -dimensional manifold into . The space of -simplexes (of -tuples of morphisms) consists of all ways of cutting up a -dimensional cobordism embedded in by choosing , where we think of the cobordism having been glued from two pieces, where at the slice , we have the object where the two pieces were composed. (One has to be careful to specify that the Morse function on the cobordisms, got by projection onto , has its critical points away from the – the generic case – to make sure that the objects where gluing happens are actual manifolds.)
Now, what about the higher morphisms of the -category? The point is that one needs to have an -groupoid – that is, a space! – of morphisms between two cobordisms and . To make sense of this, we just take the space of diffeomorphisms – not just as a set of morphisms, but including its topology as well. The higher morphisms, therefore, can be thought of precisely as paths, homotopies, homotopies between homotopies, and so on, in these spaces. So the essential difference between the 1-category of cobordisms and the -category is that in the first case, morphisms are diffeomorphism classes of cobordisms, whereas in the latter, the higher morphisms are made precisely of the space of diffeomorphisms which we quotient out by in the first case.
Now, -categories can have non-invertible morphisms between morphisms all the way up to dimension , after which everything is invertible. An -fold C.S.S. does this by taking the definition of a complete Segal space and copying it inside -fold C.S.S's: that is, one has an -fold Complete Segal Space of -tuples of morphisms, for each , they form a simplicial object, and so forth.
Now, if we want to build an -category of cobordisms, the idea is the same, except that we have a simplicial object, in a category of simplicial objects, and so on. However, the way to define this is essentially similar. To specify an -fold C.S.S., we have to specify a whole collection of spaces associated to cobordisms equipped with embeddings into . In particular, for each tuple , we have the space of such embeddings, such that for each one has special points along the coordinate axis. These are the ways of breaking down a given cobordism into a composite of pieces. Again, one has to make sure that the critical points of the Morse functions defined by the projections onto these coordinate axes avoid these special points which define the manifolds where gluing takes place. The composition maps which make these into a simplicial object are quite natural – they just come by deleting special points.
Finally, we take a limit over all (to get around limits to embeddings due to the dimension of ). So we know (at least abstractly) what the -category of cobordisms should be. The cobordism hypothesis claims it is equivalent to one defined in a free, algebraically-flavoured way, namely as the free symmetric monoidal -category on a fully-dualizable object. (That object is "the point" – which, up to the kind of homotopically-flavoured equivalence that matters here, is the only object when our highest-dimensional cobordisms have dimension ).
Dualizability
So what does that mean, a "fully dualizable object"?
First, to get the idea, let's think of the 1-dimensional example. Instead of " -category", we would like to just think of this as a statement about a category. Then is the 1-category of framed bordisms. For a manifold (or cobordism, which is a manifold with boundary), a framing is a trivialization of the tangent bundle. That is, it amounts to a choice of isomorphism at each point between the tangent space there and the corresponding . So the objects of are collections of (signed) points, and the morphisms are equivalence classes of framed 1-dimensional cobordisms. These amount to oriented 1-manifolds with boundary, where the points (objects) on the boundary are the source and target of the cobordism.
Now we want to classify what TQFT's live on this category. These are functors . We have two generating objects, and , the two signed points. A TQFT must assign these objects vector spaces, which we'll call and . Collections of points get assigned tensor products of all the corresponding vector spaces, since the functor is monoidal, so knowing these two vector spaces determines what does to all objects.
What does do to morphisms? Well, some generating morphisms of interest are cups and caps: these are lines which connect a positive to a negative point, but thought of as cobordisms taking two points to the empty set, and vice versa. That is, we have an evaluation:
and a coevaluation:
Now, since cobordisms are taken up to equivalence, which in particular includes topological deformations, we get a bunch of relations which these have to satisfy. The essential one is the "zig-zag" identity, reflecting the fact that a bent line can be straightened out, and we have the same 1-morphism in . This implies that:
is the same as the identity. This in turn means that the evaluation and coevaluation maps define a nondegenerate pairing between and . The fact that this exists means two things. First, is the dual of : . Second, this only makes sense if both and its dual are finite dimensional (since the evaluation will just be the trace map, which is not even defined on the identity if is infinite dimensional).
On the other hand, once we know , this determines up to isomorphism, as well as the evaluation and coevaluation maps. In fact, this turns out to be enough to specify entirely. The classification then is: 1-D TQFT's are classified by finite-dimensional vector spaces . Crucially, what made finiteness important is the existence of the dual and the (co)evaluation maps which express the duality. This statement is what is generalized to say that -dimensional TQFT's are classified by "fully" dualizable objects.
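The zig-zag identity is easy to check numerically in the vector-space case. A minimal sketch in Python with numpy (my own toy example), taking V = R^3 with its standard basis, where both evaluation and coevaluation are represented by the identity matrix:

import numpy as np

n = 3
# coevaluation 1 |-> sum_i e_i (x) e^i, and evaluation <e^i, e_j> = delta_ij,
# both written as the n x n identity matrix in the standard basis
coev = np.eye(n)
ev = np.eye(n)

v = np.random.rand(n)
# the "zig-zag" composite V -> V (x) V* (x) V -> V: tensor with coev on the
# left, then contract the last two factors with ev
zigzag = np.einsum('ij,jk,k->i', coev, ev, v)
assert np.allclose(zigzag, v)   # straightening the bent line gives the identity

# composing evaluation with coevaluation the other way around gives the trace,
# i.e. dim(V) -- which is why this only makes sense for finite-dimensional V
assert np.isclose(np.einsum('ij,ji->', coev, ev), n)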
In an -category, to say that an object is "fully dualizable" means more than that the object has a dual (which, itself, implies the existence of the morphisms and ). It also means that and have duals themselves – or rather, since we're talking about morphisms, "adjoints". This in turn implies the existence of 2-morphisms which are the unit and counit of the adjunctions (the defining properties are essentially the same as those for morphisms which define a dual). In fact, every time we get a morphism of degree less than in this process, "fully dualizable" means that it too must have a dual (i.e. an adjoint).
This does run out eventually, though, since we only require this goes up to dimension : the -morphisms which this forces to exist (quite a few) aren't required to have duals. This is good, because if they were, since all the higher morphisms available are invertible, this would mean that the dual -morphisms would actually be weak inverses (that is, their composite is isomorphic to the identity)… But that would mean that the dual -morphisms which forced them to exist would also be weak inverses (their composite would be weakly isomorphic to the identity)… and so on! In fact, if the property of "having duals" didn't stop, then everything would be weakly invertible: we'd actually have a (weak) -groupoid!
Classifying TQFT
So finally, the point of the Cobordism Hypothesis is that a (fully extended) TQFT is a functor out of this into some target -category . There are various options, but whatever we pick, the functor must assign something in to the point, say , and something to each of and , as well as all the higher morphisms which must exist. Then functoriality means that all these images have to again satisfy the properties which make a fully dualizable object. Furthermore, since is the free gadget with all these properties on the single object , this is exactly what it means that is a functor. Saying that is fully dualizable, by implication, includes all the choices of morphisms like etc. which show it as fully dualizable. (Conceivably one could make the same object fully dualizable in more than one way – these would be different functors).
So an extended -dimensional TQFT is exactly the choice of a fully dualizable object , for some -category . This object is "what the TQFT assigns to a point", but if we understand the structure of the object as a fully dualizable object, then we know what the TQFT assigns to any other manifold of any dimension up to , the highest dimension in the theory. This is how this algebraic characterization of cobordisms helps to classify such theories.
Moving to Hamburg; Talk in Brno: 2-Symmetry of Moduli Spaces
Posted by Jeffrey Morton under 2-groups, algebra, category theory, double categories, gauge theory, higher gauge theory, moduli spaces, talks
Since I moved to Hamburg, Alessandro Valentino and I have been organizing one series of seminar talks whose goal is to bring people (mostly graduate students, and some postdocs and others) up to speed on the tools used in Jacob Lurie's big paper on the classification of TQFT and proof of the Cobordism Hypothesis. This is part of the Forschungsseminar ("research seminar") for the working groups of Christoph Schweigert, Ingo Runkel, and Christoph Wockel. First, I gave one talk introducing myself and what I've done on Extended TQFT. In our main series, we've had four so far – two in which Alessandro outlined a sketch of what Lurie's result is, and another two by Sebastian Novak and Marc Palm that started catching our audience up on the simplicial methods used in the theory of -categories which it uses. Coming up in the New Year, Nathan Bowler and I will be talking about first -categories, and then -categories. I'll do a few posts summarizing the talks around then.
Some people in the group have done some work on quantum field theories with defects, in relation to which, there's this workshop coming up here in February! The idea here is that one could have two regions of space where different field theories apply, which are connected along a boundary. We might imagine these are theories which are different approximations to what's going on physically, with a different approximation useful in each region. Whatever the intuition, the regions will be labelled by some category, and boundaries between regions are labelled by functors between categories. Where different boundary walls meet, one can have natural transformations. There's a whole theory of how a 3D TQFT can be associated to modular tensor categories, in sort of the same sense that a 2D TQFT is associated to a Frobenius algebra. This whole program is intimately connected with the idea of "extending" a given TQFT, in the sense that it deals with theories that have inputs which are spaces (or, in the case of defects, sub-spaces of given ones) of many different dimensions. Lurie's paper describing the n-dimensional cobordism category is very much related to the input to a theory like this.
Brno Visit
This time, I'd like to mention something which I began working on with Roger Picken in Lisbon, and talked about for the first time in Brno, Czech Republic, where I was invited to visit at Masaryk University. I was in Brno for a week or so, and on Thursday, December 13, I gave this talk, called "Higher Gauge Theory and 2-Group Actions". But first, some pictures!
This fellow was near the hotel I stayed in:
Since this sculpture is both faceless and hard at work on nonspecific manual labour, I assume he's a Communist-era artwork, but I don't really know for sure.
The Christmas market was on in Náměstí Svobody (Freedom Square) in the centre of town. This four-headed dragon caught my eye:
On the way back from Brno to Hamburg, I met up with my wife to spend a couple of days in Prague. Here's the Christmas market in the Old Town Square of Prague:
Anyway, it was a good visit to the Czech Republic. Now, about the talk!
Moduli Spaces in Higher Gauge Theory
The motivation which I tried to emphasize is to define a specific, concrete situation in which to explore the concept of "2-Symmetry". The situation is supposed to be, if not a realistic physical theory, then at least one which has enough physics-like features to give a good proof of concept argument that such higher symmetries should be meaningful in nature. The idea is that Higher Gauge theory is a field theory which can be understood as one in which the possible (classical) fields on a space/spacetime manifold consist of maps from that space into some target space . For the topological theory, they are actually just homotopy classes of maps. This is somewhat related to Sigma models used in theoretical physics, and mathematically to Homotopy Quantum Field Theory, which considers these maps as geometric structure on a manifold. An HQFT is a functor taking such structured manifolds and cobordisms into Hilbert spaces and linear maps. In the paper Roger and I are working on, we don't talk about this stage of the process: we're just considering how higher-symmetry appears in the moduli spaces for fields of this kind, which we think of in terms of Higher Gauge Theory.
Ordinary topological gauge theory – the study of flat connections on -bundles for some Lie group – can be looked at this way. The target space is the "classifying space" of the Lie group – homotopy classes of maps in are the same as groupoid homomorphisms in . Specifically, the pair of functors and relating groupoids and topological spaces are adjoints. Now, this deals with the situation where is a homotopy 1-type, which is to say that it has a fundamental groupoid , and no other interesting homotopy groups. To deal with more general target spaces , one should really deal with infinity-groupoids, which can capture the whole homotopy type of – in particular, all its higher homotopy groups at once (and various relations between them). What we're talking about in this paper is exactly one step in that direction: we deal with 2-groupoids.
We can think of this in terms of maps into a target space which is a 2-type, with nontrivial fundamental groupoid , but also interesting second homotopy group (and nothing higher). These fit together to make a 2-groupoid , which is a 2-group if is connected. The idea is that is the classifying space of some 2-group , which plays the role of the Lie group in gauge theory. It is the "gauge 2-group". Homotopy classes of maps into correspond to flat connections in this 2-group.
For practical purposes, we use the fact that there are several equivalent ways of describing 2-groups. Two very directly equivalent ways to define them are as group objects internal to , or as categories internal to – which have a group of objects and a group of morphisms, and group homomorphisms that define source, target, composition, and so on. This second way is fairly close to the equivalent formulation as crossed modules . The definition is in the slides, but essentially the point is that is the group of objects, and with the action , one gets the semidirect product which is the group of morphisms. The map makes it possible to speak of and acting on each other, and that these actions "look like conjugation" (the precise meaning of which is in the defining properties of the crossed module).
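The two crossed-module axioms are easy to verify on the simplest example, where H = G, the map from H to G is the identity, and G acts on H by conjugation. A quick check in Python, with S_3 written as permutation tuples (my own toy encoding, not from the paper):

from itertools import permutations

# Elements of S_3 as tuples p, acting on {0, 1, 2} by i |-> p[i]
G = list(permutations(range(3)))

def mul(p, q):            # composition: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    out = [0, 0, 0]
    for i, pi in enumerate(p):
        out[pi] = i
    return tuple(out)

# Crossed module data: H = G, the structure map is the identity,
# and G acts on H by conjugation
H = G
partial = lambda h: h
act = lambda g, h: mul(mul(g, h), inv(g))

for g in G:
    for h in H:
        # equivariance: partial(g |> h) = g partial(h) g^{-1}
        assert partial(act(g, h)) == mul(mul(g, partial(h)), inv(g))
for h in H:
    for k in H:
        # Peiffer identity: partial(h) |> k = h k h^{-1}
        assert act(partial(h), k) == mul(mul(h, k), inv(h))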
The reason for looking at the crossed-module formulation is that it then becomes fairly easy to understand the geometric nature of the fields we're talking about. In ordinary gauge theory, a connection can be described locally as a 1-form with values in , the Lie algebra of . Integrating such forms along curves gives another way to describe the connection, in terms of a rule assigning to every curve a holonomy valued in which describes how to transport something (generally, a fibre of a bundle) along the curve. It's somewhat nontrivial to say how this relates to the classic definition of a connection on a bundle, which can be described locally on "patches" of the manifold via 1-forms together with gluing functions where patches overlap. The resulting categories are equivalent, though.
In higher gauge theory, we take a similar view. There is a local view of "connections on gerbes", described by forms and gluing functions (the main difference in higher gauge theory is that the gluing functions relate to higher cohomology). But we will take the equivalent point of view where the connection is described by -valued holonomies along paths, and -valued holonomies over surfaces, for a crossed module , which satisfy some flatness conditions. These amount to 2-functors of 2-categories .
The moduli space of all such 2-connections is only part of the story. 2-functors are related by natural transformations, which are in turn related by "modifications". In gauge theory, the natural transformations are called "gauge transformations", and though the term doesn't seem to be in common use, the obvious term for the next layer would be "gauge modifications". It is possible to assemble a 2-groupoid , whose space of objects is exactly the moduli space of 2-connections, and whose 1- and 2-morphisms are exactly these gauge transformations and modifications. So the question is, what is the meaning of the extra information contained in the 2-groupoid which doesn't appear in the moduli space itself?
Our claim is that this information expresses how the moduli space carries "higher symmetry".
2-Group Actions and the Transformation Double Category
What would it mean to say that something exhibits "higher" symmetry? A rudimentary way to formalize the intuition of "symmetry" is to say that there is a group (of "symmetries") which acts on some object. One could get more subtle, but this should be enough to begin with. We already noted that "higher" gauge theory uses 2-groups (and beyond into -groups) in the place of ordinary groups. So in this context, the natural way to interpret it is by saying that there is an action of a 2-group on something.
Just as there are several equivalent ways to define a 2-group, there are different ways to say what it means for it to have an action on something. One definition of a 2-group is to say that it's a 2-category with one object and all morphisms and 2-morphisms invertible. This definition makes it clear that a 2-group has to act on an object of some 2-category . For our purposes, just as we normally think of group actions on sets, we will focus on 2-group actions on categories, so that is the 2-category of interest. Then an action is just a map:
The unique object of – let's call it – gets taken to some object . This object is the thing being "acted on" by . The existence of the action implies that there are automorphisms for every morphism in (which correspond to the elements of the group of the crossed module). This would be enough to describe ordinary symmetry, but the higher symmetry is also expressed in the images of 2-morphisms , which we might call 2-symmetries relating 1-symmetries.
What we want to do in our paper, which the talk summarizes, is to show how this sort of 2-group action gives rise to a 2-groupoid (actually, just a 2-category when the being acted on is a general category). Then we claim that the 2-groupoid of connections can be seen as one that shows up in exactly this way. (In the following, I have to give some credit to Dany Majard for talking this out and helping to find a better formalism.)
To make sense of this, we use the fact that there is a diagrammatic way to describe the transformation groupoid associated to the action of a group on a set . The set of morphisms is built as a pullback of the action map, .
This means that morphisms are pairs , thought of as going from to . The rule for composing these is another pullback. The diagram which shows how it's done appears in the slides. The whole construction ends up giving a cubical diagram in , whose top and bottom faces are mere commuting diagrams, and whose four other faces are all pullback squares.
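A minimal sketch of the transformation groupoid construction (a toy example of my own: Z_3 acting on a three-element set by rotation) – morphisms are pairs (g, x) going from x to g.x, and composition just adds the group elements:

# Transformation groupoid for the action of Z_3 on X = {0, 1, 2} by rotation
G = [0, 1, 2]                       # Z_3, written additively
X = [0, 1, 2]
act = lambda g, x: (x + g) % 3

# Morphisms are pairs (g, x), the pullback of the action map along projection
morphisms = [(g, x) for g in G for x in X]
src = lambda m: m[1]
tgt = lambda m: act(m[0], m[1])

def compose(m2, m1):
    # (g2, g1.x) after (g1, x) is (g2 + g1, x); defined when tgt(m1) == src(m2)
    assert src(m2) == tgt(m1)
    return ((m2[0] + m1[0]) % 3, m1[1])

m1 = (1, 0)          # 0 --> 1
m2 = (2, 1)          # 1 --> 0
assert src(compose(m2, m1)) == 0 and tgt(compose(m2, m1)) == 0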
To construct a 2-category from a 2-group action is similar. For now we assume that the 2-group action is strict (rather than being given by a weak 2-functor). In this case, it's enough to think of our 2-group not as a 2-category, but as a group-object in – the same way that a 1-group, as well as being a category, can be seen as a group object in . The set of objects of this category is the group of morphisms of the 2-category, and the morphisms make up the group of 2-morphisms. Being a group object is the same as having all the extra structure making up a 2-group.
To describe a strict action of such a on , we just reproduce in the diagram that defines an action in :
The fact that is an action just means this commutes. In principle, we could define a weak action, which would mean that this commutes up to isomorphism, but we won't be looking at that here.
Constructing the same diagram which describes the structure of a transformation groupoid (p29 in the slides for the talk), we get a structure with a "category of objects" and a "category of morphisms". The construction in gives us directly a set of morphisms, while itself is the set of objects. Similarly, in , the category of objects is just , while the construction gives a category of morphisms.
The two together make a category internal to , which is to say a double category. By analogy with , we call this double category .
We take as the category of objects, as the "horizontal category", whose morphisms are the horizontal arrows of the double category. The category of morphisms of shows up by letting its objects be the vertical arrows of the double category, and its morphisms be the squares. These look like this:
The vertical arrows are given by pairs of objects , and just like the transformation 1-groupoid, each corresponds to the fact that the action of takes to . Each square (morphism in the category of morphisms) is given by a pair of morphisms, one from (given by an element in ), and one from .
The horizontal arrow on the bottom of this square is:
The fact that these are equal is exactly the fact that is a natural transformation.
The double category turns out to have a very natural example which occurs in higher gauge theory.
Higher Symmetry of the Moduli Space
The point of the talk is to show how the 2-groupoid of connections, previously described as , can be seen as coming from a 2-group action on a category – the objects of this category being exactly the connections. In the slides above, for various reasons, we did this in a discretized setting – a manifold with a decomposition into cells. This is useful for writing things down explicitly, but not essential to the idea behind the 2-symmetry of the moduli space.
The point is that there is a category we call , whose objects are the connections: these assign -holonomies to edges of our discretization (in general, to paths), and -holonomies to 2D faces. (Without discretization, one would describe these in terms of -valued 1-forms and -valued 2-forms.)
The morphisms of are one type of "gauge transformation": namely, those which assign -holonomies to edges. (Or: -valued 1-forms). They affect the edge holonomies of a connection just like a 2-morphism in . Face holonomies are affected by the -value that comes from the boundary of the face.
What's physically significant here is that both objects and morphisms of describe nonlocal geometric information. They describe holonomies over edges and surfaces: not what happens at a point. The "2-group of gauge transformations", which we call , on the other hand, is purely about local transformations. If is the vertex set of the discretized manifold, then : one copy of the gauge 2-group at each vertex. (Keeping this finite dimensional and avoiding technical details was one main reason we chose to use a discretization. In principle, one could also talk about the 2-group of -valued functions, whose objects and morphisms, thinking of it as a group object in , are functions valued in morphisms of .)
Now, the way acts on is essentially by conjugation: edge holonomies are affected by pre- and post-multiplication by the values at the two vertices on the edge – whether objects or morphisms of . (Face holonomies are unaffected). There are details about this in the slides, but the important thing is that this is a 2-group of purely local changes. The objects of are gauge transformations of this other type. In a continuous setting, they would be described by -valued functions. The morphisms are gauge modifications, and could be described by -valued functions.
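Here is a stripped-down sketch of just the 1-group layer of this local action (a hypothetical toy, with invertible 2x2 matrices standing in for the gauge group on a three-vertex graph): edge holonomies transform by pre- and post-multiplication at the endpoints, and the holonomy around a loop changes only by conjugation at its basepoint, so its trace is gauge invariant.

import numpy as np

vertices = ['a', 'b', 'c']
edges = [('a', 'b'), ('b', 'c'), ('a', 'c')]

def random_invertible():
    return np.random.rand(2, 2) + 2 * np.eye(2)   # generically invertible

# A (discretized) connection: a group element on each edge
holonomy = {e: random_invertible() for e in edges}
# A purely local gauge transformation: a group element at each vertex
gauge = {v: random_invertible() for v in vertices}

# The action is by pre- and post-multiplication at the edge's endpoints
transformed = {(s, t): gauge[t] @ holonomy[(s, t)] @ np.linalg.inv(gauge[s])
               for (s, t) in edges}

# Holonomy around the triangle a -> b -> c -> a changes only by conjugation
# at 'a', so its trace is unchanged by the gauge transformation
loop = lambda h: np.linalg.inv(h[('a', 'c')]) @ h[('b', 'c')] @ h[('a', 'b')]
assert np.allclose(np.trace(loop(holonomy)), np.trace(loop(transformed)))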
The main conceptual point here is that we have really distinguished between two kinds of gauge transformation, which are the horizontal and vertical arrows of the double category . This expresses the 2-symmetry by moving some gauge transformations into the category of connections, and others into the 2-group which acts on it. But physically, we would like to say that both are "gauge transformations". So one way to do this is to "collapse" the double category to a bicategory: just formally allow horizontal and vertical arrows to compose, so that there is only one kind of arrow. Squares become 2-cells.
So then if we collapse the double category expressing our 2-symmetry relation this way, the result is exactly equivalent to the functor category way of describing connections. (The morphisms will all be invertible because is a groupoid and is a 2-group).
I'm interested in this kind of geometrical example partly because it gives a good way to visualize something new happening here. There appears to be some natural 2-symmetry on this space of fields, which is fairly easy to see geometrically, and distinguishes in a fundamental way between two types of gauge transformation. This sort of phenomenon doesn't occur in the world of – a set has no morphisms, after all, so the transformation groupoid for a group action on it is much simpler.
In broad terms, this means that 2-symmetry has qualitatively new features that familiar old 1-symmetry doesn't have. Higher categorical versions – -groups acting on -groupoids, as might show up in more complicated HQFT – will certainly be even more complicated. The 2-categorical version is just the first non-trivial situation where this happens, so it gives a nice starting point to understand what's new in higher symmetry that we didn't already know.
Higher Structures in China
Posted by Jeffrey Morton under algebra, categorification, cohomology, conferences, double categories, geometry, groupoids, quantization
Since the last post, I've been busily attending some conferences, as well as moving to my new job at the University of Hamburg, in the Graduiertenkolleg 1670, "Mathematics Inspired by String Theory and Quantum Field Theory". The week before I started, I was already here in Hamburg, at the conference they were organizing "New Perspectives in Topological Quantum Field Theory". But since I last posted, I was also at the 20th Oporto Meeting on Geometry, Topology, and Physics, as well as the third Higher Structures in China workshop, at Jilin University in Changchun. Right now, I'd like to say a few things about some of the highlights of that workshop.
Higher Structures in China III
So last year I had a bunch of discussions with Chenchang Zhu and Weiwei Pan, who at the time were both in Göttingen, about my work with Jamie Vicary, which I wrote about last time when the paper was posted to the arXiv. In that, we showed how the Baez-Dolan groupoidification of the Heisenberg algebra can be seen as a representation of Khovanov's categorification. Chenchang and Weiwei and I had been talking about how these ideas might extend to other examples, in particular to give nice groupoidifications of categorified Lie algebras and quantum groups.
That is still under development, but I was invited to give a couple of talks on the subject at the workshop. It was a long trip: from Lisbon, the farthest west of the main cities of (continental) Eurasia, all the way to one of the farthest east. (Not quite the farthest, but Changchun is in the northeast of China, just a few hours north of Korea, and it took just about exactly 24 hours including stopovers to get there). It was a long way to go for a three-day workshop, but there were also three days of a big excursion to Changbai Mountain, just on the border with North Korea, for hiking and general touring around. So that was a sort of holiday, with 11 other mathematicians. Here is me with Dany Majard, in a national park along the way to the mountains:
Here's me with Alex Hoffnung, on Changbai Mountain (in the background is China):
And finally, here's me a little to the left of the previous picture, where you can see into the volcanic crater. The lake at the bottom is cut out of the picture, but you can see the crater rim, of which this particular part is in North Korea, as seen from China:
Well, that was fun!
Anyway, the format of the workshop involved some talks from foreigners and some from locals, with a fairly big local audience including a good many graduate students from Jilin University. So they got a chance to see some new work being done elsewhere – mostly in categorification of one kind or another. We got a chance to see a little of what's being done in China, although not as much as we might have liked. I gather that not much is being done yet that fits the theme of the workshop, which was part of the reason to organize the workshop, and especially for having a session aimed specially at the graduate students.
Categorified Algebra
This is a sort of broad term, but certainly would include my own talk. The essential point is to show how the groupoidification of the Heisenberg algebra is a representation of Khovanov's categorification of the same algebra, in a particular 2-category. The emphasis here is on the fact that it's a representation in a 2-category whose objects are groupoids, but whose morphisms aren't just functors, but spans of functors – that is, composites of functors and co-functors. This is a pretty conservative weakening of "representations on categories" – but it lets one build really simple combinatorial examples. I've discussed this general subject in recent posts, so I won't elaborate too much. The lecture notes are here, if you like, though – they have more detail than my previous post, but are less technical than the paper with Jamie Vicary.
Aaron Lauda gave a nice introduction to the program of categorifying quantum groups, mainly through the example of the special case , somewhat along the same lines as in his introductory paper on the subject. The story which gives the motivation is nice: one has knot invariants such as the Jones polynomial, based on representations of groups and quantum groups. The Jones polynomial can be categorified to give Khovanov homology (which assigns a complex to a knot, whose graded Euler characteristic is the Jones polynomial) – but also assigns maps of complexes to cobordisms of knots. One then wants to categorify the representation theory behind it – to describe actions of, for instance, quantum on categories. This starting point is nice, because it can work by just mimicking the construction of and representations in terms of weight spaces: one gets categories which correspond to the "weight spaces" (usually just vector spaces), and the and operators give functors between them, and so forth.
Finding examples of categories and functors with this structure, and satisfying the right relations, gives "categorified representations" of the algebra – the monoidal categories of diagrams which are the "categorifications of the algebra" then are seen as the abstraction of exactly which relations these are supposed to satisfy. One such example involves flag varieties. A flag, as one might eventually guess from the name, is a nested collection of subspaces in some -dimensional space. A simple example is the Grassmannian , which is the space of all 1-dimensional subspaces of (i.e. the projective space ), which is of course an algebraic variety. Likewise, , the space of all -dimensional subspaces of is a variety. The flag variety consists of all pairs , of a -dimensional subspace of , inside a -dimensional subspace (the case calls to mind the reason for the name: a plane intersecting a given line resembles a flag stuck to a flagpole). This collection is again a variety. One can go all the way up to the variety of "complete flags", (where is -dimensional), any point of which picks out a subspace of each dimension, each inside the next.
The way this relates to representations is by way of geometric representation theory. One can see those flag varieties of the form as relating the Grassmanians: there are projections and , which act by just ignoring one or the other of the two subspaces of a flag. This pair of maps, by way of pulling-back and pushing-forward functions, gives maps between the cohomology rings of these spaces. So one gets a sequence , and maps between the adjacent ones. This becomes a representation of the Lie algebra. Categorifying this, one replaces the cohomology rings with derived categories of sheaves on the flag varieties – then the same sort of "pull-push" operation through (derived categories of sheaves on) the flag varieties defines functors between those categories. So one gets a categorified representation.
Heather Russell's talk, based on this paper with Aaron Lauda, built on the idea that categorified algebras were motivated by Khovanov homology. The point is that there are really two different kinds of Khovanov homology – the usual kind, and an Odd Khovanov Homology, which is mainly different in that the role played in Khovanov homology by a symmetric algebra is instead played by an exterior (antisymmetric) algebra. The two look the same over a field of characteristic 2, but otherwise different. The idea is then that there should be "odd" versions of various structures that show up in the categorifications of (and other algebras) mentioned above.
One example is the fact that, in the "even" form of those categorifications, there is a natural action of the Nil Hecke algebra on composites of the generators. This is an algebra which can be seen to act on the space of polynomials in commuting variables, , generated by the multiplication operators , and the "divided difference operators" based on the swapping of two adjacent variables. The Hecke algebra is defined in terms of "swap" generators, which satisfy some -deformed variation of the relations that define the symmetric group (and hence its group algebra). The Nil Hecke algebra is so called since the "swap" (i.e. the divided difference) is nilpotent: the square of the swap is zero. The way this acts on the objects of the diagrammatic category is reflected by morphisms drawn as crossings of strands, which are then formally forced to satisfy the relations of the Nil Hecke algebra.
The ODD Nil Hecke algebra, on the other hand, is an analogue of this, but the are anti-commuting, and one has different relations satisfied by the generators (they differ by a sign, because of the anti-commutation). This sort of "oddification" is then supposed to happen all over. The main point of the talk was to describe the "odd" version of the categorified representation defined using flag varieties. Then the odd Nil Hecke algebra acts on that, analogously to the even case above.
Marco Mackaay gave a couple of talks about the web algebra, describing the results of this paper with Weiwei Pan and Daniel Tubbenhauer. This is the analog of the above, for , describing a diagram calculus which accounts for representations of the quantum group. The "web algebra" was introduced by Greg Kuperberg – it's an algebra built from diagrams which can now include some trivalent vertices, along with rules imposing relations on these. When categorifying, one gets a calculus of "foams" between such diagrams. Since this is obviously fairly diagram-heavy, I won't try here to reproduce what's in the paper – but an important part of it is the correspondence between webs and Young Tableaux, since these are labels in the representation theory of the quantum group – so there is some interesting combinatorics here as well.
Algebraic Structures
Some of the talks were about structures in algebra in a more conventional sense.
Jiang-Hua Lu: On a class of iterated Poisson polynomial algebras. The starting point of this talk was to look at Poisson brackets on certain spaces and see that they can be found in terms of "semiclassical limits" of some associative product. That is, the associative product of two elements gives a power series in some parameter (which one should think of as something like Planck's constant in a quantum setting). The "classical" limit is the constant term of the power series, and the "semiclassical" limit is the first-order term. This gives a Poisson bracket (or rather, the commutator of the associative product does). In the examples, the spaces where these things are defined are all spaces of polynomials (which makes a lot of explicit computer-driven calculations more convenient). The talk gives a way of constructing a big class of Poisson brackets (having some nice properties: they are "iterated Poisson brackets") coming from quantum groups as semiclassical limits. The construction uses words in the generating reflections for the Weyl group of a Lie group .
Li Guo: Successors and Duplicators of Operads – first described a whole range of different algebra-like structures which have come up in various settings, from physics and dynamical systems, through quantum field theory, to Hopf algebras, combinatorics, and so on. Each of them is some sort of set (or vector space, etc.) with some number of operations satisfying some conditions – in some cases, lots of operations, and even more conditions. In the slides you can find several examples – pre-Lie and post-Lie algebras, dendriform algebras, quadri- and octo-algebras, etc. etc. Taken as a big pile of definitions of complicated structures, this seems like a terrible mess. The point of the talk is to point out that it's less messy than it appears: first, each definition of an algebra-like structure comes from an operad, which is a formal way of summing up a collection of operations with various "arities" (number of inputs), and relations that have to hold. The second point is that there are some operations, "successor" and "duplicator", which take one operad and give another, and that many of these complicated structures can be generated from simple structures by just these two operations. The "successor" operation for an operad introduces a new product related to old ones – for example, the way one can get a Lie bracket from an associative product by taking the commutator. The "duplicator" operation takes existing products and introduces two new products, whose sum is the previous one, and which satisfy various nice relations. Combining these two operations in various ways to various starting points yields up a plethora of apparently complicated structures.
Dany Majard gave a talk about algebraic structures which are related to double groupoids, namely double categories where all the morphisms are invertible. The first part just defined double categories: graphically, one has horizontal and vertical 1-morphisms, and square 2-morphisms, which compose in both directions. Then there are several special degenerate cases, in the same way that categories have as degenerate cases (a) sets, seen as categories with only identity morphisms, and (b) monoids, seen as one-object categories. Double categories have ordinary categories (and hence monoids and sets) as degenerate cases. Other degenerate cases are 2-categories (horizontal and vertical morphisms are the same thing), and therefore their own special cases, monoidal categories and symmetric monoids. There is also the special degenerate case of a double monoid (and the extra-special case of a double group). (The slides have nice pictures showing how they're all degenerate cases). Dany then talked about some structure of double group(oids) – and gave a list of properties for double groupoids (such as being "slim" – having at most one 2-cell per boundary configuration – as well as two others) which ensure that they're equivalent to the semidirect product of an abelian group with the "bicrossed product" of two groups and (each of which has to act on the other for this to make sense). He gave the example of the Poincare double group, which breaks down as a triple bicrossed product by the Iwasawa decomposition:
( is a certain group of matrices). So there's a unique double group which corresponds to it – it has squares labelled by , and the horizontal and vertical morphisms by elements of and respectively. Dany finished by explaining that there are higher-dimensional analogs of all this – -tuple categories can be defined recursively by internalization ("internal categories in -tuple-Cat"). There are somewhat more sophisticated versions of the same kind of structure, finally leading up to a special class of -tuple groups. The analogous theorem says that a special class of them is just the same as the semidirect product of an abelian group with an -fold iterated bicrossed product of groups.
Also in this category, Alex Hoffnung talked about deformation of formal group laws (based on this paper with various collaborators). FGLs are structures with an algebraic operation which satisfies axioms similar to a group, but which can be expressed in terms of power series. (So, in particular they have an underlying ring, for this to make sense). In particular, the talk was about formal group algebras – essentially, parametrized deformations of group algebras – and in particular for Hecke Algebras. Unfortunately, my notes on this talk are mangled, so I'll just refer to the paper.
Physics
I'm using the subject-header "physics" to refer to those talks which are most directly inspired by physical ideas, though in fact the talks themselves were mathematical in nature.
Fei Han gave a series of overview talks introducing "Equivariant Cohomology via Gauged Supersymmetric Field Theory", explaining the Stolz-Teichner program. There is more, using tools from differential geometry and cohomology to dig into these theories, but for now a summary will do. Essentially, the point is that one can look at "fields" as sections of various bundles on manifolds, and these fields are related to cohomology theories. For instance, the usual cohomology of a space is a quotient of the space of closed forms (so the cohomology, , is a quotient of the space of closed -forms – the quotient being that forms differing by a coboundary are considered the same). There's a similar construction for the -theory , which can be modelled as a quotient of the space of vector bundles over . Fei Han mentioned topological modular forms, modelled by a quotient of the space of "Fredholm bundles" – bundles of Banach spaces with a Fredholm operator around.
The first two of these examples are known to be related to certain supersymmetric topological quantum field theories. Now, a TFT is a functor into some kind of vector spaces from a category of -dimensional manifolds and -dimensional cobordisms
Intuitively, it gives a vector space of possible fields on the given space and a linear map on a given spacetime. A supersymmetric field theory is likewise a functor, but one changes the category of "spacetimes" to have both bosonic and fermionic dimension. A normal smooth manifold is a ringed space , since it comes equipped with a sheaf of rings (each open set has an associated ring of smooth functions, and these glue together nicely). Supersymmetric theories work with manifolds which change this sheaf – so a -dimensional space has the sheaf of rings where one introduces some new antisymmetric coordinate functions , the "fermionic dimensions":
Then a supersymmetric TFT is a functor:
(where is the category of supersymmetric topological vector spaces – defined similarly). The connection to cohomology theories is that the classes of such field theories, up to a notion of equivalence called "concordance", are classified by various cohomology theories. Ordinary cohomology corresponds then to -dimensional extended TFT (that is, with 0 bosonic and 1 fermionic dimension), and -theory to a -dimensional extended TFT. The Stolz-Teichner Conjecture is that the third example (topological modular forms) is related in the same way to a -dimensional extended TFT – so these are the start of a series of cohomology theories related to various-dimension TFT's.
Last but not least, Chris Rogers spoke about his ideas on "Higher Geometric Quantization", on which he's written a number of papers. This is intended as a sort of categorification of the usual ways of quantizing symplectic manifolds. I am still trying to catch up on some of the geometry. This is rooted in some ideas that have been discussed by Brylinski, for example. Roughly, the message here is that "categorification" of a space can be thought of as a way of acting on the loop space of a space. The point is that, if points in a space are objects and paths are morphisms, then a loop space shifts things by one categorical level: its points are loops in , and its paths are therefore certain 2-morphisms of . In particular, there is a parallel to the fact that a bundle with connection on a loop space can be thought of as a gerbe on the base space. Intuitively, one can "parallel transport" things along a path in the loop space, which is a surface given by a path of loops in the original space. The local description of this situation says that a 1-form (which can give transport along a curve, by integration) on the loop space is associated with a 2-form (giving transport along a surface) on the original space.
Then the idea is that geometric quantization of loop spaces is a sort of higher version of quantization of the original space. This "higher" version is associated with a form of higher degree than the symplectic (2-)form used in geometric quantization of . The general notion of n-plectic geometry, where the usual symplectic geometry is the case , involves a -form analogous to the usual symplectic form. Now, there's a lot more to say here than I properly understand, much less can summarize in a couple of paragraphs. But the main theorem of the talk gives a relation between n-plectic manifolds (i.e. ones endowed with the right kind of form) and Lie n-algebras built from the complex of forms on the manifold. An important example (a theorem of Chris' and John Baez) is that one has a natural example of a 2-plectic manifold in any compact simple Lie group together with a 3-form naturally constructed from its Maurer-Cartan form.
At any rate, this workshop had a great proportion of interesting talks, and overall, including the chance to see a little more of China, was a great experience!
Paper on the Categorified Heisenberg Algebra
Posted by Jeffrey Morton under 2-Hilbert Spaces, categorification, quantization, spans
This blog has been on hiatus for a while, as I've been doing various other things, including spending some time in Hamburg getting set up for the move there. Another of these things has been working with Jamie Vicary on our project on the groupoidified Quantum Harmonic Oscillator (QHO for short). We've now put the first of two papers on the arXiv – this one is a relatively nonrigorous look at how this relates to categorification of the Heisenberg Algebra. Since John Baez is a high-speed blogging machine, he's already beaten me to an overview of what the paper says, and there's been some interesting discussion already. So I'll try to say some different things about what it means, and let you take a look over there, or read the paper, for details.
I've given some talks about this project, but as we've been writing it up, it's expanded considerably, including a lot of category-theoretic details which are going to be in the second paper in this series. But the basic point of this current paper is essentially visual and, in my opinion, fairly simple. The groupoidification of the QHO has a nice visual description, since it is all about the combinatorics of finite sets. This was described originally by Baez and Dolan, and in more detail in my very first paper. The other visual part here is the relation to Khovanov's categorification of the Heisenberg algebra using a graphical calculus. (I wrote about this back when I first became aware of it.)
As a Representation
The scenario here actually has some common features with my last post. First, we have a monoidal category with duals, let's say presented in terms of some generators and relations. Then, we find some concrete model of this abstractly-presented monoidal category with duals in a specific setting, namely .
Calling this "concrete" just refers to the fact that the objects in have some particular structure in terms of underlying sets and so on. By a "model" I just mean a functor ("model" and "representation" mean essentially the same thing in this context). In fact, for this to make sense, I think of as a 2-category with one object. Then a model is just some particular choices: a groupoid to represent the unique object, spans of groupoids to represent the generating morphisms, spans of spans to represent the generating 2-morphisms, all chosen so that the defining relations hold.
In my previous post, was a category of cobordisms, but in this case, it's essentially Khovanov's monoidal category whose objects are (oriented) dots and whose morphisms are certain classes of diagrams. The nice fact about the particular model we get is that the reasons these relations hold are easy to see in terms of the combinatorics of sets. This is why our title describes what we got as "a combinatorial representation" of Khovanov's category of diagrams, for which the ring of isomorphism classes of objects is the integral form of the algebra. This uses that is not just a monoidal category: it can be a monoidal 2-category. What's more, the monoidal category "is" also a 2-category – with one object. The objects of are really the morphisms of this 2-category.
So is in some sense a universal theory (because it's defined freely in terms of generators and relations) of what a categorification of the Heisenberg algebra must look like. Baez-Dolan groupoidification of the QHO then turns out to be a representation or model of it. In fact, the model is faithful, so that we can even say that it provides a combinatorial interpretation of that category.
The Combinatorial Model
Between the links above, you can find a good summary of the situation, so I'll be a bit cursory. The model is described in terms of structures on finite sets. This is why our title calls this a "combinatorial representation" of Khovanov's categorification.
This means that the one object of (as a 2-category) is taken to the groupoid of finite sets and bijections (which we just called in the paper for brevity). This is the "Fock space" object. For simplicity, we can take an equivalent groupoid, which has just one -element set for each .
Now, a groupoid represents a system, whose possible configurations are the objects and whose symmetries are the morphisms. In this case, the possible configurations are the different numbers of "quanta", and the symmetries (all set-bijections) show that all the quanta are interchangeable. I imagine a box containing some number of ping-pong balls.
A span of groupoids represents a process. It has a groupoid whose objects are histories (and morphisms are symmetries of histories). This groupoid has a pair of maps: to the system the process starts in, and to the system it ends in. In our model, the most important processes (which generate everything else) are the creation and annihilation operators, and – and their categorified equivalents, and . The spans that represent them are very simple: they are processes which put a new ball into the box, or take one out, respectively. (Algebraically, they're just a way to organize all the inclusions of symmetric groups .)
The "canonical commutation relation", which we write without subtraction thus:
is already understood in the Baez-Dolan story: it says that there is one more way to remove a ball from a box after putting a new one into it (one more history for the process ) than to remove a ball and then add a new one (histories for ). This is fairly obvious: in the first instance, you have one more to choose from when removing the ball.
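To make the counting explicit, here is a small Python sketch of my own (not from the paper) that enumerates the histories in the finite-set model: "add a ball, then remove one" always has exactly one more history than "remove one, then add a ball", which is the content of the relation $a a^\dagger = a^\dagger a + 1$ at the level of cardinalities.

```python
def histories_add_then_remove(n):
    # Start with balls {0, ..., n-1}; add a new ball labelled n, then choose one of the n+1 balls to remove.
    after_add = set(range(n)) | {n}
    return [("add", n, "remove", b) for b in sorted(after_add)]

def histories_remove_then_add(n):
    # Choose one of the n balls to remove, then add a new ball labelled n.
    return [("remove", b, "add", n) for b in sorted(range(n))]

for n in range(6):
    a = len(histories_add_then_remove(n))   # n + 1 histories
    b = len(histories_remove_then_add(n))   # n histories
    assert a == b + 1                       # the commutation relation, at the level of counts
    print(n, a, b)
```

Of course the actual spans remember more than these counts (the symmetries of the histories), but the cardinality statement is where the "one more history" intuition lives.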
But the original Baez-Dolan story has no interesting 2-morphisms (the actual diagrams which are the 1-morphisms in ), whereas these are absolutely the whole point of a categorification in the sense Khovanov gets one, since the 1-morphisms of determine what the isomorphism classes of objects even are.
So this means that we need to figure out what the 2-morphisms in need to be – first in general, and second in our particular representation of .
In general, a 2-morphism in is a span of span-maps. You'll find other people who take it to be a span-map. This would be a functor between the groupoids of histories: roughly, a map which assigns a history in the source span to a history in the target span (and likewise for symmetries), in a way that respects how they're histories. But we don't want just a map: we want a process which has histories of its own. We want to describe a "movie of processes" which change one process into another. These can have many histories of their own.
In fact, they're not too complicated. Here's one of Khovanov's relations in , which forms part of how the commutation relation is expressed (shuffled to get rid of negatives, which we constantly need to do in the combinatorial model since we have no negative sets):
We read an upward arrow as "add a ball to the box", and a downward arrow as "remove a ball", and read right-to-left. Both processes begin and end with "add then remove". The right-hand side just leaves this process alone: it's the identity.
The left-hand side shows a process-movie whose histories have two different cases. Suppose we begin with a history for which we add and then remove . The first case is that : we remove the same ball we put in. This amounts to doing nothing, so the first part of the movie eliminates all the adding and removing. The second part puts the add-remove pair back in.
The second case ensures that , since it takes the initial history to the history (of a different process!) in which we remove and then add (impossible if , since we can't remove this ball before adding it). This in turn is taken to the history (of the original process!) where we add and then remove ; so this relates every history to itself, except for the case that . Overall the sum of these relations give the identity on histories, which is the right hand side.
This picture includes several of the new 2-morphisms that we need to add to the Baez-Dolan picture: swapping the order of two generators, and adding or removing a pair of add/remove operations. Finding spans of spans which accomplish this (and showing they satisfy the right relations) is all that's needed to finish up the combinatorial model. So, for instance, the span of spans which adds a "remove-then-add" pair is this one:
If this isn't clear, well, it's explained in more detail in the paper. (Do notice, though, that this is a diagram in groupoids: we need to specify that there are identity 2-cells in the span, rather than some other 2-cells.)
So this is basically how the combinatorial model works.
Adjointness
But in fact this description is (as often happens) chronologically backwards: what actually happened was that we had worked out what the 2-morphisms should be for different reasons. While trying to understand what kind of structure this produced, we realized (thanks to Marco Mackaay) that the result was related to , which in turn shed more light on the 2-morphisms we'd found.
So far so good. But what makes it possible to represent the kind of monoidal category we're talking about in this setting is adjointness. This is another way of saying what I meant up at the top by saying we start with a monoidal category with duals. This means morphisms each have a partner – a dual, or adjoint – going in the opposite direction. The representations of the raising and lowering operators of the Heisenberg algebra on the Hilbert space for the QHO are linear adjoints. Their categorifications also need to be adjoints in the sense of adjoint 1-morphisms in a 2-category.
This is an abstraction of what it means for two functors and to be adjoint. In particular, it means there have to be certain 2-cells such as the unit and counit satisfying some nice relations. In fact, this only makes a left adjoint and a right adjoint – in this situation, we also have another pair which makes a right adjoint and a left one. That is, they should be "ambidextrous adjoints", or "ambiadjoints" for short. This is crucial if they're going to represent any graphical calculus of the kind that's involved here (see the first part of this paper by Aaron Lauda, for instance).
So one of the theorems in the longer paper will show concretely that any 1-morphism in has an ambiadjoint – which happens to look like the same span, but thought of as going in the reverse direction. This is somewhat like how the adjoint of a real linear map, expressed as a matrix relative to well-chosen bases, is just the transpose of the same matrix. In particular, and are adjoints in just this way. The span-of-span-maps I showed above is exactly the unit for one side of this ambi-adjunction – but it is just a special case of something that will work for any span and its adjoint.
Finally, there's something a little funny here. Since the morphisms of aren't functors or maps, this combinatorial model is not exactly what people often mean by a "categorified representation". That would be an action on a category in terms of functors and natural transformations. We do talk about how to get one of these on a 2-vector space out of our groupoidal representation toward the end.
In particular, this amounts to a functor into – the objects of being categories of a particular kind, and the morphisms being functors that preserve all the structure of those categories. As it turns out, the thing about this setting which is good for this purpose is that all those functors have ambiadjoints. The "2-linearization" that takes into is a 2-functor, and this means that all the 2-cells and equations that make two morphisms ambiadjoints carry over. In , it's very easy for this to happen, since all those ambiadjoints are already present. So getting representations of categorified algebras that are made using these monoidal categories of diagrams on 2-vector spaces is fairly natural – and it agrees with the usual intuition about what "representation" means.
Anything I start to say about this is in danger of ballooning, but since we're already some 40 pages into the second paper, I'll save the elaboration for that…
Cohomology, Groupoidification, and TQFT
Posted by Jeffrey Morton under 2-Hilbert Spaces, category theory, cohomology, groupoids, quantization, representation theory, spans
I've written here before about building topological quantum field theories using groupoidification, but I haven't yet gotten around to discussing a refinement of this idea, which is in the most recent version of my paper on the subject. I also gave a talk about this last year in Erlangen. The main point of the paper is to pull apart some constructions which are already fairly well known into two parts, as part of setting up a category which is nice for supporting models of fairly general physical systems, using an extension of the concept of groupoidification. So here's a somewhat lengthy post which tries to unpack this stuff a bit.
Factoring TQFT
The older version of this paper talked about the untwisted version of the Dijkgraaf-Witten (DW for short) model, which is a certain kind of TQFT based on a gauge theory with a finite gauge group. (Freed and Quinn put it as: "Chern-Simons theory with finite gauge group"). The new version gets the general – that is, the twisted – form in the same way: factoring the theory into two parts. So, the DW model, which was originally described by Dijkgraaf and Witten in terms of a state-sum, is a functor
The "twisting" is the point of their paper, "Topological Gauge Theories and Group Cohomology". The twisting has to do with the action for some physical theory. Now, for a gauge theory involving flat connections, the kind of gauge-theory actions which involve the curvature of a connection make no sense: the curvature is zero. So one wants an action which reflects purely global features of connections. The cohomology of the gauge group is where this comes from.
Now, the machinery I describe is based on a point of view which has been described in a famous paper by Freed, Hopkins, Lurie and Teleman (FHLT for short – see further discussion here) in terms in which the two stages are called the "classical field theory" (which has values in groupoids), and the "quantization functor", which takes one into Hilbert spaces.
Actually, we really want to have an "extended" TQFT: a TQFT gives a Hilbert space for each 2D manifold ("space"), and a linear map for a 3D cobordism ("spacetime") between them. An extended TQFT will assign (higher) algebraic data to lower-dimension boundaries still. My paper talks only about the case where we've extended down to codimension 2, whereas FHLT talk about extending "down to a point". The point of this first stopping point is to unpack explicitly and computationally what the factorization into two parts looks like at the first level beyond the usual TQFT.
In the terminology I use, the classical field theory is:
This depends on a cohomology class . The "quantization functor" (which in this case I call "2-linearization"):
The middle stage involves the monoidal 2-category I call . (In FHLT, they use different terminology, for instance "families" rather than "spans", but the principle is the same.)
Freed and Quinn looked at the quantization of the "extended" DW model, and got a nice geometric picture. In it, the action is understood as a section of some particular line-bundle over a moduli space. This geometric picture is very elegant once you see how it works, which I found was a little easier in light of a factorization through .
This factorization isolates the geometry of this particular situation in the "classical field theory" – and reveals which of the features of their setup (the line bundle over a moduli space) are really part of some more universal construction.
In particular, this means laying out an explicit definition of both and .
2-Linearization Recalled
While I've talked about it before, it's worth a brief recap of how 2-linearization works with a view to what happens when you twist it via groupoid cohomology. Here we have a 2-category , whose objects are groupoids ( , , etc.), whose morphisms are spans of groupoids:
and whose 2-morphisms are spans of span-maps (taken up to isomorphism), which look like so:
(And, by the by: how annoying that WordPress doesn't appear to support xypic figures…)
These form a (symmetric monoidal) 2-category, where composition of spans works by taking weak pullbacks. Physically, the idea is that a groupoid has objects which are configurations (in the case of gauge theory, connections on a manifold), and morphisms which are symmetries (gauge transformations, in this case). Then a span is a groupoid of histories (connections on a cobordism, thought of as spacetime), and the maps pick out its starting and ending configuration. That is, is the groupoid of flat -connections on a manifold , and is the groupoid of flat -connections on some cobordism , of which is part of the boundary. So any such connection can be restricted to the boundary, and this restriction is .
Now 2-linearization is a 2-functor:
It gives a 2-vector space (a nice kind of category) for each groupoid . Specifically, the category of its representations, . Then a span turns into a functor which comes from "pulling" back along (the restricted representation where acts by first applying then the representation), then "pushing" forward along (to the induced representation).
What happens to the 2-morphisms is conceptually more complicated, but it depends on the fact that "pulling" and "pushing" are two-sided adjoints. Concretely, it ends up being described as a kind of "sum over histories" (where "histories" are the objects of ), which turns out to be exactly the path integral that occurs in the TQFT.
Or at least, it's the path integral when the action is trivial! That is, if , so that what's integrated over paths ("histories") is just . So one question is: is there a way to factor things in this way if there's a nontrivial action?
Cohomological Twisting
The answer is by twisting via cohomology. First, let's remember what that means…
We're talking about groupoid cohomology for some groupoid (which you can take to be a group, if you like). "Cochains" will measure how much some nice algebraic fact, such as being a homomorphism, or being associative, "fails to occur". "Twisting by a cocycle" is a controlled way to force some such failure to happen.
So, an -cocycle is some function of composable morphisms of (or, if there's only one object, "group elements", which amounts to the same thing). It takes values in some group of coefficients, which for us is always .
The trivial case where is actually slightly subtle: a 0-cocycle is an invariant function on the objects of a groupoid. (That is, it takes the same value on any two objects related by an (iso)morphism.) (Think of the object as a sequence of zero composable morphisms: it tells you where to start, but nothing else.)
The case is maybe a little more obvious. A 1-cochain can measure how a function on objects might fail to be a 0-cocycle. It is a -valued function of morphisms (or, if you like, group elements). The natural condition to ask for is that it be a homomorphism:
This condition means that a cochain is a cocycle. They form an abelian group, because functions satisfying the cocycle condition are closed under pointwise multiplication in . It will automatically be satisfied for a coboundary (i.e. if comes from a function on objects as ). But not every cocycle is a coboundary: the first cohomology is the quotient of cocycles by coboundaries. This pattern repeats.
It's handy to think of this condition in terms of a triangle with edges , , and . It says that if we go from the source to the target of the sequence with or without composing, and accumulate -values, our gives the same result. Generally, a cocycle is a cochain satisfying a "coboundary" condition, which can be described in terms of an -simplex, like this triangle. What about a 2-cocycle? This describes how composition might fail to be respected.
So, for instance, a twisted representation of a group is not a representation in the strict sense. That would be a map into , such that . That is, the group composition rule gets taken directly to the corresponding rule for composition of endomorphisms of the vector space . A twisted representation only satisfies this up to a phase:
where is a function that captures the way this "representation" fails to respect composition. Still, we want some nice properties: is a "cocycle" exactly when this twisting still makes respect the associative law:
Working out what this says in terms of , the cocycle condition says that for any composable triple we have:
So – the second group-cohomology group of – consists of exactly these which satisfy this condition, which ensures we have associativity.
Given one of these maps, we get a category of all the -twisted representations of . It behaves just like an ordinary representation category… because in fact it is one! It's the category of representations of a twisted version of the group algebra of , called . The point is, we can use to twist the convolution product for functions on , and this is still an associative algebra just because satisfies the cocycle condition.
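To see concretely why the cocycle condition is exactly what the associativity of the twisted product needs, here is a small Python sketch of my own (not from the post): it builds a U(1)-valued 2-cochain on the cyclic group Z/5 as a coboundary (so the standard 2-cocycle condition θ(g,h)θ(gh,k) = θ(h,k)θ(g,hk) holds automatically), verifies that condition, and then checks that the θ-twisted convolution product on functions is associative. The choice of group and of cochain is arbitrary and purely illustrative.

```python
import itertools, cmath, math, random

n = 5
G = list(range(n))                       # the cyclic group Z/5, written additively
mul = lambda g, h: (g + h) % n

# A 2-cochain built as the coboundary of a random U(1)-valued function b on G:
#   theta(g, h) = b(g) * b(h) / b(g*h).  Coboundaries automatically satisfy the cocycle condition.
b = {g: cmath.exp(2j * math.pi * random.random()) for g in G}
theta = {(g, h): b[g] * b[h] / b[mul(g, h)] for g in G for h in G}

# 2-cocycle condition: theta(g,h) * theta(gh,k) == theta(h,k) * theta(g,hk) for all triples.
for g, h, k in itertools.product(G, repeat=3):
    lhs = theta[(g, h)] * theta[(mul(g, h), k)]
    rhs = theta[(h, k)] * theta[(g, mul(h, k))]
    assert abs(lhs - rhs) < 1e-9

# Twisted convolution on functions G -> C:  (f1 * f2)(k) = sum over gh = k of theta(g,h) f1(g) f2(h).
def conv(f1, f2):
    out = {k: 0 for k in G}
    for g, h in itertools.product(G, repeat=2):
        out[mul(g, h)] += theta[(g, h)] * f1[g] * f2[h]
    return out

# Associativity of the twisted product follows from the cocycle condition.
f1, f2, f3 = ({g: random.random() + 1j * random.random() for g in G} for _ in range(3))
left, right = conv(conv(f1, f2), f3), conv(f1, conv(f2, f3))
assert all(abs(left[k] - right[k]) < 1e-9 for k in G)
print("cocycle condition and associativity of the twisted group algebra verified")
```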
The pattern continues: a 3-cocycle captures how some function of 2 variable may fail to be associative: it specifies an associator map (a function of three variables), which has to satisfy some conditions for any four composable morphisms. A 4-cocycle captures how a map might fail to satisfy this condition, and so on. At each stage, the cocycle condition is automatically satisfied by coboundaries. Cohomology classes are elements of the quotient of cocycles by coboundaries.
So the idea of "twisted 2-linearization" is that we use this sort of data to change 2-linearization.
Twisted 2-Linearization
The idea behind the 2-category is that it contains , but that objects and morphisms also carry information about how to "twist" when applying the 2-linearization . So in particular, what we have is a (symmetric monoidal) 2-category where:
Objects consist of , where is a groupoid and $\theta \in Z^2(A,U(1))$
Morphisms from to consist of a span from to , together with
2-Morphisms from to consist of a span from , together with
The cocycles have to satisfy some compatibility conditions (essentially, pullbacks of the cocycles from the source and target of a span should land in the same cohomology class). One way to see the point of this requirement is to make twisted 2-linearization well-defined.
One can extend the monoidal structure and composition rules to objects with cocycles without too much trouble so that is a subcategory of . The 2-linearization functor extends to :
On Objects: , the category of -twisted representation of
On Morphisms: comes by pulling back a twisted representation in to one in , pulling it through the algebra map "multiplication by ", and pushing forward to
On 2-Morphisms: For a span of span maps, one uses the usual formula (see the paper for details), but a sum over the objects picks up a weight of at each object
When the cocycles are trivial (evaluate to 1 always), we get back the 2-linearization we had before. Now the main point here is that the "sum over histories" that appears in the 2-morphisms now carries a weight.
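As a schematic gloss (my own, and only a caricature of the actual 2-linearization formula), the flavor of a "sum over histories with weights" can be captured by a weighted groupoid cardinality: each history contributes its cocycle weight divided by the size of its symmetry group, and setting every weight to 1 recovers the usual groupoid cardinality used in groupoidification.

```python
from fractions import Fraction
import math

def weighted_groupoid_sum(objects, weight=lambda x: 1):
    """objects: list of (label, |Aut(x)|) pairs for a skeletal finite groupoid.
    Returns sum over x of weight(x) / |Aut(x)| -- each 'history' weighted by its
    cocycle value and divided by the size of its symmetry group."""
    return sum(Fraction(1, aut) * weight(label) for label, aut in objects)

# Example: the groupoid of finite sets of size <= 3 (with |Aut| = n!), untwisted.
finite_sets = [(n, math.factorial(n)) for n in range(4)]
print(weighted_groupoid_sum(finite_sets))   # 1 + 1 + 1/2 + 1/6 = 8/3
```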
So the twisted form of 2-linearization uses the same "pull-push" ideas as 2-linearization, but applied now to twisted representations. This twisting (at the object level) uses a 2-cocycle. At the morphism level, we have a "twist" between "pull" and "push" in constructing . What the "twist" actually means depends on which cohomology degree we're in – in other words, whether it's applied to objects, morphisms, or 2-morphisms.
The "twisting" by a 0-cocycle just means having a weight for each object – in other words, for each "history", or connection on spacetime, in a big sum over histories. Physically, the 0-cocycle is playing the role of the Lagrangian functional for the DW model. Part of the point in the FHLT program can be expressed by saying that what Freed and Quinn are doing is showing how the other cocycles are also the Lagrangian – as it's seen at higher codimension in the more "local" theory.
For a TQFT, the 1-cocycles associated to morphisms describe how to glue together values for the Lagrangian that are associated to histories that live on different parts of spacetime: the action isn't just a number. It is a number only "locally", and when we compose 2-morphisms, the 0-cocycle on the composite picks up a factor from the 1-morphism (or 0-morphism, for a horizontal composite) where they're composed.
This has to do with the fact that connections on bits of spacetime can be glued by particular gauge transformations – that is, morphisms of the groupoid of connections. Just as the gauge transformations tell how to glue connections, the cocycles associated to them tell how to glue the actions. This is how the cohomological twisting captures the geometric insight that the action is a section of a line bundle – not just a function, which is a section of a trivial bundle – over the moduli space of histories.
So this explains how these cocycles can all be seen as parts of the Lagrangian when we quantize: they explain how to glue actions together before using them in a sum-over histories. Gluing them this way is essential to make sure that is actually a functor. But if we're really going to see all the cocycles as aspects of "the action", then what is the action really? Where do they come from, that they're all slices of this bigger thing?
Twisting as Lagrangian
Now the DW model is a 3D theory, whose action is specified by a group-cohomology class . But this is the same thing as a class in the cohomology of the classifying space: . This takes a little unpacking, but certainly it's helpful to understand that what cohomology classes actually classify are… gerbes. So another way to put a key idea of the FHLT paper, as Urs Schreiber put it to me a while ago, is that "the action is a gerbe on the classifying space for fields".
This map is given as a path integral over all connections on the space(-time) , which is actually just a sum, since the gauge group is finite and so all the connections are flat. The point is that they're described by assigning group elements to loops in :
But this amounts to the same thing as a map into the classifying space of :
This is essentially the definition of , and it implies various things, such as the fact that is a space whose fundamental group is , and has all other homotopy groups trivial. That is, is the Eilenberg-MacLane space . But the point is that the groupoid of connections and gauge transformations on just corresponds to the mapping space . So the groupoid cohomology classes we get amount to the same thing as cohomology classes on this space. If we're given , then we can get at these by "transgression" – which is very nicely explained in a paper by Simon Willerton.
The essential idea is that a 3-cocycle (representing the class ) amounts to a nice 3-form on which we can integrate over a 3-dimensional submanifold to get a number. For a -dimensional , we get such a 3-manifold from a -dimensional submanifold of : each point gives a copy of in . Then we get a -cocycle on whose values come from integrating over this image. Here's a picture I used to illustrate this in my talk:
Now, it turns out that this gives 2-cocycles for 1-manifolds (the objects of ), 1-cocycles on 2D cobordisms between them, and 0-cocycles on 3D cobordisms between these cobordisms. The cocycles are for the groupoid of connections and gauge transformations in each case. In fact, because of Stokes' theorem in , these have to satisfy all the conditions that make them into objects, morphisms, and 2-morphisms of . This is the geometric content of the Lagrangian: all the cocycles are really "reflections" of as seen by transgression: pulling back along the evaluation map from the picture. Then the way you use it in the quantization is described exactly by .
What I like about this is that is a fairly universal sort of thing – so while this example gets its cocycles from the nice geometry of which Freed and Quinn talk about, the insight that an action is a section of a (twisted) line bundle, that actions can be glued together in particular ways, and so on… These presumably can be moved to other contexts. | CommonCrawl |
Journal of Analytical Science and Technology
Quantitation and speciation of inorganic arsenic in a biological sample by capillary ion chromatography combined with inductively coupled plasma mass spectrometry
Seon-Jin Yang1,
Yonghoon Lee1 &
Sang-Ho Nam ORCID: orcid.org/0000-0002-8749-87661
Journal of Analytical Science and Technology volume 13, Article number: 45 (2022)
The toxicity and biological activity of arsenic depend on its chemical form. In particular, inorganic arsenics are more toxic than organic ones. Apart from the determination of total arsenic, accurate speciation is important for toxicity assessment. To separate arsenic species using a cation or an anion separation column, at least 0.5–1.0 mL of sample is required because conventional ion chromatography columns use a sample loop of 100–200 μL. It is thus difficult to analyze samples with small volumes, such as clinical and biological samples. In this study, a method for separating arsenic species using a 5-μL sample loop combined with a capillary ion exchange column has been developed for analyzing small volumes of sample. The separated arsenics were determined by inductively coupled plasma mass spectrometry. By oxidizing As(III) to As(V) prior to analysis, the total inorganic arsenics, As(III) and As(V), could be well separated from the organic ones. Linear calibration curves (0.5–50 μg/kg) were obtained for total inorganic arsenics dissolved in water. A sub-picogram-level detection limit was obtained. The analytical capability of this method was successfully validated for certified reference materials, namely water and human urine, with total inorganic arsenic recovery efficiencies of 100% and 121%, respectively. Our method requires less than ~ 10 μL of sample and will be very useful for analyzing valuable samples available in limited amounts.
Arsenic, the twentieth most abundant element in the Earth's crust, is a non-metallic element known to be toxic to humans. It is ubiquitously distributed in the soil, ocean, and air. Its accumulation in our body through food and drinking water can cause severe diseases, such as liver, bladder, lung, and skin cancers. Arsenic was designated as one of the ten chemicals harmful to public health by the World Health Organization (WHO) (Zhao and Wang 2020; World Health Organization. Arsenic 2018; Taylor et al. 2017; Pasias et al. 2013; Srivastava 2020).
The toxicity and bioavailability of arsenic compounds depend on their chemical forms (Llorente-Mirandes et al. 2017). Investigations regarding the different chemical forms of arsenic species have been performed (Cornelis et al. 2005). Arsenic compounds are mainly categorized into two groups: (i) inorganic arsenics, such as arsenite (As(III)) and arsenate (As(V)), and (ii) organic ones, namely monomethylarsonic acid (MMA), dimethylarsinic acid (DMA), arsenocholine (AsC), and arsenobetaine (AsB). The Environmental Protection Agency (EPA) has set the toxicity index LD50 (50% lethal dosage) for the main arsenic species, and it shows that inorganic arsenics are much more toxic than organic ones (Cornelis et al. 2005; Reid et al. 2020). The amount of inorganic arsenics in food and drinking water is thus of critical concern. Regulation limits have been set for the safety of food in many countries (Fontcuberta et al. 2011; Llorente-Mirandes et al. 2012; Hamano-Nagaoka et al. 2008). Many researchers have developed analytical methodologies for the determination of total arsenic and also different arsenic species in various samples. For the separation of arsenic species, ion exchange (Lee et al. 2019; Chen et al. 2007), high-performance liquid chromatography (HPLC) (Jia et al. 2016; Carlin et al. 2016), and ion exclusion (Schriewer et al. 2017) chromatography have been used. The separated species were detected by inductively coupled plasma mass spectrometry (ICP-MS) (Fitzpatrick et al. 2002; Komorowicz et al. 2019), inductively coupled plasma-atomic emission spectroscopy (ICP-AES) (Cui et al. 2013), atomic fluorescence spectroscopy (AFS) (Guo et al. 2017), and atomic absorption spectroscopy (AAS) (Santos et al. 2017; Viitak and Volynsky 2006).
The sample loop volume for a conventional chromatography column is generally 100–200 μL or more (Son et al. 2019; Rosa et al. 2019; Nogueira et al. 2018). The amount of sample required for analysis using such a loop is typically 0.5–1.0 mL or more. However, this approach is not feasible when only a very limited amount of sample is available. For biological samples obtained from animals or humans, which are often very limited, the conventional column might not be suitable. In this study, the volume of the loop used with the capillary column was 5 μL. With this tiny-volume chromatography system, a new method for the quantitation of total inorganic arsenics has been developed. Prior to the analysis, As(III) was oxidized to As(V), and the total inorganic arsenics (As(III) + As(V)) were separated from the organic ones by a capillary column. Then, the separated species were determined by ICP-MS. The developed method was validated by the determination of arsenic species in the standard reference materials (SRMs) of water and human urine. Our method, which requires a sample volume of ~ 10 μL, will be very helpful for analyzing valuable samples available in limited amounts, such as clinical or forensic liquid specimens.
The ion chromatography (IC) device used in this work was assembled using a sample loop with a volume of 5 μL, a capillary chromatography column (0.4 × 250 mm, Dionex IonPac AS-11 HC, Thermo Fisher Scientific, Waltham, USA), a guard column (0.4 × 50 mm, Dionex IonPac AG-11, Thermo Fisher Scientific), and a sample introduction pump (S 1130, Sykam GmbH, Eresing, Germany). The operating parameters of IC are summarized in Table 1. For sample injection, a 6-port 2-position injection valve (Rheodyne Model 7125, Bellefonte, USA) was used. All samples and standards were injected with a syringe (710 RN, Hamilton Company, Reno, NV, USA) through the sample loop. An ICP-MS instrument (PerkinElmer Life Sciences, Shelton, CT, USA) was used to detect the arsenic species separated by IC. Argon chloride (ArCl) is a well-known polyatomic interferent for arsenic (Tan and Horlick 1986). The ICP-MS instrument has a reaction gas channel which supplied anhydrous ammonia in the direct reaction cell (DRC) mode to remove the interference of 40Ar35Cl+ on 75As+. The operating parameters of ICP-MS are summarized in Table 2. An experiment using a conventional column (PRP-X100, Hamilton, Reno, NV, USA) was also performed to compare the detection limits of the methods using the capillary and conventional columns. A sample loop with a volume of 100 μL was used for the analysis with the conventional column. The other experimental conditions for IC and ICP-MS were similar to those of the analysis using the capillary column.
Table 1 Operating parameters of IC using the capillary column
Table 2 Operating parameters of ICP-MS
Reagents and standards
Deionized water (DIW, 18.2 MΩ cm) from a water purification system (PURESAB CLASSIC, CLASSIC UV MK2, ELGA, USA) was used for the preparation of standard solutions, eluents, and sample solutions. Stock solutions (100 mg/kg) of arsenic species were prepared using the following reagents: sodium arsenate dibasic heptahydrate (≥ 98%, Sigma-Aldrich) for As(V), sodium metaarsenite (98%, Sigma-Aldrich, Steinheim, Germany) for As(III), cacodylic acid (≥ 98%, Sigma-Aldrich) for DMA, and disodium methyl arsonate hexahydrate (98%, Sigma-Aldrich) for MMA. The prepared stock solutions were stored in the dark at 4 ℃. Working solutions were prepared daily. Calibration standards and samples were prepared using phosphate buffer solution (Sigma-Aldrich) at pH 6.0. Ammonium phosphate dibasic (≥ 98%, Sigma-Aldrich) was used as the eluent for IC.
Sample and sample preparation
The SRM of trace elements in water (NIST SRM 1643f, Trace Elements in Water, National Institute of Standards and Technology, Gaithersburg, MD, USA) and that of human urine (NIST SRM 2669, Arsenic Species in Frozen Human Urine, National Institute of Standards and Technology, Gaithersburg, MD, USA) were employed for the validation. Prior to IC-ICP-MS analysis, all the samples were filtered through a 0.2 μm PVDF membrane filter (Whatman International Ltd., Kent, England).
Speciation by capillary ion chromatography
First, standard solutions of As(III), As(V), MMA, and DMA and their mixture were prepared. Each of the standard solutions contained 5 μg/kg of arsenic. In the mixture solution, the arsenic concentration from each species was set to 5 μg/kg. The prepared standard solutions were injected into the capillary column. The separated arsenic species were detected by ICP-MS. As shown in Fig. 1a, for the mixture solution, only two peaks were observed, at retention times of around 222 and 441 s. The two peaks could be identified by comparing the chromatogram with those of the standard solutions of MMA, DMA, As(III), and As(V), which are shown in Fig. 1b–e, respectively. In the chromatograms of the standard solutions, the peaks corresponding to MMA, DMA, As(III), and As(V) were observed at 227, 216, 215, and 440 s, respectively. This indicates that As(V) was well separated from the other species, but MMA, DMA, and As(III) were not resolved by the capillary column. Therefore, in the chromatogram of the mixture solution, the peak at 222 s was assigned to the mixture of As(III), MMA, and DMA, and that at 441 s was identified as the separated As(V). Also, it should be noted that the sensitivity was different among the four arsenic species (compare the intensities of the four peaks in Fig. 1b–e). The four chromatograms in Fig. 1b–e were added and plotted along with the chromatogram of the mixture solution in Fig. 1a. The two chromatograms agreed with each other. This indicates that the sensitivity difference was not due to experimental error but to the intrinsic properties of the different species.
Chromatograms of Arsenic species. a The chromatogram of the mixture solution (red) and the sum of the chromatograms in (b–e) (blue), and the chromatograms of the standard solutions of b MMA, c DMA, d As(III), and e As(V). In the mixture solution, the concentrations of As from each species were equally set to 5 μg/kg. Each standard solution contained 5 μg/kg of arsenic
In our previous works, the four arsenic species were completely separated by a conventional column (Lee et al. 2019; Son et al. 2019; Nam et al. 2016). The lower resolving power of the capillary column used in this work could be attributed to its smaller inner diameter (0.4 mm) and particle size (4 μm) as compared to the conventional column (inner diameter = 4.6 mm and particle size = 10 μm). However, the aim of this work is to determine the total inorganic arsenic (As(III) and As(V)) using the capillary column, since the toxicity of the organic species is negligible in comparison with that of the inorganic ones. In this regard, hydrogen peroxide was used to selectively oxidize As(III) to As(V). Then, the solution was injected into the capillary column to separate As(V). The detected intensity for As(V) resulted from the sum of the original As(V) and the As(III) oxidized to As(V), and thus represented the total inorganic arsenic (As(III) + As(V)). The selective oxidation of As(III) to As(V) could be confirmed as explained hereafter. Figure 2 compares the chromatograms of the non-oxidized (blue) and oxidized (red) mixture solutions. In the chromatogram of the oxidized mixture solution, the peak at 222 s decreased and that at 441 s increased relative to the non-oxidized solution. This can be attributed to the oxidation of As(III) to As(V). Thus, total inorganic arsenic could be separated and determined by the capillary column of ion chromatography coupled with ICP-MS.
Chromatograms of the non-oxidized (blue) and oxidized (red) mixture solutions. In both mixture solutions, the concentrations of As from each species were equally set to 5 μg/kg
Calibration and detection limit
For the calibration curve of inorganic arsenic, 1.0, 2.0, 10, 20, and 50 µg/kg inorganic arsenic standard solutions were prepared from the As(III) and As(V) stock solutions (100 mg/kg). Hydrogen peroxide was added to oxidize As(III) to As(V). A linear calibration curve was obtained using the capillary column of IC coupled with ICP-MS, as shown in Fig. 3. The correlation coefficient was 0.9999 and the linear dynamic range covered more than two orders of magnitude. The detection limit was estimated as the concentration giving a signal equivalent to three times the noise, defined as the standard deviation of three repeated measurements of the background intensity. The detection limit for inorganic arsenic was 0.13 μg/kg. In order to compare the value obtained using the capillary column with that of the conventional column, a separate experiment was performed. The volume of the sample loop used with the conventional column was 100 μL, which is 20 times larger than that (5 μL) used with the capillary column. The detection limit, 0.033 μg/kg, from the experiment using the conventional column was much lower than that obtained using the capillary column. However, it might be unfair to compare the detection limits of the methods using capillary and conventional columns when different amounts of sample are injected. Thus, instead of concentrations, masses (= amount of sample × detection limit in concentration) were considered:
$$\text{Capillary:}\quad 5.0 \times 10^{-6}\ \text{kg} \times \frac{0.13\ \upmu\text{g}}{1\ \text{kg}} = 0.65 \times 10^{-6}\ \upmu\text{g}$$
$$\text{Conventional:}\quad 100 \times 10^{-6}\ \text{kg} \times \frac{0.033\ \upmu\text{g}}{1\ \text{kg}} = 3.33 \times 10^{-6}\ \upmu\text{g}$$
Calibration curve of inorganic arsenic
As a result, in terms of the absolute mass of arsenic required for detection, the capillary-column method was found to show even better detection-limit performance than the conventional-column method.
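As a quick numerical illustration of this mass-based comparison, the short sketch below (a hypothetical script, not part of the original work) converts the concentration detection limits into absolute masses per injection using the two loop volumes; the aqueous samples are assumed to have a density close to 1 kg/L.

```python
# Sketch: convert concentration detection limits (ug/kg) into absolute masses (ug)
# per injection, using the sample-loop volumes quoted in the text.
# Assumes a sample density of ~1 kg/L, so 5 uL ~ 5.0e-6 kg and 100 uL ~ 100e-6 kg.

sample_mass_kg = {"capillary": 5.0e-6, "conventional": 100e-6}
lod_ug_per_kg = {"capillary": 0.13, "conventional": 0.033}

for column, mass_kg in sample_mass_kg.items():
    lod_mass_ug = mass_kg * lod_ug_per_kg[column]
    print(f"{column:12s}: {lod_mass_ug:.2e} ug of As per injection")

# capillary   : ~6.5e-07 ug (sub-picogram level)
# conventional: ~3.3e-06 ug
```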
Quantitation of total inorganic arsenic in water and human urine
For the validation of the developed analytical method, inorganic arsenic species were determined in the standard reference material of water. A 2.706-mL aliquot of the water SRM was mixed with 0.800 mL of hydrogen peroxide to oxidize As(III) to As(V), and the solution was diluted to 8.119 mL with the eluent, giving a dilution factor of 3. The resulting solution was analyzed by IC with the capillary column coupled with ICP-MS. As shown in Fig. 4a, comparison with Fig. 1 identified the arsenic species present in the water SRM as inorganic. The measured concentration of inorganic arsenic, 57.61 ± 0.90 μg/kg, was in agreement with the certified concentration of As in the SRM, 56.85 ± 0.37 μg/kg (Fig. 4). The recovery efficiency was 101.3%. The result was based on three replicate analyses.
a Chromatogram of inorganic arsenic in water SRM and b the measured concentration of inorganic arsenic and the certified concentration of As in the SRM
Due to the toxicity of arsenic (particularly inorganic arsenic), human exposure to arsenic needs to be investigated by an appropriate method capable of determining both the quantity and the species of arsenic. In this regard, arsenic in food and drinking water has been analyzed (Nam et al. 2016; Lai et al. 2004). Ingested arsenic undergoes various metabolic transformations in the human body and is finally excreted. Thus, analysis of arsenic in human urine is known to be the most reliable medical diagnosis to check whether a person has been exposed to arsenic (Lai et al. 2004; Scheer et al. 2012). The method developed in this work was also applied to analyze total inorganic arsenic in human urine. A 0.498-mL aliquot of the human urine SRM was mixed with 0.500 mL of hydrogen peroxide to oxidize As(III) to As(V), and the solution was diluted to 1.448 mL with the eluent. This process led to a dilution factor of 3. The resulting solution was analyzed by IC with the capillary column coupled with ICP-MS. As shown in Fig. 5a, the inorganic arsenic species were well separated from the organic ones in the human urine. The measured concentration of total inorganic arsenic was 4.69 ± 0.47 μg/kg, which agreed with the certified value, 3.88 ± 0.40 μg/kg, within the experimental uncertainties (Fig. 5b). The result was based on three replicate analyses. The recovery efficiency was 121%, poorer than that obtained in the analysis of the water SRM. This could be attributed to the matrix of human urine, which is more complex than that of water. The matrix effect on the recovery efficiency needs further detailed investigation. However, the bias (= measured concentration − certified concentration) values are very close to each other. The bias of the inorganic arsenic analysis in the human urine SRM is +0.81 (= 4.69 − 3.88) μg/kg, and that in the water SRM is +0.76 (= 57.61 − 56.85) μg/kg. Thus, the larger recovery efficiency observed in the analysis of the human urine SRM could be an accuracy issue arising from the instrument rather than from the sample matrix. The advantages of the developed method compared to other conventional methods, including HPLC-ICP-MS, are the improved detection capability in terms of absolute mass, the low sample volume, and the fast analysis time.
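As a quick check of the recovery and bias figures quoted above, a minimal sketch (values copied from the text; not part of the original work) is:

```python
# Sketch: recovery efficiency and bias for the two SRM analyses,
# using the measured and certified concentrations quoted in the text (ug/kg).

srm = {
    "water (NIST SRM 1643f)": {"measured": 57.61, "certified": 56.85},
    "urine (NIST SRM 2669)":  {"measured": 4.69,  "certified": 3.88},
}

for name, c in srm.items():
    recovery = 100.0 * c["measured"] / c["certified"]   # recovery efficiency in %
    bias = c["measured"] - c["certified"]                # bias in ug/kg
    print(f"{name}: recovery = {recovery:.1f} %, bias = {bias:+.2f} ug/kg")

# water: recovery ~ 101.3 %, bias ~ +0.76 ug/kg
# urine: recovery ~ 120.9 %, bias ~ +0.81 ug/kg
```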
a Chromatogram of arsenic species in human urine SRM and b the measured and certified concentrations of inorganic arsenic in human urine SRM
An analytical method based on IC-ICP-MS using a capillary chromatography column was developed for the determination of total inorganic arsenic in small-volume samples. This method consumed 5 μL of sample, much less than the 100 μL required for the IC-ICP-MS analysis using a conventional column. Although the chromatographic resolution was not high enough to separate the four arsenic species (DMA, MMA, As(III), and As(V)), the capillary column could be used to separately quantitate total inorganic arsenic (As(III) + As(V)) by oxidizing As(III) to As(V) with hydrogen peroxide prior to IC. The detection limit was 0.13 μg/kg for inorganic arsenic dissolved in water, which was higher than that of the conventional-column IC-ICP-MS analysis (0.033 μg/kg). However, in terms of mass, the detection limit of the capillary-column method was at the sub-picogram level, which is even better than that of the conventional-column method. A linear calibration curve could be obtained for inorganic arsenic dissolved in water. The developed method was successfully validated using the water SRM and showed excellent recovery efficiency. Finally, the developed method was applied to analyze inorganic arsenic in the human urine SRM, which has a certified value of 3.88 ± 0.40 μg/kg. In spite of the low concentration and the different matrix, the recovery efficiency was found to be 121%. Our method for analyzing total inorganic arsenic minimizes sample consumption and thus would be particularly helpful for medical diagnosis and forensic investigations with limited amounts of biological samples.
All data generated and analyzed in this study have been provided in the manuscript.
Carlin DJ, Bradham KD, Cowden J, Heacock M, Henry HF, Lee JS, Thomas DJ, Thompson C, Tokar EJ, Waalkes MP, Birnbaum LS, Suk WA. Arsenic and environmental health: state of the science and future research opportunities. Environ Health Perspect. 2016;124:890–9.
Chen Z, Mahmudur Rahman M, Naidu R. Speciation of vanadium by anion-exchange chromatography with inductively coupled plasma mass spectrometry and confirmation of vanadium complex formation using electrospray mass spectrometry. J Anal Spectrom. 2007;22(7):811–6.
Cornelis R, Caruso J, Crews H, Heumann K (eds). Handbook of elemental speciation II: species in the environment, food, medicine and occupational health. 2005.
Cui S, Na J-S, Kim N-Y, Lee Y, Nam SH. An investigation on inorganic arsenic in seaweed by ion chromatography combined with inductively coupled plasma-atomic emission spectrometry. Bull Korean Chem Soc. 2013;34(11):3206–10. https://doi.org/10.5012/bkcs.2013.34.11.3206.
Fitzpatrick S, Ebdon L, Foulkes ME. Separation and detection of arsenic and selenium species in environmental samples by HPLC-ICP-MS. Int J Environ Anal Chem. 2002;82(11–12):835–41.
Fontcuberta M, Calderon J, Villalbí JR, Centrich F, Portaña S, Espelt A, et al. Total and inorganic arsenic in marketed food and associated health risks for the Catalan (Spain) Population. J Agric Food Chem. 2011;59(18):10013–22. https://doi.org/10.1021/jf2013502.
Guo M, Wang W, Hai X, Zhou J. HPLC-HG-AFS determination of arsenic species in acute promyelocytic leukemia (APL) plasma and blood cells. J Pharm Biomed Anal. 2017;145:356–63.
Hamano-Nagaoka M, Nishimura T, Matsuda R, Maitani T. Evaluation of a nitric acid-based partial digestion method for selective determination of inorganic arsenic in rice. Food Hyg Saf Sci. 2008;49(2):95–9. https://doi.org/10.3358/shokueishi.49.95.
Jia X, Gong D, Wang J, Huang F, Duan T, Zhang X. Arsenic speciation in environmental waters by a new specific phosphine modified polymer microsphere preconcentration and HPLC–ICP-MS determination. Talanta. 2016;160:437–43.
Komorowicz I, Hanć A, Lorenc W, Barałkiewicz D, Falandysz J, Wang Y. Arsenic speciation in mushrooms using dimensional chromatography coupled to ICP-MS detector. Chemosphere. 2019;233:223–33.
Lai VWM, Sun Y, Ting E, Cullen WR, Reimer KJ. Arsenic speciation in human urine: are we all the same? Toxicol Appl Pharmacol. 2004;198(3):297–306.
Lee WB, Lee SH, Lee Y, Nam SH. Accurate measurement of total arsenic in rice and oyster by considering arsenic species. Bull Korean Chem Soc. 2019;40(12):1178–82.
Llorente-Mirandes T, Calderon J, López-Sánchez JF, Centrich F, Rubio R. A fully validated method for the determination of arsenic species in rice and infant cereal products. Pure Appl Chem. 2012;84:225–38. https://doi.org/10.1351/PAC-CON-11-09-30.
Llorente-Mirandes T, Rubio R, López-Sánchez JF. Inorganic arsenic determination in food: a review of analytical proposals and quality assessment over the last six years. Appl Spectrosc. 2017;71(1):25–69.
Nam SH, Cui S, Park MY. Total arsenic and arsenic species in seaweed and seafood samples determined by ion chromatography coupled with inductively coupled end-on-plasma atomic emission spectrometry. Bull Korean Chem Soc. 2016;37(12):1920–6.
Nogueira R, Evyson AM, Figueiredo JLC, Santos JJ, Neto APN. Arsenic speciation in fish and rice by HPLC-ICP-MS using salt gradient elution. J Braz Chem Soc. 2018;29(8):1593–600. https://doi.org/10.21577/0103-5053.20180091.
Pasias IN, Thomaidis SN, Piperaki AE. Determination of total arsenic, total inorganic arsenic and inorganic arsenic species in rice and rice flour by electrothermal atomic absorption spectrometry. Microchem J. 2013;108:1–6. https://doi.org/10.1016/j.microc.2012.11.008.
Reid MS, Hoy KS, Schofield JRM, Uppal JS, Lin Y, Lu X, et al. Arsenic speciation analysis: a review with an emphasis on chromatographic separations. TrAC Trends in Anal Chem. 2020;123:115770.
Rosa FC, Nunes MAG, Duarte FA, Flores ÉMM, Hanzel FB, Vaz AS, et al. Arsenic speciation analysis in rice milk using LC-ICP-MS. Food Chem X. 2019;2(30):10028. https://doi.org/10.1016/j.fochx.2019.100028.
Santos GM, Pozebon D, Cerveira C, Moraes DP. Inorganic arsenic speciation in rice products using selective hydride generation and atomic absorption spectrometry (AAS). Microchem J. 2017;133:265–71.
Scheer J, Findenig S, Goessler W, Francesconi KA, Howard B, Umans JG, et al. Arsenic species and selected metals in human urine: validation of HPLC/ICPMS and ICPMS procedures for a long-term population-based epidemiological study. Anal Methods. 2012;4(2):406–13. https://doi.org/10.1039/C2AY05638K.
Schriewer A, Brink M, Gianmoena K, Cadenas C, Hayen H. Oxalic acid quantification in mouse urine and primary mouse hepatocyte cell culture samples by ion exclusion chromatography–mass spectrometry. J Chromatogr B. 2017;1068–1069:239–44.
Son SH, Lee WB, Kim D, Lee Y, Nam SH. An alternative analytical method for determining arsenic species in rice by using ion chromatography and inductively coupled plasma-mass spectrometry. Food Chem. 2019;270(1):353–8. https://doi.org/10.1016/j.foodchem.2018.07.066.
Srivastava S. Arsenic in drinking water and food. 2020.
Tan SH, Horlick G. Background spectral features in inductively coupled plasma/mass spectrometry. Appl Spectrosc. 1986;40:445–60.
Taylor V, Goodale B, Raab A, Schwerdtle T, Reimer K, Conklin S, et al. Human exposure to organic arsenic species from seafood. Sci Total Environ. 2017;580:266–82. https://doi.org/10.1016/j.scitotenv.2016.12.113.
Viitak A, Volynsky AB. Simple procedure for the determination of Cd, Pb, As and Se in biological samples by electrothermal atomic absorption spectrometry using colloidal Pd modifier. Talanta. 2006;70(4):890–5.
World Health Organization. Arsenic. 2018. https://www.who.int/news-room/fact-sheets/detail/arsenic.
Zhao F-J, Wang P. Arsenic and cadmium accumulation in rice and mitigation strategies. Plant Soil. 2020;446:1–21. https://doi.org/10.1007/s11104-019-04374-6.
This work was supported by the Korea Basic Science Institute (KBSI) National Research Facilities & Equipment Center (NFEC) grant funded by the Korea government (Ministry of Education) (No. 2019R1A6C1010005) and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1A2C1003069). The authors thank Prof. Lee Wah Lim, Gifu University in Japan, for her technical support.
Department of Chemistry, Mokpo National University, 1666 Yeongsan-Ro, Cheonggye-Myeon, Jeonnam, 58554, Republic of Korea
Seon-Jin Yang, Yonghoon Lee & Sang-Ho Nam
SHN designed the study and directed all experiments. YL advised on technical support and measurement. The experiment was executed by SJY. All authors read and approved the final manuscript.
Correspondence to Yonghoon Lee or Sang-Ho Nam.
Yang, SJ., Lee, Y. & Nam, SH. Quantitation and speciation of inorganic arsenic in a biological sample by capillary ion chromatography combined with inductively coupled plasma mass spectrometry. J Anal Sci Technol 13, 45 (2022). https://doi.org/10.1186/s40543-022-00354-1
Arsenic speciation
Human urine
Capillary column
Inductively coupled plasma mass spectrometry
Combinatorics: Prove that for a sequence of $mn+1$ numbers one of these statements is correct
Assume $m$ and $n$ are two natural numbers. Prove that in a sequence of $mn+1$ numbers, there is an ascending subsequence of length $n+1$ or a descending subsequence of length $m+1$.
How can I prove this statement?
sequences-and-series combinatorics ramsey-theory
Ju Bc
See this page. It doesn't give a proof but should help. – Carl Schildkraut May 16 '17 at 21:54
Let $\{a_i\}$ be a sequence of $mn+1$ terms.
If there is an increasing sequence of length $n+1$, then the conclusion holds. Therefore, let's assume there is no increasing sequence of length $n+1$.
Let $l(i)$ denote the length of the longest increasing subsequence ending at $a_i$. Since there is no increasing sequence of length $n+1$, we see that for each term index $i$, $l(i)$ is an integer such that $1 \leq l(i) \leq n$.
Since there are $mn+1$ terms in the sequence, there are $mn+1$ values $l(i)$ to consider. Each $l(i)$ can be one of $n$ values. By the Pigeonhole Principle, there are $m+1$ indices that have the same value for $l(i)$.
We order the indices as follows: $$ i_1 < i_2 <i_3 < \cdots < i_m < i_{m+1} $$ and we have that $$ l(i_1)= l(i_2)=l(i_3)= \cdots=l(i_m) =l(i_{m+1}) = L$$ The claim is that the terms selected by these indices form a decreasing sequence, that is: $$ a_{i_1} > a_{i_2} > a_{i_3} > \cdots > a_{i_m} > a_{i_{m+1}} $$ We show this by contradiction. Assume there are two indices $i_{k_1}$ and $i_{k_2}$ such that: $$ i_{k_1} < i_{k_2} \quad \wedge \quad a_{i_{k_1}} \leq a_{i_{k_2}}$$ Then we see that, since $l(i_{k_1})=L$, we could take the term $a_{i_{k_2}}$ and create an increasing subsequence of length $L+1$ ending at this term, meaning $l(i_{k_2}) \geq L+1$. This contradicts that it must equal $L$. Therefore, the terms form a decreasing sequence: $$ a_{i_1} > a_{i_2} > a_{i_3} > \cdots > a_{i_m} > a_{i_{m+1}} $$
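A small computational illustration of this argument (an added Python sketch, not part of the original answer): compute $l(i)$ by dynamic programming, and if no value reaches $n+1$, extract $m+1$ indices sharing the same value of $l$; by the argument above they index a decreasing subsequence.

```python
def monotone_subsequence(a, m, n):
    """For a sequence of m*n+1 numbers, return an increasing subsequence of length n+1
    or a decreasing subsequence of length m+1, following the pigeonhole proof above."""
    assert len(a) == m * n + 1
    l = [1] * len(a)                       # l[i]: longest increasing subsequence ending at a[i]
    for i in range(len(a)):
        for j in range(i):
            if a[j] < a[i]:
                l[i] = max(l[i], l[j] + 1)
        if l[i] >= n + 1:                  # long increasing subsequence found: reconstruct it
            seq, need, bound = [a[i]], l[i] - 1, a[i]
            for k in range(i - 1, -1, -1):
                if l[k] == need and a[k] < bound:
                    seq.append(a[k]); need -= 1; bound = a[k]
            return "increasing", seq[::-1]
    # otherwise every l[i] lies in {1, ..., n}; some value occurs at least m+1 times
    for value in range(1, n + 1):
        idx = [i for i in range(len(a)) if l[i] == value]
        if len(idx) >= m + 1:
            return "decreasing", [a[i] for i in idx[:m + 1]]

print(monotone_subsequence([4, 5, 1, 3, 2], m=2, n=2))   # ('decreasing', [5, 3, 2])
```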
Manuel Guillen
You should point to a reference for this proof among the many known proofs of the Erdős–Szekeres theorem. – JJR May 16 '17 at 22:54
This is usually known as the Erdős–Szekeres (or Dilworth's) theorem. Let us assume that $h_1,\ldots,h_{mn+1}$ is our sequence, with $h_a$ representing the height of the $a$-th person, and that these people have to go to a post office with $m$ queues. The first person arrives and takes place in the first empty queue. The second person arrives and, if $h_2>h_1$, takes place behind the first person, otherwise takes place in the first free queue. The process continues in the same way until $h_{mn}$ has been placed. When $h_{mn+1}$ arrives, only two mutually exclusive things can happen:
$h_{mn+1}$ is able to take place in some queue. In that case there are $mn+1$ people in $m$ queues, hence some queue contains at least $n+1$ people, which gives an increasing subsequence;
$h_{mn+1}$ is not able to take place in any queue. Then the last person in every queue and $h_{mn+1}$ form a decreasing subsequence of $m+1$ terms.
This theorem can be used to give an unconventional proof of the Bolzano-Weierstrass theorem, for instance. Historically, it gave a fundamental Lemma for tackling the happy ending problem.
Jack D'Aurizio
doi: 10.3934/eect.2020082
Blow-up criteria for linearly damped nonlinear Schrödinger equations
Van Duong Dinh 1,2
Laboratoire Paul Painlevé UMR 8524, Université de Lille CNRS, 59655 Villeneuve d'Ascq Cedex, France
Department of Mathematics, HCMC University of Pedagogy, 280 An Duong Vuong, Ho Chi Minh, Vietnam
* Corresponding author: Van Duong Dinh
Received February 2020 Revised May 2020 Published July 2020
We consider the Cauchy problem for linearly damped nonlinear Schrödinger equations
$$ i\partial_t u + \Delta u + i a u = \pm |u|^\alpha u, \quad (t,x) \in [0,\infty) \times \mathbb R^N, $$
where $a>0$ and $\alpha>0$. We prove the global existence and scattering for a sufficiently large damping parameter in the energy-critical case. We also prove the existence of finite time blow-up $H^1$ solutions to the focusing problem in the mass-critical and mass-supercritical cases.
Keywords: Damped nonlinear Schrödinger equation, Scattering, blow-up, localized virial estimates, radial Sobolev embedding.
Mathematics Subject Classification: Primary: 35Q55; Secondary: 35Q44.
Citation: Van Duong Dinh. Blow-up criteria for linearly damped nonlinear Schrödinger equations. Evolution Equations & Control Theory, doi: 10.3934/eect.2020082
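For intuition on the role of the linear damping term, the following minimal 1-D split-step Fourier sketch of the focusing equation (illustrative parameters only; not part of the paper, which works in $\mathbb R^N$ and at the energy- and mass-critical exponents) shows that the linear flow multiplies each Fourier mode by $e^{(-ik^2-a)\Delta t}$, so the mass $\|u\|_{L^2}^2$ decays like $e^{-2at}$, while the power nonlinearity only rotates the phase.

```python
import numpy as np

# Minimal 1-D split-step sketch of  i u_t + u_xx + i a u = -|u|^alpha u  (focusing sign).
# Illustrative parameters only; this is not the setting analysed in the paper.
a, alpha = 0.1, 2.0
L, N, dt, steps = 40.0, 256, 1e-3, 2000

x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
u = np.exp(-x**2).astype(complex)                  # smooth initial datum
mass0 = np.sum(np.abs(u)**2) * (L / N)

half_linear = np.exp((-1j * k**2 - a) * dt / 2)    # exact solution of i u_t + u_xx + i a u = 0
for _ in range(steps):
    u = np.fft.ifft(half_linear * np.fft.fft(u))   # linear half-step (Strang splitting)
    u *= np.exp(1j * np.abs(u)**alpha * dt)        # nonlinear step: i u_t = -|u|^alpha u
    u = np.fft.ifft(half_linear * np.fft.fft(u))   # linear half-step

mass = np.sum(np.abs(u)**2) * (L / N)
print(mass, mass0 * np.exp(-2 * a * dt * steps))   # mass decays like exp(-2 a t)
```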
Boundary-Layer Meteorology
October 2013, Volume 149, Issue 1, pp 105–132
Evaluation of the Weather Research and Forecast/Urban Model Over Greater Paris
Youngseob Kim
Karine Sartelet
Jean-Christophe Raut
Patrick Chazette
First Online: 27 July 2013
Meteorological modelling in the planetary boundary layer (PBL) over Greater Paris is performed using the Weather Research and Forecast (WRF) numerical model. The simulated meteorological fields are evaluated by comparison with mean diurnal observational data or mean vertical profiles of temperature, wind speed, humidity and boundary-layer height from 6 to 27 May 2005. Different PBL schemes, which parametrize the atmospheric turbulence in the PBL using different turbulence closure schemes, may be used in the WRF model. The sensitivity of the results to four PBL schemes (two non-local closure schemes and two local closure schemes) is estimated. Uncertainties in the PBL schemes are compared to the influence of the urban canopy model (UCM) and the updated Coordination of Information on the Environment (CORINE) land-use data. Using the UCM and the CORINE land-use data produces more realistic modelled meteorological fields. The wind speed, which is overestimated in the simulations without the UCM, is improved below 1,000 m height. Furthermore, the modelled PBL heights during nighttime are strongly modified, with an increase that may be as high as 200 %. At night, the impact of changing the PBL scheme is lower than the impact of using the UCM and the CORINE land-use data.
CORINE land-use Planetary boundary layer Urban canopy model Weather Research and Forecast model
The vertical dispersion of atmospheric pollutants in the planetary boundary layer (PBL) is mostly governed by motions caused by turbulence. For the vertical dispersion, the temperature stratification plays an important role in defining the atmospheric stability, the intensity of thermal turbulence and the depth of the boundary layer. These factors regulate the upward dispersion of pollutants and the rate of replacement of cleaner air from above (Oke 1987).
Numerical experiments have been carried out to accurately depict the vertical atmospheric motion. Explicitly resolving the turbulent motions in the PBL has been limited to idealized physical conditions (Moeng et al. 2007). Therefore parametrizations are generally used for PBL modelling (e.g. Pleim and Chang 1992; Holtslag et al. 1995; Hong and Pan 1996; Hong et al. 2006; Hourdin et al. 2006; Pleim 2007; Nakanishi and Niino 2009) in numerical weather prediction systems such as the Fifth generation Penn State/NCAR Mesoscale Model (MM5) or the Weather Research and Forecast (WRF) model. These parametrizations of turbulent fluxes in the PBL using turbulence closure schemes are referred to as PBL schemes.
The PBL schemes have been evaluated and intercompared for boundary-layer modelling in the U.S.A. (Berg and Zhong 2005; Olson and Brown 2009; Hu et al. 2010; Kim et al. 2010; Shin and Hong 2011), Asia (Srinivas et al. 2007; Han et al. 2008) and Europe (Borge et al. 2008). However there has been no intercomparison study at urban scales using the WRF model and an urban canopy model (UCM).
Meteorological fields for urban areas differ from those for surrounding rural areas because of different geometry (radiation trapping and wind profiles) and materials (heat storage) of their surfaces and different energy consumption (heat release). For accurate modelling of urban meteorological fields, we need the appropriate land-use data (location of urbanized areas and their fraction in a grid cell) and an urban model that describes heat/momentum exchange between urban structures and the lower atmosphere. For example, the impact of land-use data on temperature in the lower atmosphere was studied using WRF/UCM over the Phoenix metropolitan area in the U.S.A. by Grossman-Clarke et al. (2010) and over northern Taiwan by Lin et al. (2008).
The impacts of urban models on the meteorological fields in the lower atmosphere have been studied over a range of large cities (e.g., Kusaka and Kimura 2004; Otte et al. 2004; Dandou et al. 2005; Lin et al. 2008; Miao et al. 2009; Grossman-Clarke et al. 2010; Lee et al. 2010; Flagg and Taylor 2011; Salamanca et al. 2011). In particular, the WRF/UCM have been used for different studies. For example, Miao et al. (2009) found that the diurnal cycle of urban heat intensity was well reproduced by the WRF/urban model in Beijing, China. Lee et al. (2010) suggested that proper surface representation and explicit parametrizations of urban physical processes are required for accurate urban modelling in Houston, U.S.A. Flagg and Taylor (2011) examined the sensitivity of the surface energy balance, canopy layer and boundary-layer processes on the scale of urban surface representation. They found that small changes in the scale can affect the urban fraction used in the surface representation, affecting meteorological fields (e.g., surface heat flux and skin surface temperature) in Detroit, U.S.A.–Windsor (Canada) area. Salamanca et al. (2011) conducted simulations with high resolution urban canopy parameters in Houston, and revealed that a simple bulk urban scheme is sufficient for an estimate of the 2-m temperature in an urban area. However a complex urban canopy scheme and a high resolution urban canopy parameter database (e.g., urban fraction, building height and building area) are necessary for an evaluation of the urban heat intensity or the energy consumption due to air conditioning.
The anthropogenic heat release is an important variable for accurate modelling of air temperature over urban areas. However it is difficult to estimate representative values for urban areas. The effect of the anthropogenic heat release has been studied by Dupont et al. (2004); Sailor and Lu (2004) and Fan and Sailor (2005) over the U.S.A. and by Sarrat et al. (2006); Pigeon et al. (2007) and Sarkar and De Ridder (2011) over France. Allen et al. (2011) developed a model to estimate anthropogenic heat flux from global to individual city scales. In this model, three anthropogenic heat sources (metabolic heat, traffic heat and building heat) are estimated for a global scale (\(0.50^{\circ }\) grid resolution) and individual city scales.
The PBL schemes are associated with large uncertainties, which have a large impact not only on meteorology but also on air quality modelling (Mallet and Sportisse 2006; Roustan et al. 2010). The impact of urban canopy models on air quality modelling is also widely recognized (e.g., Lemonsu and Masson 2002; Chen et al. 2011). However, UCMs have not been widely used when modelling air quality over Paris and its suburbs (hereafter Greater Paris) (Tombette and Sportisse 2007; Vautard et al. 2007; Korsakissok and Mallet 2010; Sciare et al. 2010; Roustan et al. 2011; Royer et al. 2011).
This paper aims at evaluating the relative impact of the PBL scheme used in the WRF model and the use of an UCM over Greater Paris. The WRF/urban model is evaluated over Greater Paris during May 2005. The model is compared to in-situ measurements of temperature, wind speed and humidity at a ground station (Palaiseau), a tall mast (Saclay), a radiosonde station (Trappes), and an observation deck at a height of 319 m on the Eiffel Tower (Paris). Furthermore, mobile lidar data from the LISAIR (LIdar pour la Surveillance de l'AIR, Raut and Chazette 2009) campaign are used to estimate PBL height. The paper is organized as follows: first, the settings of the WRF model used here are described. Second, meteorological measurements used for comparisons to modelled results are detailed, and then the different methods used for estimation of PBL heights are briefly presented. Fourthly, a sensitivity study of meteorological data to the PBL schemes is performed, as well as to UCMs and land-use data. Finally, the relative sensitivity of the meteorological fields to the PBL schemes and to the UCMs and land-use data is discussed.
2 The Weather Research and Forecast (WRF) Model
The WRF model version 3.3 with the Advanced Research WRF (ARW) dynamics solver is used to obtain meteorological fields over Greater Paris (Skamarock et al. 2008).
2.1 Simulation Settings
The regular latitude-longitude map projection is used for three simulation domains with two-way nesting. The horizontal grid spacing of the coarse domain is \(0.5^{\circ }\), and \(0.125^{\circ }\) and \(0.03125^{\circ }\), respectively, for the two nested domains. The largest \(0.5^{\circ }\) domain covers Europe and the smallest domain covers Greater Paris. The U.S. Geological Survey (USGS) Global Land Cover Characteristics (GLCC) database is used (10-arc minute, 2-arc minute and 30-arc second land-use data for the three domains, respectively). There are 28 vertical levels refined near the surface and the pressure at the model top is 100 hPa. The physical parametrizations used include the Kessler microphysics scheme (Kessler 1969), the RRTM longwave radiation scheme (Mlawer et al. 1997), the Goddard shortwave scheme (Chou and Suarez 1994), the Grell–Devenyi ensemble cumulus parametrization scheme (Grell and Devenyi 2002) and the Noah land-surface model (Chen and Dudhia 2001). The National Centers for Environmental Prediction (NCEP) final (FNL) operational model global tropospheric analyses are used for the initial and boundary conditions. The NCEP FNL analyses are available on a \(1.0^{\circ } \times 1.0^{\circ }\) grid every 6 h. The three-dimensional analysis nudging method of the NCEP analyses is used in the WRF model. The simulations are carried out for three weeks from 6 May to 27 May 2005, and simulation results are saved every 30 min for the finer grid simulations.
2.2 Planetary Boundary-Layer Schemes
Numerical PBL schemes have been developed to apply various parametrizations in the WRF model. We evaluate here four PBL schemes that are currently operational in the WRF model; brief descriptions are given below.
The Yonsei University (YSU) scheme (Hong et al. 2006) is a revised Medium-Range Forecast (MRF) scheme (Hong and Pan 1996); the YSU scheme is a non-local closure scheme. In the YSU and MRF schemes, a counter-gradient term is incorporated for the non-local closure. This term is a correction to the local gradient of heat and water vapour, and incorporates the contribution of the large-scale eddies to the total flux in the PBL under unstable conditions (Hong and Pan 1996).
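As a rough illustration of the counter-gradient idea (a simplified sketch after the K-profile formulation of Hong and Pan 1996; the coefficients and profiles below are illustrative assumptions, not the exact YSU implementation in the WRF model), the turbulent heat flux is modelled as \(\overline{w'\theta '} = -K_h(\partial \theta /\partial z - \gamma _\theta )\) with a prescribed eddy-diffusivity profile:

```python
import numpy as np

# Sketch of a K-profile closure with a counter-gradient term (after Hong and Pan 1996).
# Simplified, illustrative coefficients; not the exact YSU implementation in WRF.
karman, b = 0.4, 7.8
h = 1000.0                  # PBL height (m), assumed
w_s = 1.5                   # mixed-layer velocity scale (m s-1), assumed
surf_heat_flux = 0.15       # surface kinematic heat flux w'theta'_0 (K m s-1), assumed

z = np.linspace(10.0, h, 50)
dtheta_dz = np.full_like(z, 0.5e-3)            # weakly stable gradient in the upper mixed layer

K_h = karman * w_s * z * (1.0 - z / h) ** 2    # eddy diffusivity profile (m2 s-1)
gamma = b * surf_heat_flux / (w_s * h)         # counter-gradient correction (K m-1)

flux_local = -K_h * dtheta_dz                  # local closure: down-gradient (downward) flux
flux_nonlocal = -K_h * (dtheta_dz - gamma)     # non-local closure keeps the flux upward
print(flux_local[25], flux_nonlocal[25])
```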
The second scheme, ACM2 is the new version of the Asymmetric Convective Model (ACM) scheme (Pleim 2007); the ACM2 scheme is also a non-local closure scheme. In the ACM schemes, the non-local nature is represented by using a transilient term that defines the mass flux between any pair of model layers even if they are not adjacent (Pleim and Chang 1992). The ACM2 scheme adds an eddy diffusion component to the transilient term of the original ACM scheme.
The Mellor–Yamada–Janjic (MYJ) scheme (Janjić 2001) is a local closure scheme. The MYJ scheme determines eddy diffusivities from prognostically calculated turbulent kinetic energy (TKE) (Hu et al. 2010). The Mellor–Yamada–Nakanishi and Niino (MYNN) level-2.5 scheme (Nakanishi and Niino 2004) is also a TKE-based scheme. The MYNN scheme and the MYJ scheme are developed to improve performances of the original Mellor–Yamada model (Mellor and Yamada 1974). Major differences between the two schemes include formulations for mixing length and methods to determine unknown parameters. The MYJ scheme uses observations to determine unknown parameters while the MYNN scheme uses large-eddy simulation (LES) results. Olson and Brown (2009) highlighted the differences between the MYJ and the MYNN schemes. The MYNN scheme produces larger TKE and mixing length, which lead to slightly greater mixed-layer depths in the MYNN scheme than in the MYJ scheme.
Hu et al. (2010) showed that the ACM2 and the YSU schemes predicted stronger vertical mixing than the MYJ scheme in the lower atmosphere. This produced stronger entrainment at the top of the PBL and, in turn, produced a warmer and drier lower atmosphere. However, Shin and Hong (2011) compared the vertical profiles of diffusivities with the PBL schemes and showed that the vertical mixing in the MYJ scheme was stronger than the ACM2 and the YSU schemes during daytime. They also revealed that discrepancies between state-of-the-art PBL schemes are important in modelling surface variables under stable conditions.
In the framework of the Global Energy and Water Exchange Project (GEWEX) Atmospheric Boundary Layer Study (GABLS) intercomparisons, Svensson et al. (2011) presented comparisons of single-column models including the ACM2, the YSU and the MYJ schemes. The vertical profiles of the potential temperature during daytime were significantly different between the PBL schemes: neutral profiles for both the ACM2 and the YSU schemes and typical unstable profiles for the MYJ scheme.
2.3 Surface-Layer Schemes
Parametrizations of turbulence near the surface using the Monin–Obukhov similarity theory are referred to as surface-layer schemes. In the current version of the WRF model, surface-layer schemes are linked to particular PBL schemes. This can be a source of discrepancies between simulations conducted with different PBL schemes. The surface-layer schemes used with each PBL scheme are: the MM5 similarity scheme (Zhang and Anthes 1982) with the YSU scheme, the Pleim–Xiu scheme (Pleim 2006) with the ACM2 scheme, the Eta similarity scheme (Janjić 1990) with the MYJ scheme, and the MYNN surface-layer scheme with the MYNN PBL scheme.
2.4 Urban Surface Models
To consider the effects of urbanization, the WRF model includes three urban surface models: the UCM (Kusaka et al. 2001), the Building Environment Parametrization (BEP) (Martilli et al. 2002) and the Building Energy Model (BEM) (Salamanca et al. 2010). The UCM is a simple single-layer model, while the BEP and the BEM models are multi-layer models. Urban models are used to represent the influence of urbanization on the surface temperature. Kusaka et al. (2001) showed that the diurnal variations of surface temperature from the UCM are close to those from the multi-layer models. In addition, the UCM includes the anthropogenic heat release in the total sensible heat flux. It is not explicitly represented in the multi-layer models. Therefore the UCM is used for this study. If an urban surface model is not used, the WRF model uses the NOAH land-surface model, which distinguishes urban from non-urban areas by differences in vegetation parameters (surface albedo, roughness length, green vegetation fraction). However, it does not take into account the effect of geometric (building height, building width, road width) and thermal parameters (anthropogenic heat, thermal conductivities, heat capacity).
Geometric and thermal parameters for the urban canopy model
Building height
Roof width
Road width
Urban area ratio for a grid
Vegetation area ratio for a grid
Diurnal maximum of anthropogenic heat flux: \(70\,\hbox {W m}^{-2}\)
Diurnal profile of anthropogenic heat flux: see Fig. 1
Surface albedo of roof, road and wall
Surface emissivity of roof, road and wall
Volumetric heat capacity of roof, road and wall: \(2.01 \times 10^6\,\hbox {J m}^{-3}\,\hbox {K}^{-1}\)
Thermal conductivity of roof, road and wall: \(2.28\,\hbox {W m}^{-1}\,\hbox {K}^{-1}\)
Geometric and thermal parameters for the UCM have a significant effect on the transfer of energy and momentum between the urban surface and the atmosphere (Loridan et al. 2010; Wang et al. 2011; Loridan and Grimmond 2012). The parameters used in this study are summarized in Table 1; most parameters are based on Kusaka et al. (2001). Although the choice of the parameters is important, it is sometimes difficult to choose representative values for a city. Temperature and wind speed are very sensitive to the ratio of the building width to the road width, which is chosen using repeated model-to-measurement tests. The optimized ratio for this study is 0.33. Using a single set of parameters over the whole urban area is not realistic; however there is only one urban category in the land-use data used herein. In future studies, adding sub-urban categories in the land-use data would make the urban model more realistic.
The anthropogenic heat release \((Q_\mathrm{F})\) is likely to have a very strong impact on the modelled sensible heat flux, in particular, during nighttime and hence on the PBL processes due to enhanced urban turbulence (Stull 1988). The \(Q_\mathrm{F}\) value for Paris is based on the work of Allen et al. (2011) who compute \(Q_\mathrm{F}\) for different cities around the world. They presented annual mean and annual maximum \(Q_\mathrm{F}\) based on hourly values. Tokyo and New York have the highest annual mean \(Q_\mathrm{F}\) (around \(60\,\hbox {W m}^{-2}\)). Although the annual mean \(Q_\mathrm{F}\) is not specified for Paris in Allen et al. (2011), the annual maximum \(Q_\mathrm{F}\) for Paris (261 W m\(^{-2}\)) is between the values for New York \((550\,\hbox {W m}^{-2})\) and Tokyo \((180\,\hbox {W m}^{-2})\). The ratio between the annual maximum \(Q_\mathrm{F}\) and the annual mean \(Q_\mathrm{F}\) varies from one city to another; the ratio is about 3 for Tokyo, 5 for London and as high as 10 for New York. Assuming a ratio of 4 for Paris leads to an annual mean \(Q_\mathrm{F}\) of about \(65\,\hbox {W m}^{-2}\), which is similar to Tokyo and New York. The \(Q_\mathrm{F}\) in May is estimated from the annual mean \(Q_\mathrm{F}\) based on the work of Pigeon et al. (2007) and Allen et al. (2011). They estimated the ratio between the annual \(Q_\mathrm{F}\) and the \(Q_\mathrm{F}\) in May to be 1.15 for Toulouse and 1.25 for London, respectively. The ratio for Paris is assumed to be 1.2 in this study. The diurnal variation of \(Q_\mathrm{F}\) in May is computed based on the diurnal cycle in the local emission inventory for human activities obtained from Airparif (http://www.airparif.asso.fr/en/index/index). Note that the morning and evening energy consumption peaks do not appear in this profile of total anthropogenic heat, which includes metabolic heat, traffic heat and building heat. The morning and evening energy consumption peaks are mostly due to the traffic heat during working days, and the contribution of the traffic heat to the total anthropogenic heat may vary from 25 to 62 % (Allen et al. 2011). The contribution of traffic emissions may also be underestimated in the local emission inventory used in this study. Figure 1 presents the diurnal variation of \(Q_\mathrm{F}\) for Paris in May.
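The chain of ratios used above can be reproduced in a few lines (a sketch using the numbers quoted in the text; the diurnal shape itself comes from the Airparif activity profile shown in Fig. 1):

```python
# Sketch: estimation of the mean anthropogenic heat flux Q_F for Paris in May,
# reproducing the ratios quoted in the text (values in W m-2).
annual_max_qf = 261.0        # annual maximum Q_F for Paris (Allen et al. 2011)
max_to_mean_ratio = 4.0      # assumed ratio of annual maximum to annual mean
annual_to_may_ratio = 1.2    # assumed ratio of annual mean to May mean

annual_mean_qf = annual_max_qf / max_to_mean_ratio   # about 65 W m-2
may_mean_qf = annual_mean_qf / annual_to_may_ratio   # about 54 W m-2
print(annual_mean_qf, may_mean_qf)
# The diurnal cycle scaled to this May mean peaks near 70 W m-2 (Table 1, Fig. 1).
```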
Diurnal variation of the anthropogenic heat flux for Paris in May
Land-use categories using a USGS database and b CORINE database. 1 Urban and built-up land, 2 dryland/cropland and pasture, 3 irrigated cropland and pasture, 4 mixed dryland/irrigated cropland and pasture, 5 cropland/grassland mosaic, 6 cropland/woodland mosaic, 7 grassland, 8 shrubland, 9 mixed shrubland/grassland, 10 savanna, 11 deciduous broadleaf forest, 12 deciduous needleleaf forest, 13 evergreen broadleaf forest, 14 evergreen needleleaf forest, 15 mixed forest, 16 water bodies
The USGS land-use data are commonly used in the WRF model. However, this database was created in 1993 and the land-use changes between 1993 and 2005 are important over Greater Paris according to the database of the European Environment Agency (EEA). Therefore, the recent Coordination of Information on the Environment (CORINE) land-use data of EEA is used instead of the USGS data. In order to use the CORINE land-use data in the WRF simulations, the land-use categories of the CORINE land-use data were converted to the categories of the USGS data following Pineda et al. (2004). The geographical coordinate system of the CORINE land-use data, European Terrestrial Reference System 1989 (ETRS89)—Lambert Azimuthal Equal Area (LAEA) is not directly usable in the WRF model. Thus the reprojection of the coordinate system to World Geodetic System 1984 (WGS84) was carried out following Arnold et al. (2010). The CORINE land-cover 2006 raster data (version 13) with a resolution of 250 m, which are freely available at http://www.eea.europa.eu/data-and-maps/data/corine-land-cover-2006-raster, were used for this study. Figure 2 displays the changes of dominant land-use category from the USGS data to the CORINE data. The category for urban and built-up land (category 1) is dominant in Paris in both the USGS and the CORINE data but the category 1 area is extended southwards and westwards from Paris in the CORINE data.
We compared the results obtained using the four PBL schemes to meteorological measurements provided by various observatories. Figure 3 presents the locations of the measurement stations. A French national atmospheric observatory, Site Instrumental de Recherche par Télédétection Atmosphérique (SIRTA), provides measurements of wind speed and direction, temperature, pressure, relative humidity and precipitation rate at a ground station located in Palaiseau, 20 km south-west of Paris, in a semi-urban environment (Haeffelin et al. 2005).
Locations of meteorological observation stations and route taken for the measurements of the GBML. Blue and brown marks show the route for the measurements from the suburbs of Paris to Paris centre for 24 May and 25 May, respectively. Red marks represent the measurements on the beltway of Paris before rush-hour and green marks represent the measurements on the beltway during rush-hour for 25 May 2005. The black lines show the geographical border of the administrative department
This local-scale station is adequate for comparison with simulation results at the model horizontal resolution of about 4 km used in this work. The station is several hundred metres away from heat sources such as buildings and from natural fences, to avoid extraneous microclimatic influences. Except when a strong synoptic flow exists during the observations, local-scale effects dominate. Moreover, the weather conditions (clear sky and weak wind) were favourable to our comparison in the lowest layers of the atmosphere at the SIRTA station during the observation period in May 2005 (Météo-France 2005).
Lidar data are also used to estimate the PBL height. A ground-based mobile lidar (GBML) was used during the air quality observation campaign, LISAIR, over Greater Paris from 24 to 27 May 2005 (Raut and Chazette 2009). Observations of the aerosol extinction coefficients profiles by the GBML were made to retrieve the multiple boundary layers in the troposphere and in turn the vertical distribution of particulate matter (PM) with aerodynamic diameter \({<}10\,\upmu \hbox {m}\,(\hbox {PM}_{10}\)). The lidar used during the LISAIR campaign is a home-made instrument (Chazette et al. 2007); its overlap factor becomes unity at about 150 m above the ground level. It enables us to retrieve the height of the different aerosol layers, even close to the surface as detected in the evening or early morning. The accurate heights of the limits between the multiple layers are obtained from an algorithm enabling the detection of vertical heterogeneities in the aerosol extinction coefficients derived from lidar profiles.
Two kinds of observations were performed with the GBML: the \(\hbox {PM}_{10}\) gradients between the suburbs of Paris and Paris centre were observed, and observations along the main roads (from Les Halles to the Arc de Triomphe through the Avenue des Champs-Élysées) and the beltway of Paris were carried out. The routes followed by the vehicle carrying the lidar are presented in Fig. 3. The GBML measurements are detailed in Raut and Chazette (2009).
4 Estimations of PBL Heights Used in this Study
The PBL height is not obtained directly but only estimated from measured meteorological fields (e.g., temperature, wind speed and humidity) and from the vertical distribution of trace gas concentrations.
4.1 Estimations from Measurements
Radiosonde temperature and wind profiles have been used to estimate the PBL height. Following Coindreau et al. (2007), the PBL height is estimated from radiosonde profiles using a bulk Richardson number \(Ri_\mathrm{b}\) calculated as,
$$\begin{aligned} Ri_\mathrm{b}(z) = \frac{g(z-z_0)}{\theta _\mathrm{v}(z)}\left[ \frac{\theta _\mathrm{v}(z)-\theta _\mathrm{v}(z_0)}{u(z)^2 + v(z)^2}\right] , \end{aligned}$$
where \(\theta _\mathrm{v}\) is the virtual potential temperature, \(g\) is the acceleration due to gravity, \(z\) is the height, \(z_0\) is the reference height (taken here as the first vertical point available on the sounding profile), and \(u\) and \(v\) are the zonal and meridional wind components, respectively. The PBL height is taken as the lowest height at which the calculated \(Ri_\mathrm{b}\) exceeds a critical Richardson number. In this study, the critical Richardson number is set to 0.21 following Coindreau et al. (2007).
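For illustration, the diagnosis of Eq. 1 can be written as the following Python sketch. It assumes the profile has already been interpolated onto heights above ground; the numerical values in the example are synthetic and purely illustrative, not an actual Trappes sounding.

```python
import numpy as np

G = 9.81  # acceleration due to gravity (m s-2)

def bulk_richardson_pbl_height(z, theta_v, u, v, ri_crit=0.21):
    """PBL height as the lowest level where the bulk Richardson number of
    Eq. 1 exceeds ri_crit.

    z       : heights (m), increasing; z[0] is the reference level z_0
    theta_v : virtual potential temperature (K) at each level
    u, v    : zonal and meridional wind components (m s-1) at each level
    """
    z0, thv0 = z[0], theta_v[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        ri_b = G * (z - z0) * (theta_v - thv0) / (theta_v * (u**2 + v**2))
    above = np.where((z > z0) & np.isfinite(ri_b) & (ri_b > ri_crit))[0]
    return z[above[0]] if above.size else np.nan

# Synthetic profile, for illustration only
z = np.array([10.0, 100.0, 300.0, 600.0, 900.0, 1200.0])
theta_v = np.array([300.0, 300.2, 300.3, 300.35, 302.0, 304.0])
u = np.array([2.0, 4.0, 5.0, 6.0, 7.0, 8.0])
v = np.zeros_like(u)
print(bulk_richardson_pbl_height(z, theta_v, u, v))  # -> 900.0 for this profile
```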
Various detection criteria have been proposed to find the PBL height from lidar vertical profiles. The PBL height can be detected as the altitude at which the vertical gradient of the extinction coefficient is minimum (Flamant et al. 1997) or where the second derivative is zero (Menut et al. 1999). Other studies rely on mathematical fitting functions (Steyn et al. 1999) or the application of a wavelet covariance transform (Brooks 2003). Here, we analyze the lidar profiles using the curvature radius \(\rho \) defined by
$$\begin{aligned} \rho (z)=\frac{\mathrm{d}^2\alpha }{\mathrm{d}z^2} \left[ 1+\left( \frac{\mathrm{d}\alpha }{\mathrm{d}z}\right) ^2\right] ^{-\frac{3}{2}}, \end{aligned}$$
where \(\alpha \) is the aerosol extinction coefficient.
First, the vertical profile of \(\alpha \) is approximated by a second-order polynomial function using the least mean squares method. This polynomial fit is done in a vertically sliding window whose thickness may vary with the vertical layer structure (100 m on average). The first and second derivatives of \(\alpha \) are then computed analytically from the fitted polynomial. The curvature radius is obtained by inserting the first and second derivatives into Eq. 2. The curvature radius provides information on the limits of a transition zone between the PBL and the residual layer. The centre of the transition zone is defined as the minimum gradient in the vertical profile of \(\alpha \). The bottom and top of the transition zone are defined as the peaks of \(\rho \) nearest to the centre of the transition zone. The PBL height is defined as the top of this transition zone (Raut and Chazette 2009). This approach allows us to follow the temporal evolution of the discontinuity in the transition zone independently of the remaining part of the profile. The discontinuity is detected on the first profile of the temporal series, as explained above. The temporal evolution of the discontinuity is then retrieved from the treatment of each individual profile in a 300-m thick window around the altitude detected on the previous profile, ensuring temporal consistency.
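The detection just described can be sketched numerically. The following Python fragment only illustrates the idea (polynomial fit in a sliding window, analytical derivatives of the fit, curvature from Eq. 2, and the PBL top taken as the nearest curvature peak above the minimum-gradient level); the window width and the peak selection are simplified compared with the operational algorithm of Raut and Chazette (2009), and the function names are ours.

```python
import numpy as np

def curvature_profile(z, alpha, half_window=50.0):
    """Fit alpha(z) with a second-order polynomial in a sliding window (~100 m
    wide) and return the first derivative and the curvature of Eq. 2."""
    d1 = np.full(alpha.shape, np.nan)
    d2 = np.full(alpha.shape, np.nan)
    for i, zi in enumerate(z):
        sel = np.abs(z - zi) <= half_window
        if sel.sum() < 3:
            continue
        a2, a1, _ = np.polyfit(z[sel], alpha[sel], 2)   # alpha ~ a2 z^2 + a1 z + a0
        d1[i] = 2.0 * a2 * zi + a1                      # analytical derivatives of the fit
        d2[i] = 2.0 * a2
    rho = d2 * (1.0 + d1**2) ** (-1.5)                  # Eq. 2
    return d1, rho

def pbl_top_from_lidar(z, alpha):
    """Centre of the transition zone = level of minimum gradient of alpha;
    PBL top = nearest local maximum of rho above that centre (simplified)."""
    d1, rho = curvature_profile(z, alpha)
    centre = int(np.nanargmin(d1))
    for i in range(centre + 1, len(z) - 1):
        if np.isfinite(rho[i]) and rho[i] > rho[i - 1] and rho[i] > rho[i + 1]:
            return z[i]
    return np.nan
```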
4.2 Estimations from Modelling
The PBL height is estimated differently in each of the four PBL schemes. The YSU and the ACM2 schemes define the PBL height as the lowest height at which the bulk Richardson number exceeds a critical Richardson number. For unstable conditions, the critical Richardson number is zero for the YSU scheme and 0.25 for the ACM2 scheme, while it is 0.25 for both the YSU and the ACM2 schemes for stable conditions. The difference between the YSU and ACM2 schemes is that the Richardson number criterion is applied from the lowest model level for unstable conditions in the YSU scheme, while it is applied across the entrainment layer only in the ACM2 scheme.
In the YSU scheme, the bulk Richardson number is calculated as
$$\begin{aligned} Ri_\mathrm{b}(z) = \frac{gz}{u(z)^2 + v(z)^2}\left[ \frac{\theta _\mathrm{v}(z)-\theta _\mathrm{s}}{\theta _\mathrm{v}(z_0)}\right] , \end{aligned}$$
where \(\theta _\mathrm{v}(z_0)\) is the virtual potential temperature at the lowest model level \(z_0\), \(\theta _\mathrm{v}(z)\) is the virtual potential temperature at level \(z\), and \(\theta _\mathrm{s}\) is an appropriate temperature near the surface, defined as
$$\begin{aligned} \theta _\mathrm{s} = \theta _\mathrm{v}(z_0) + \theta _T, \end{aligned}$$
where \(\theta _T\) is the virtual temperature excess near the surface, which is a function of the virtual heat flux from the surface and a wind velocity scale.
In the ACM2 scheme, the top of the convectively unstable layer \((z_\mathrm{mix})\) is found as the height at which \(\theta _\mathrm{v}(z_\mathrm{mix})=\theta _\mathrm{s}\). Then the bulk Richardson number is calculated by
$$\begin{aligned} Ri_\mathrm{b}(z) = \frac{g(z-z_\mathrm{mix})}{(u(z)-u(z_\mathrm{mix}))^2 + (v(z)-v(z_\mathrm{mix}))^2}\left[ \frac{\theta _\mathrm{v}(z)-\theta _\mathrm{s}}{\overline{\theta _\mathrm{v}}}\right] , \end{aligned}$$
where \(\overline{\theta _\mathrm{v}}\) is the average virtual potential temperature between \(z_0\) and \(z\).
The PBL height for the MYJ scheme is defined as the height at which the TKE falls below a minimum value \((0.1\,\hbox {m}^2\,\hbox {s}^{-2})\), while in the MYNN scheme it is defined as the height at which the virtual potential temperature exceeds that at the surface by more than 0.5 K.
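The model-side criteria can be condensed into a short diagnostic sketch. This is not the WRF source code: it only mirrors the descriptions above, and the near-surface virtual temperature excess \(\theta _T\) of Eq. 4 is passed in as an externally computed value rather than derived from the surface heat flux and velocity scale.

```python
import numpy as np

G = 9.81  # acceleration due to gravity (m s-2)

def pbl_height_ysu(z, theta_v, u, v, theta_excess, ri_crit=0.0):
    """YSU-type diagnosis (Eqs. 3-4): lowest level where Ri_b exceeds ri_crit
    (0 for unstable, 0.25 for stable conditions).  theta_excess is the
    near-surface virtual temperature excess theta_T, supplied externally here."""
    theta_s = theta_v[0] + theta_excess                  # Eq. 4
    with np.errstate(divide="ignore", invalid="ignore"):
        ri_b = G * z * (theta_v - theta_s) / (theta_v[0] * (u**2 + v**2))
    idx = np.where(ri_b[1:] > ri_crit)[0]
    return z[idx[0] + 1] if idx.size else np.nan

def pbl_height_myj(z, tke, tke_min=0.1):
    """MYJ-type diagnosis: height at which the TKE falls below 0.1 m2 s-2."""
    idx = np.where(tke[1:] < tke_min)[0]
    return z[idx[0] + 1] if idx.size else np.nan

def pbl_height_mynn(z, theta_v, excess=0.5):
    """MYNN-type diagnosis: height at which theta_v exceeds the surface value
    (approximated here by the lowest-level value) by more than 0.5 K."""
    idx = np.where(theta_v[1:] > theta_v[0] + excess)[0]
    return z[idx[0] + 1] if idx.size else np.nan
```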
5 Comparisons to Measurements: Sensitivity to the PBL Schemes
The fine-grid simulation results for Greater Paris are used for comparisons to measurements. The statistical indicators used in this study are the root-mean-square error (RMSE), the mean bias (MB), the mean fractional bias and error (MFB and MFE), the normalized mean bias and error (NMB and NME) and the correlation coefficient. They are defined in Table 2.
Table 2 Definitions of the statistical indicators
Root-mean-square error (RMSE) and mean bias (MB): \(\sqrt{\frac{1}{n}\sum _{i=1}^{n}(c_i - o_i)^2}\) and \(\frac{1}{n}\sum _{i=1}^{n}(c_i - o_i)\)
Mean fractional bias (MFB) and mean fractional error (MFE): \(\frac{1}{n}\sum _{i=1}^{n}\frac{c_i - o_i}{(c_i + o_i)/2}\) and \(\frac{1}{n}\sum _{i=1}^{n}\frac{\mid c_i - o_i\mid }{(c_i + o_i)/2}\)
Normalized mean bias (NMB) and normalized mean error (NME): \(\frac{\sum _{i=1}^{n}(c_i - o_i)}{\sum _{i=1}^{n} o_i}\) and \(\frac{\sum _{i=1}^{n}\mid c_i - o_i\mid }{\sum _{i=1}^{n} o_i}\)
Mean normalized bias (MNB) and mean normalized gross error (MNGE): \(\frac{1}{n}\sum _{i=1}^{n}\frac{c_i - o_i}{o_i}\) and \(\frac{1}{n}\sum _{i=1}^{n}\frac{\mid c_i - o_i\mid }{o_i}\)
Correlation coefficient: \(\frac{\sum _{i=1}^{n}(c_i - \overline{c})(o_i - \overline{o})}{\sqrt{\sum _{i=1}^{n}(c_i - \overline{c})^2}\sqrt{\sum _{i=1}^{n}(o_i - \overline{o})^2}}\) with \(\overline{o}=\frac{1}{n}\sum _{i=1}^{n} o_i\) and \(\overline{c}=\frac{1}{n}\sum _{i=1}^{n} c_i\)
\(c_i\): modelled values, \(o_i\): observed values, \(n\): number of data
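The indicators of Table 2 are straightforward to evaluate. As an illustration (array and function names are ours, not from the study's processing chain), in Python:

```python
import numpy as np

def statistics(c, o):
    """Statistical indicators of Table 2 for modelled (c) and observed (o) values."""
    c, o = np.asarray(c, dtype=float), np.asarray(o, dtype=float)
    diff = c - o
    return {
        "RMSE": np.sqrt(np.mean(diff**2)),
        "MB":   np.mean(diff),
        "MFB":  np.mean(diff / ((c + o) / 2.0)),
        "MFE":  np.mean(np.abs(diff) / ((c + o) / 2.0)),
        "NMB":  diff.sum() / o.sum(),
        "NME":  np.abs(diff).sum() / o.sum(),
        "MNB":  np.mean(diff / o),
        "MNGE": np.mean(np.abs(diff) / o),
        "corr": np.corrcoef(c, o)[0, 1],
    }
```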
5.1 Impact on Temperature
Figure 4 shows the mean diurnal variations of the 2-m temperature at Palaiseau, the 100-m temperature at Saclay and the 319-m temperature at the Eiffel Tower. The transition from night to day, that is, the time at which the temperature starts to increase, is well simulated by all the PBL schemes at Palaiseau. However, at Saclay and at the Eiffel Tower, the transition time is delayed in the simulations compared to the observations by 1 h and 2–3 h, respectively. The temperature is underestimated at Palaiseau, Saclay and the Eiffel Tower whatever the PBL scheme used in the simulation. Therefore, the discrepancies may not be due to the PBL scheme but are most likely due to the radiation model, as the underestimation is stronger during the day.
Fig. 4 Mean diurnal variations of observed and modelled temperatures between 6 May and 27 May 2005: a 2-m temperature at Palaiseau, b 100-m temperature at Saclay, and c 319-m temperature at the Eiffel Tower. Black lines correspond to the observed values. The modelled values using each PBL scheme are represented by triangles (the ACM2 scheme), plus (the MYJ scheme), cross (the MYNN scheme) and dots (the YSU scheme). The modelled values using a PBL scheme, the UCM and the CORINE land-use data are represented by dashed lines (the MYNN scheme) and dotted lines (the YSU scheme)
The differences between the PBL schemes are small, although the statistics obtained with the MYNN scheme are slightly better than the others. The underestimations of the 2-m temperature are smaller than those of the temperature at higher altitudes: the MFB varies from \(-0.06\) (the YSU scheme) to \(-0.15\) (the MYJ scheme) and the NMB varies from \(-0.05\) (the YSU and the MYNN schemes) to \(-0.13\) (the MYJ scheme). The differences in the 2-m temperature are due to differences in the skin temperature, i.e., the temperature of the interface between the soil and the atmosphere (not shown). The mean skin temperatures simulated by the YSU scheme (highest) and the ACM2 scheme (lowest) differ by \(4^{\circ }\hbox {C}\) at 1400 UTC at Palaiseau. The differences in the skin temperature result from using a different surface-layer scheme with each PBL scheme (see Sect. 2.3).
The underestimations of the temperature are more significant at the Eiffel Tower than at Palaiseau and Saclay (MFB: \(-0.24\) for the MYNN scheme to \(-0.41\) for the MYJ scheme; NMB: \(-0.21\) for the MYNN scheme to \(-0.29\) for the MYJ scheme). Both the temperatures and the amplitude of the diurnal cycle are underestimated, particularly during daytime for the temperatures, and the underestimations tend to increase with altitude. The bias of the simulated temperatures consists of two components: a general cold bias and an underestimation of the amplitude of the diurnal cycle. The general cold bias increases with height, and may be due to uncertainties in the radiation model values (in particular, incoming solar radiation during daytime) that increase with height.
Figure 5 compares the observed and simulated mean vertical profiles of potential temperature at Trappes at 0000 and 1200 UTC. All the PBL schemes underestimate the potential temperature for both the daytime and the nighttime observations. The MYNN scheme performs better than the others below 1,200 m height at 1200 UTC. The differences between the PBL schemes are large near the surface and decrease with height, which may be due to differences in the surface-layer schemes. The ACM2 and the MYNN schemes perform better than the other two schemes below 1,200 m height at 0000 UTC. The YSU and the MYJ schemes show lower potential temperature values (weakly stable profiles) between 200 and 700 m. This is consistent with the results at the Eiffel Tower, where lower temperatures are simulated with the YSU and the MYJ schemes during nighttime. The weakly stable (more nearly neutral) profiles suggest a higher heat diffusivity near the surface in the YSU and MYJ schemes than in the two other schemes.
Fig. 5 Mean vertical profiles of observed and modelled potential temperatures at Trappes: a 1200 UTC and b 0000 UTC
5.2 Impact on Wind Speed
Figure 6 shows the mean diurnal variations of the 10-m wind speed at Palaiseau, the 110-m wind speed at Saclay and the 319-m wind speed at the Eiffel Tower. For the 10-m wind speed, the morning transition at which the wind speed increases is observed at about 0500 UTC, and it is well simulated by the PBL schemes, except for the ACM2 scheme, which simulates an earlier transition. The statistics obtained with the MYNN scheme are overall better than those of the other schemes. The 10-m wind speed is overestimated with all the schemes. The overestimation of the MYJ scheme (MFB: 0.73) is slightly higher than that of the others (MFB: 0.66, 0.62 and 0.61 for the YSU, the ACM2 and the MYNN schemes, respectively). This overestimation of the 10-m wind speed may be partly attributed, especially during nighttime, to an underestimation of the friction velocity, which depends on the surface-layer schemes.
Fig. 6 Mean diurnal variations of observed and modelled wind speeds: a 10-m wind speed at Palaiseau, b 110-m wind speed at Saclay, and c 319-m wind speed at the Eiffel Tower. For the detailed caption of the figure, see Fig. 4
Morning and evening transitions, where the wind speed increases and decreases respectively, are clearly defined for the 110-m and the 319-m wind speeds. However, there are time differences between the observations and the simulations, which are similar to those for the temperature diurnal cycle. Therefore, the differences could be due to the uncertainties in the radiation models. The errors and biases of the 110-m wind speed at Saclay are lower than those of the 10-m wind speed at Palaiseau, except for the RMSE, which increases with the ACM2 and the MYNN schemes. The 319-m wind speed is overestimated with all the schemes. The biases between the simulated and observed values are lower than those of the 10-m wind speed (about a third); however, the magnitudes of the errors are similar. The statistics obtained with the YSU scheme are the best among the schemes.
Figure 7 presents the observed and simulated mean vertical profiles of wind speed at Trappes at 0000 and 1200 UTC. The four schemes overestimate the wind speed from the ground to around 1,000 m in the daytime profile and underestimate it above 1,000 m. The overestimations near the surface may be partly due to an underestimation of the friction velocities in the surface-layer schemes, especially during nighttime. During daytime, except for the correlation coefficient, the YSU scheme shows the best statistics. The largest discrepancies between the different schemes are observed at night in the first 1,000 m. Close to the surface, the wind speed is underestimated by all the schemes at night. The wind speed decreases near the surface with all the schemes and a low-level jet develops at around 500 m height. The low-level jet with the ACM2 and the MYNN schemes is stronger (\(11\, \hbox {m s}^{-1}\) at the peak) than that with the YSU and the MYJ schemes (\(9\,\hbox {m s}^{-1}\) at the peak). The value of the peak of the observed low-level jet is about \(10\,\hbox {m s}^{-1}\), i.e., between the values of the simulated peaks, and the observed peak is lower (200 m) than the simulated peaks (300–500 m). During nighttime, the YSU and the MYJ schemes show the best statistics.
Fig. 7 Mean vertical profile of observed and modelled wind speeds at Trappes: a 1200 UTC and b 0000 UTC. For the detailed caption of the figure, see Fig. 4
5.3 Impact on Humidity
Figure 8 displays the mean diurnal variations of the surface relative humidity (\(r\)) and specific humidity (\(q\)) at Palaiseau; \(r\) is overestimated by all the schemes (bias: about 0.10 for the YSU scheme to 0.20 for the ACM2 scheme). The statistics obtained with the YSU scheme are slightly better than the others, except for the correlation coefficient, which is highest with the MYNN scheme (0.74). Overestimations of \(r\) are mostly due to overestimations of \(q\) during daytime, and to underestimations of the temperature during nighttime.
Fig. 8 Mean diurnal variation of observed and modelled a surface relative humidity and b specific humidity at Palaiseau. For the detailed caption of the figure, see Fig. 4
The observed and simulated mean vertical profiles of \(q\) are compared at Trappes (not shown). During daytime, the four PBL schemes overestimate \(q\) below about 1,500 m, except for the MYJ scheme, which underestimates \(q\) between 700 and 1,000 m. The \(q\) simulated with the MYJ scheme is higher between the surface and 400 m because of the weaker vertical mixing in the MYJ scheme. The MYNN scheme, which has improved vertical mixing, simulates results similar to the two non-local schemes (Nakanishi and Niino 2009). The MFE of the YSU scheme is the best among the schemes, while the RMSE and the NME are the lowest with the MYNN scheme. During nighttime, the MYNN scheme has lower errors while the YSU and the MYJ schemes have lower biases.
5.4 Impact on PBL Height
The PBL heights modelled by the PBL schemes and retrieved from the radiosonde at Trappes are compared in Table 3. During daytime, the PBL schemes mainly underestimate the PBL heights, except for the MYJ scheme. The lowest monthly mean error is obtained with the YSU scheme. The maximum difference in the modelled mean PBL heights among the PBL schemes is 20 % (135 m between the MYNN and the MYJ schemes). During nighttime, the YSU and the MYJ schemes overestimate the PBL heights while the ACM2 and the MYNN schemes underestimate them. Modelled mean PBL heights are significantly different among the schemes (from 150 m for the MYNN scheme to 652 m for the YSU scheme, 335 %). The mean of the modelled mean heights over all the schemes (405 m) is the closest estimate of the observed mean height (407 m).
Table 3 Comparison of the observed PBL heights (m) from the radiosonde at Trappes to the modelled PBL heights of the ACM2, MYJ, MYNN and YSU schemes. The modelled heights calculated using the algorithms from the parametrizations are compared to those using the common algorithm
As discussed in Sect. 4, different methods are used in the PBL schemes to determine the PBL height. Moreover, the method used to detect the PBL height from the radiosonde observations (hereafter the \(\theta \)-profile method) is different from those used with the simulated data. To remove discrepancies arising from the use of different methods in the PBL height diagnosis, the simulated PBL heights are recalculated using the \(\theta \)-profile method applied to the radiosonde data (see Sect. 4.1). The mean simulated PBL heights during both daytime and nighttime with the \(\theta \)-profile method are presented in Table 3. As expected, the discrepancies in the PBL heights from the different PBL schemes are significantly reduced using the common \(\theta \)-profile method. Furthermore, the bias between the observed height and the simulated height is reduced, except for the nighttime height obtained with the YSU scheme.
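Operationally, this re-diagnosis simply applies the routine used for the radiosondes to each model column. A schematic illustration, reusing the bulk_richardson_pbl_height sketch given after Eq. 1 above (the column dictionary is a hypothetical container, not a WRF data structure, and its values are synthetic):

```python
import numpy as np

# Hypothetical model column, already interpolated to heights above ground
column = {
    "z":       np.array([20.0, 150.0, 400.0, 800.0, 1200.0]),   # m
    "theta_v": np.array([299.5, 299.6, 299.7, 301.5, 303.0]),   # K
    "u":       np.array([3.0, 5.0, 6.0, 7.0, 8.0]),             # m s-1
    "v":       np.zeros(5),
}
# Same criterion (Ri_b > 0.21) as applied to the radiosonde profiles
h_common = bulk_richardson_pbl_height(column["z"], column["theta_v"],
                                      column["u"], column["v"], ri_crit=0.21)
print(h_common)  # -> 800.0 for this synthetic column
```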
We compare the PBL heights estimated by the GBML measurements to the modelled PBL heights; Fig. 9a, b present the PBL heights estimated by the lidar from the suburbs of Paris (Palaiseau) to Paris centre (Les Halles) on 24 May and 25 May, respectively.
Fig. 9 Boundary-layer heights estimated by the GBML and modelled heights: from Palaiseau to Paris on a 24 May and b 25 May; at the main road and the beltway of Paris on 25 May c before rush-hour and d during rush-hour. The black circles correspond to the values observed by the GBML. The modelled values using each PBL scheme are represented by a blue line (the ACM2 scheme), red line (the MYJ scheme), green line (the MYNN scheme) and magenta line (the YSU scheme). The modelled values using a PBL scheme, the UCM and the CORINE land-use data are represented by a green dashed line (the MYNN scheme) and a magenta dashed line (the YSU scheme)
In Fig. 9a, the heights do not vary significantly during the measurements on 24 May. The height at Palaiseau is 444 m while the height at Les Halles is 486 m. This weak increase of the PBL height could be explained by uncertainties in the algorithm used to calculate the PBL height from the aerosol extinction coefficients. According to the vertical distribution of \(\hbox {PM}_{10}\) on 24 May at about 0400 UTC presented in Fig. 4 of Raut and Chazette (2009), two layers are seen above the PBL, whose top is estimated to be at about 500 m. One layer extends from 600 to 700 m and the other one from 750 to 1,500 m. The highest layer should correspond to the residual layer. However, it is not clear whether the layer between 600 and 700 m should be considered as part of the residual layer or as part of the PBL. If it is considered as part of the PBL, the PBL height would then be about 700 m, which corresponds to the height modelled using the YSU scheme. All the PBL schemes underestimate the PBL heights, except for the YSU scheme at the Paris centre, where the modelled height increases to about 750 m. This increase of the modelled PBL height by the YSU scheme at the Paris centre can be explained by the vertical profiles of potential temperature. The potential temperature at the surface is similar to that at around 750 m height (difference \(<\)1.0 K), leading to a more neutral profile and resulting in an increase in the PBL height.
In Fig. 9b, the PBL height at Palaiseau at 0300 UTC on 25 May is about 320 m while the height at Les Halles at 0400 UTC is about 480 m. The PBL height does not increase significantly from 0300 UTC to 0400 UTC because sunrise in Paris at the end of May is at about 0400 UTC. Therefore, the increase of the PBL height at Les Halles compared to that at Palaiseau is explained by the stronger urban heat release at Les Halles. All the PBL schemes significantly underestimate the PBL heights. The mean modelled PBL heights are lower than 100 m, except for the YSU scheme (130 m), while the mean height observed by the lidar is about 390 m. This discrepancy is partly due to uncertainties in modelling the nighttime heat flux due to human activities in the urban region and to uncertainties in modelling the stable conditions, as shown by the vertical profiles of potential temperature. The modelled temperature at 200 m is probably overestimated, and is higher than that at the surface, resulting in very stable conditions. The increase of the PBL heights observed by the lidar when moving from rural to urban areas on 25 May (but not on 24 May) may be explained by the difference in temperature between Palaiseau and Paris City Hall. The difference measured by these two fixed stations is about 5 K on 25 May and 2 K on 24 May. Therefore, the warming of the urban surface is stronger on 25 May than on 24 May, resulting in a greater PBL height at the Paris centre on 25 May (see Fig. 9b).
Figure 9c, d present the PBL heights along the main road and the beltway of Paris before rush-hour (from 0400 to 0500 UTC) and during rush-hour (from 0530 to 0800 UTC), respectively. The mean PBL heights estimated by the lidar are 445 m before rush-hour and 378 m during rush-hour, while the mean modelled heights are \({<}80\,\hbox {m}\) before rush-hour and \({<}180\,\hbox {m}\) during rush-hour. All the PBL schemes underestimate the PBL heights; the YSU and the MYJ schemes perform slightly better than the others both before and during rush-hour.
The PBL heights for the GBML measurements vary greatly with the PBL scheme: the maximum difference between the mean PBL heights of the PBL schemes is largest for the case of Palaiseau to Paris on 24 May (78 %) compared to the others (66 % for Palaiseau to Paris on 25 May, 40 % for the beltway of Paris before rush-hour and 60 % for the beltway of Paris during rush-hour).
To summarize, for the air temperature, the MYNN scheme presents the best performance, although the diurnal cycle and the temperature are underestimated, particularly during daytime. For the wind speed, the YSU and the MYNN schemes perform better than the others. The YSU and the MYNN schemes also perform better for the relative humidity and the specific humidity. For the PBL height, the YSU scheme performs better than the others but still significantly underestimates the PBL height. As no direct measurement of the PBL height exists, the observed PBL height is diagnosed in various ways, e.g., using the aerosol lidar measurements, as explained in Sect. 4.1, and using the virtual potential temperature. Because the method used to retrieve the PBL height influences the results, the underestimation of the modelled PBL heights is partly due to the use of different methods of diagnosis.
6 Effects of the UCM and the CORINE Land-Use Data
Impacts of the UCM and the CORINE land-use data on the meteorological fields are studied by comparing the reference simulations of the previous section (hereafter Reference) to simulations that use the UCM coupled to the CORINE land-use data (hereafter UCM–CORINE). Simulations are compared for the two PBL schemes that performed best in the previous section (the YSU and the MYNN schemes).
When the UCM is used, the sensible heat flux in the urban area increases. The increase is due to the anthropogenic heat flux and to differences in the energy balance resulting from the different geometric and thermal parameters in the UCM. The increase of the sensible heat flux results in an increase in surface temperature. The surface temperatures in the UCM–CORINE simulations are higher than in the Reference simulations, especially during nighttime (0.8 K on average for both the YSU and the MYNN schemes, see Fig. 4). Influences of the UCM and the CORINE land-use data on the 100-m temperature are smaller than those on the 2-m temperature, partly because the 2-m level is within the urban canopy. Outside the canopy (100-m temperature), the UCM–CORINE performs better than the Reference, as the temperature is underestimated by the model. Although this underestimation is resolved using UCM–CORINE during nighttime at Saclay, it persists at the Eiffel Tower. The transition from night to day for the 2-m temperature is delayed by about 1 h in the UCM–CORINE simulations. This may be due to a delayed transition of the skin temperature when the UCM is used compared to simulations without the UCM.
The UCM reduces the amplitude of the diurnal cycle of the 2-m temperature at Palaiseau. This is due to the urban heating arising from the anthropogenic heat flux taken into account in the UCM, which has a strong impact on the temperature near the surface during nighttime. The impact of the anthropogenic heat flux on the upward heat flux is significant during nighttime, but not during daytime, which reduces the amplitude of the diurnal cycle. Although this reduction is significant for the 2-m temperature, it is small at higher altitudes (see Fig. 4b, c with the 100-m temperature at Saclay and the 319-m temperature at the Eiffel Tower).
The observed temperature diurnal amplitude at Palaiseau is closer to the Reference simulations than to the UCM simulations. This may be due to the use of a single urban land-use category in our UCM simulations. As explained in Sect. 3, the station at Palaiseau is located in a semi-urban environment. However, as we do not distinguish semi-urban areas from urban areas in our simulations, the urban effect may be overestimated at Palaiseau. In addition, the footprint of the station for 2-m temperature measurements may be \({<}100\,\hbox {m}\), which is much smaller than the grid size (Oke 2006). Footprints usually increase with the height of measurements. Therefore, footprints for the 100-m temperature measurements at Saclay and the 319-m temperature measurements at the Eiffel Tower may be larger than those for the 2-m temperature. The 100- and 319-m temperatures are more representative of the grid sizes used in the simulations.
Influences of the UCM and the CORINE land-use data on the mean vertical profiles of potential temperature at Trappes are low and confined to the lowest altitudes. The maximum differences between the UCM–CORINE and the Reference at 100 m altitude are only 0.3 K during daytime and 0.5 K during nighttime (not shown).
As shown in Fig. 6, the 10-m wind speed at Palaiseau is closer to the measurements during daytime when the UCM and the CORINE land-use data are used. The lower 10-m wind speed is attributed to the increased roughness length in the UCM. The roughness length for urban areas defined in the Noah land surface model is 0.5 m. When the UCM is used, the roughness length over urban areas is recalculated using the formulation of Macdonald et al. (1998). In this study, we obtained a roughness length of 2.8 m using the parameters described in Table 1 (building height, roof width and road width). However, the 10-m wind speed is still overestimated in all simulations during nighttime.
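As an illustration of that recalculation, the Macdonald et al. (1998) relations can be sketched as below. The constants (alpha = 4.43, beta = 1.0, C_D = 1.2, kappa = 0.4) are the values commonly quoted for staggered obstacle arrays, and the plan and frontal area fractions derived from building height, roof width and road width assume a simple two-dimensional canyon geometry; whether this reproduces exactly the 2.8 m quoted above depends on the Table 1 parameters, which are not repeated here.

```python
import math

def macdonald_z0(building_height, roof_width, road_width,
                 alpha=4.43, beta=1.0, cd=1.2, kappa=0.4):
    """Displacement height and roughness length (m) from the relations of
    Macdonald et al. (1998), for an assumed 2-D canyon geometry."""
    h = building_height
    lam_p = roof_width / (roof_width + road_width)      # plan area fraction (assumption)
    lam_f = h / (roof_width + road_width)               # frontal area fraction (assumption)
    d = h * (1.0 + alpha ** (-lam_p) * (lam_p - 1.0))   # displacement height
    z0 = (h - d) * math.exp(-(0.5 * beta * (cd / kappa ** 2)
                              * (1.0 - d / h) * lam_f) ** -0.5)
    return d, z0

# Purely illustrative canyon parameters (not the values of Table 1)
print(macdonald_z0(building_height=15.0, roof_width=10.0, road_width=10.0))
# -> approximately (11.4, 1.0) m for these inputs
```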
The 110-m wind speed at Saclay is much lower and closer to the measurements in the UCM–CORINE simulation than in the Reference simulation for both the YSU and the MYNN schemes. The UCM–CORINE simulation produces better results for both the 10-m and the 110-m wind speeds, because the modelled wind speed is lower and in much better agreement with the measurements. As shown in Fig. 7, for the vertical profiles at Trappes, the influences of the UCM and the CORINE land-use data on the wind speed during both daytime and nighttime are important below 1,000 m height, especially during nighttime. The maximum differences between the UCM–CORINE and the Reference are \(1\,\hbox {m s}^{-1}\) for the YSU scheme and \(2\,\hbox {m s}^{-1}\) for the MYNN scheme at 100 m during nighttime.
For the relative humidity (\(r\)) at Palaiseau, as shown in Fig. 8, the differences between the UCM–CORINE simulation and the Reference simulation are significant (about 15 % of mean \(r\)) for both the YSU and the MYNN schemes. During nighttime, lower \(r\) at the ground in the UCM–CORINE simulation is due to higher surface temperature. However during daytime, lower \(r\) is due to lower specific humidity (\(q\)), which results from stronger vertical mixing in the boundary layer influenced by the anthropogenic heat release in the UCM–CORINE simulation.
The variations of the vertical profiles of \(q\) at Trappes are influenced by the stronger vertical mixing in the UCM–CORINE simulation (not shown). Lower \(q\) is simulated by the UCM–CORINE than by the Reference near the surface while \(q\) with the UCM–CORINE at higher altitudes is slightly higher than with the Reference.
The UCM increases the PBL height over urbanized surfaces. This increase is more important during nighttime than during daytime. Accordingly, the increase of the modelled PBL heights at Trappes with the UCM–CORINE simulation is 8 % for the YSU scheme and 5 % for the MYNN scheme during daytime, while it is 15 % for the YSU scheme and 200 % for the MYNN scheme during nighttime. Figure 10a, b display the mean PBL heights from 6 May to 27 May over Greater Paris from simulations with and without the UCM and the CORINE land-use data. The PBL heights are greater with the UCM–CORINE simulation than with the Reference simulation in Paris and the near suburbs. The maximum difference of the mean PBL height is about 290 m near Orly airport, located south of Paris (see Fig. 10c). The effect of using the CORINE land-use data rather than the USGS data is shown in Fig. 10d, which shows the differences of the PBL height between the UCM–CORINE simulations and a simulation with the UCM and the standard USGS land use. The influence of the CORINE land-use data on the PBL height is not significant in Paris, while it is significant over urbanized areas mostly between 10 and 30 km from Paris.
Fig. 10 Modelled mean PBL heights (m) from 6 May to 27 May: a Reference simulation with the YSU scheme, b UCM–CORINE simulation with the YSU scheme, c differences between the UCM–CORINE and the Reference simulations, and d differences between the UCM–CORINE simulation and the simulation with the UCM and the USGS land-use data. The black lines show the geographical border of the administrative department
Compared to the GBML measurements, modelled PBL heights are also significantly influenced by the UCM and the CORINE land-use data (see Fig. 9). For the measurements from Palaiseau to Paris centre, as well as for the measurements at the main road and the beltway of Paris, the UCM–CORINE simulations produce better results than the Reference simulations, as the modelled PBL heights are higher. However, PBL heights are still underestimated on 25 May (see Fig. 9b). Although the surface atmospheric stability is reduced by higher surface temperatures using the UCM, this influence is not significant because of a strong temperature inversion (increase in temperature with altitude) for the modelled temperatures at low altitudes.
6.5 Comparison to Previous Studies
The model performance presented above is also briefly compared with that of some recent studies using the WRF model (Miao et al. 2009; Grossman-Clarke et al. 2010; Lee et al. 2010; Flagg and Taylor 2011; Salamanca et al. 2011) (see Table 4). Miao et al. (2009) used measurements at 60 surface stations and a wind profiler over Greater Beijing in August 2005. Grossman-Clarke et al. (2010) used measurements at 18 surface stations in the Phoenix metropolitan area during summer extreme heat events for the years 1973, 1985, 1998 and 2005. Lee et al. (2010) used Texas Air Quality Study 2006 field campaign data that included surface measurements, radar wind profilers, and boundary-layer height measurements from airborne and ship-based lidars; their simulations were performed over the Houston metropolitan area for the period from 12 to 17 August 2006. Flagg and Taylor (2011) used the Border Air Quality and Meteorology Study (BAQS-Met) 2007 field data, which include measurements from an aircraft conducted at various heights across south-western Ontario and adjacent areas around Detroit from 3 to 7 July 2007; they also used radiosonde measurements at White Lake, Michigan and VHF wind profiler measurements at Harrow. Salamanca et al. (2011) used surface measurements over Houston for two days in August 2000.
Table 4 Statistical comparisons (RMSE and MB) of modelled values to observed values in this study and in previous studies using urban models (Flagg and Taylor 2011; Grossman-Clarke et al. 2010; Lee et al. 2010; Salamanca et al. 2011; Miao et al. 2009), for the temperature (K), the mixing ratio \((\hbox {g kg}^{-1})\), the wind speed \((\hbox {m s}^{-1})\) and the PBL height (m). The values for this study combine the results of the simulation using the YSU PBL scheme, the UCM and the CORINE land-use data and of the simulation using the MYNN PBL scheme, the UCM and the CORINE land-use data
For the temperature, the mean RMSE obtained in this study (3.19 K) is higher than that of the previous studies (2.33 K). However, the mean MB in this study lies between the maximum and the minimum of the previous studies. For the mixing ratio, the mean RMSE in this study \((0.78\,\hbox {g kg}^{-1})\) is lower than that of the previous studies, and the lowest bias is also obtained. For the wind speed, the mean RMSE obtained in this study \((2.13\,\hbox {m s}^{-1})\) is slightly higher than that of the previous studies \((1.96\,\hbox {m s}^{-1})\), while the bias in this study is lower than that of the previous studies. For the PBL height, the mean RMSE in this study (325 m) is similar to that (304 m) in Lee et al. (2010), while the bias is higher (414 m against 249 m).
Meteorological modelling of the PBL over Greater Paris is performed using the WRF model for the period from 6 to 27 May 2005. As modelled meteorological data in the PBL have previously been shown to be very sensitive to the PBL scheme, simulations were performed with various PBL schemes. Meteorological data obtained with the ACM2, MYJ, MYNN and YSU PBL schemes were compared to observations at various meteorological stations around Paris and its suburbs.
For the air temperature, the errors of the modelled values are in the range of the errors obtained in previous studies, and the MYNN scheme performs slightly better than the others. However, the amplitudes of the diurnal cycle of the temperature are underestimated, particularly during daytime, and the underestimations tend to increase with altitude. Wind speeds are overestimated, particularly near the ground, and the overestimations decrease with altitude. The YSU and the MYNN schemes perform better than the others.
For humidity, the modelled values are in good agreement with the observed values for the four PBL schemes, although the relative humidity tends to be overestimated. The YSU and the MYNN schemes also perform better than the others for the relative humidity at the ground station and for the specific humidity of the radiosonde profiles.
Larger differences between the simulations are obtained for the PBL height. The YSU and the MYJ schemes overestimate the PBL height while the ACM2 and the MYNN schemes underestimate it during nighttime. Mean PBL heights are also significantly different among the schemes. The YSU scheme performs better than the others (maximum difference: 77 %).
Including the UCM and the CORINE land-use data produces more realistic modelled meteorological fields. Improvements in the modelling of the temperature and the specific humidity using UCM–CORINE are small, but the modelling of the wind speed, the relative humidity and the PBL height is significantly improved. In particular, the modelled PBL heights with the MYNN scheme during nighttime are strongly influenced by UCM–CORINE (200 %).
Influences of using the UCM and the CORINE land-use data on the modelled meteorological fields are greater than those of using different PBL schemes, while the latter are greater for the upper-air temperatures (above 40 m) and the PBL heights estimated using radiosonde profiles at Trappes. Compared to the PBL heights observed by lidar measurements, the influences of using different PBL schemes at Palaiseau are more important than those of using the UCM and the CORINE land-use data, while the latter are more important at the centre of Paris. Our results show that the use of an urban canopy model is crucial for meteorological and air quality modelling over the centre of Paris. Further work should be devoted to the study of uncertainties in the UCM, for example, by using a multi-layer model. Multiple urban land-use categories (e.g., high intensity, medium intensity, and low intensity) should also be used. Multiple urban land-use categories are available in the CORINE land-use data, and should be mapped to a data type that can be used in the WRF model. Further work will also focus on evaluating the impact of the meteorological modelling on pollutant concentrations \((\hbox {O}_3,\,\hbox {NO}_2,\,\hbox {PM})\) within the PBL.
Acknowledgements The authors acknowledge Yang Zhang, North Carolina State University, for helpful discussions on the WRF simulation. Thanks are also due to Philippe Beguinel, CEA Saclay DSM/Sac/UPSE/SPR mesures météorologiques, for providing measurement data; to Denis Fourgassié, Météo-France CIDM Paris-Montsouris, for providing meteorological fields at the Eiffel Tower; to Sylvain Dupont, US NCAR/MMM, for providing Fortran code for the urban model; to Delia Arnold, University of Natural Resources and Life Sciences, Vienna, for providing Fortran code for the CORINE land-use conversion; and to Song-You Hong, Yonsei University in Seoul, for providing Fortran code for the YSU PBL parametrization. We also thank SIRTA for providing the meteorological fields. Our colleagues Victor Winiarek and Jérôme Drevet helped us with the WRF configuration and geographical information system (GIS) data usage, respectively. Helpful advice about meteorological data analysis was given by Luc Musson-Genon, Bertrand Carissimo and Eric Dupont. Finally, we thank Christian Seigneur for helpful discussions and advice on the manuscript.
References
Allen L, Lindberg F, Grimmond CSB (2011) Global to city scale urban anthropogenic heat flux: model and variability. Int J Climatol 31(13):1990–2005
Arnold D, Schicker I, Seibert P (2010) High-resolution atmospheric modelling in complex terrain for future climate simulations (HiRmod). VSC report 2010. http://www.boku.ac.at/met/envmet/hirmod.html
Berg LK, Zhong S (2005) Sensitivity of MM5-simulated boundary layer characteristics to turbulence parameterizations. J Appl Meteorol 44:1467–1483
Borge R, Alexandrov V, del Vas JJ, Lumbreras J, Rodríguez E (2008) A comprehensive sensitivity analysis of the WRF model for air quality applications over the Iberian Peninsula. Atmos Environ 42(37):8560–8574
Brooks IM (2003) Finding boundary layer top: application of a wavelet covariance transform to lidar backscatter profiles. J Atmos Ocean Technol 20(8):1092–1105
Chazette P, Sanak J, Dulac F (2007) New approach for aerosol profiling with a lidar onboard an ultralight aircraft: application to the African Monsoon Multidisciplinary Analysis. Environ Sci Technol 41(24):8335–8341
Chen F, Dudhia J (2001) Coupling an advanced land surface-hydrology model with the Penn State-NCAR MM5 modeling system. Part I: Model implementation and sensitivity. Mon Weather Rev 129:569–585
Chen F, Kusaka H, Bornstein R, Ching J, Grimmond CSB, Grossman-Clarke S, Loridan T, Manning KW, Martilli A, Miao S, Sailor D, Salamanca FP, Taha H, Tewari M, Wang X, Wyszogrodzki AA, Zhang C (2011) The integrated WRF/urban modeling system: development, evaluation, and applications to urban environmental problems. Int J Climatol 31(2):479–492
Chou MD, Suarez MJ (1994) An efficient thermal infrared radiation parameterization for use in general circulation models. Technical report series on global modeling and data assimilation 3:85. http://archive.org/details/nasa_techdoc_19950009331
Coindreau O, Hourdin F, Haeffelin M, Mathieu A, Rio C (2007) Assessment of physical parameterizations using a global climate model with stretchable grid and nudging. Mon Weather Rev 135:1474–1489
Dandou A, Tombrou M, Akylas E, Soulakellis N, Bossioli E (2005) Development and evaluation of an urban parameterization scheme in the Penn State/NCAR Mesoscale Model (MM5). J Geophys Res 110:D10102
Dupont S, Otte TL, Ching JKS (2004) Simulation of meteorological fields within and above urban and rural canopies with a mesoscale model. Boundary-Layer Meteorol 113:111–158
Fan H, Sailor DJ (2005) Modeling the impacts of anthropogenic heating on the urban climate of Philadelphia: a comparison of implementations in two PBL schemes. Atmos Environ 39(1):73–84
Flagg DD, Taylor PA (2011) Sensitivity of mesoscale model urban boundary layer meteorology to the scale of urban representation. Atmos Chem Phys 11(6):2951–2972
Flamant C, Pelon J, Flamant PH, Durand P (1997) Lidar determination of the entrainment zone thickness at the top of the unstable marine atmospheric boundary layer. Boundary-Layer Meteorol 83:247–284
Grell GA, Devenyi D (2002) A generalized approach to parameterizing convection combining ensemble and data assimilation techniques. Geophys Res Lett 29(14):38.1–38.4
Grossman-Clarke S, Zehnder JA, Loridan T, Grimmond CSB (2010) Contribution of land use changes to near-surface air temperatures during recent summer extreme heat events in the Phoenix metropolitan area. J Appl Meteorol Climatol 49:1649–1664
Haeffelin M, Barthès L, Bock O, Boitel C, Bony S, Bouniol D, Chepfer H, Chiriaco M, Cuesta J, Delanoë J, Drobinski P, Dufresne JL, Flamant C, Grall M, Hodzic A, Hourdin F, Lapouge F, Lemaître Y, Mathieu A, Morille Y, Naud C, Noël V, O'Hirok W, Pelon J, Pietras C, Protat A, Romand B, Scialom G, Vautard R (2005) SIRTA, a ground-based atmospheric observatory for cloud and aerosol research. Ann Geophys 23(2):253–275
Han Z, Ueda H, An J (2008) Evaluation and intercomparison of meteorological predictions by five MM5-PBL parameterizations in combination with three land-surface models. Atmos Environ 42(2):233–249
Holtslag AAM, Meijgaard E, Rooy WC (1995) A comparison of boundary layer diffusion schemes in unstable conditions over land. Boundary-Layer Meteorol 76:69–95
Hong SY, Pan HL (1996) Nonlocal boundary layer vertical diffusion in a medium-range forecast model. Mon Weather Rev 124(10):2322–2339
Hong SY, Noh Y, Dudhia J (2006) A new vertical diffusion package with an explicit treatment of entrainment processes. Mon Weather Rev 134(9):2318–2341
Hourdin F, Musat I, Bony S, Braconnot P, Codron F, Dufresne JL, Fairhead L, Filiberti MA, Friedlingstein P, Grandpeix JY, Krinner G, Levan P, Li ZX, Lott F (2006) The LMDZ4 general circulation model: climate performance and sensitivity to parametrized physics with emphasis on tropical convection. Clim Dyn 27:787–813
Hu XM, Nielsen-Gammon JW, Zhang F (2010) Evaluation of three planetary boundary layer schemes in the WRF model. J Appl Meteorol Climatol 49(9):1831–1844
Janjić ZI (1990) The step-mountain coordinate: physical package. Mon Weather Rev 118(7):1429
Janjić ZI (2001) Nonsingular implementation of the Mellor–Yamada level 2.5 scheme in the NCEP meso model. National Centers for Environmental Prediction, Office Note 437, 61 pp. http://www.emc.ncep.noaa.gov/officenotes/newernotes/on437.pdf
Kessler E (1969) On the distribution and continuity of water substance in atmospheric circulation. Meteorol Monogr 10:1–84
Kim Y, Fu JS, Miller TL (2010) Improving ozone modeling in complex terrain at a fine grid resolution: Part I. Examination of analysis nudging and all PBL schemes associated with LSMs in meteorological model. Atmos Environ 44(4):523–532
Korsakissok I, Mallet V (2010) Development and application of a reactive plume-in-grid model: evaluation over Greater Paris. Atmos Chem Phys 10(18):8917–8931
Kusaka H, Kimura F (2004) Thermal effects of urban canyon structure on the nocturnal heat island: numerical experiment using a mesoscale model coupled with an urban canopy model. J Appl Meteorol 43:1899–1910
Kusaka H, Kondo H, Kikegawa Y, Kimura F (2001) A simple single-layer urban canopy model for atmospheric models: comparison with multi-layer and slab models. Boundary-Layer Meteorol 101:329–358
Lee SH, Kim SW, Angevine W, Bianco L, McKeen S, Senff C, Trainer M, Tucker S, Zamora R (2010) Evaluation of urban surface parameterizations in the WRF model using measurements during the Texas Air Quality Study 2006 field campaign. Atmos Chem Phys 11:2127–2143
Lemonsu A, Masson V (2002) Simulation of a summer urban breeze over Paris. Boundary-Layer Meteorol 104:463–490
Lin CY, Chen F, Huang J, Chen WC, Liou YA, Chen WN, Liu SC (2008) Urban heat island effect and its impact on boundary layer development and land–sea circulation over northern Taiwan. Atmos Environ 42(22):5635–5649
Loridan T, Grimmond C (2012) Multi-site evaluation of an urban land-surface model: intra-urban heterogeneity, seasonality and parameter complexity requirements. Q J R Meteorol Soc 138(665):1094–1113
Loridan T, Grimmond CSB, Grossman-Clarke S, Chen F, Tewari M, Manning K, Martilli A, Kusaka H, Best M (2010) Trade-offs and responsiveness of the single-layer urban canopy parametrization in WRF: an offline evaluation using the MOSCEM optimization algorithm and field observations. Q J R Meteorol Soc 136(649):997–1019
Macdonald R, Griffiths R, Hall D (1998) An improved method for the estimation of surface roughness of obstacle arrays. Atmos Environ 32(11):1857–1864
Mallet V, Sportisse B (2006) Uncertainty in a chemistry-transport model due to physical parameterizations and numerical approximations: an ensemble approach applied to ozone modeling. J Geophys Res 111:D01302
Martilli A, Clappier A, Rotach MW (2002) An urban surface exchange parameterisation for mesoscale models. Boundary-Layer Meteorol 104:261–304
Mellor GL, Yamada T (1974) A hierarchy of turbulence closure models for planetary boundary layers. J Atmos Sci 31:1791–1806
Menut L, Flamant C, Pelon J, Flamant PH (1999) Urban boundary-layer height determination from lidar measurements over the Paris area. Appl Opt 38(6):945–954
Météo-France (2005) Bulletin climatologique mensuel, 75 Paris et petite couronne, Mai 2005. https://public.meteofrance.com/
Miao S, Chen F, LeMone MA, Tewari M, Li Q, Wang Y (2009) An observational and modeling study of characteristics of urban heat island and boundary layer structures in Beijing. J Appl Meteorol Climatol 48:484–501
Mlawer EJ, Taubman SJ, Brown PD, Iacono MJ, Clough SA (1997) Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the longwave. J Geophys Res 102(D14):16663–16682
Moeng CH, Dudhia J, Klemp J, Sullivan P (2007) Examining two-way grid nesting for large eddy simulation of the PBL using the WRF model. Mon Weather Rev 135(6):2295–2311
Nakanishi M, Niino H (2004) An improved Mellor–Yamada level-3 model with condensation physics: its design and verification. Boundary-Layer Meteorol 112:1–31
Nakanishi M, Niino H (2009) Development of an improved turbulence closure model for the atmospheric boundary layer. J Meteorol Soc Jpn 87(5):895–912
Oke TR (1987) Boundary layer climates, 2nd edn. Routledge, London, 435 pp
Oke TR (2006) Initial guidance to obtain representative meteorological observations at urban sites. WMO/TD-No. 1250. http://www.wmo.int/pages/prog/www/IMOP/publications/IOM-81/IOM-81-UrbanMetObs.pdf
Olson JB, Brown JM (2009) A comparison of two Mellor–Yamada-based PBL schemes in simulating a hybrid barrier jet. In: 23rd conference on weather analysis and forecasting/19th conference on numerical weather prediction, Omaha. http://ams.confex.com/ams/pdfpapers/154321.pdf
Otte TL, Lacser A, Dupont S, Ching JKS (2004) Implementation of an urban canopy parameterization in a mesoscale meteorological model. J Appl Meteorol 43:1648–1665
Pigeon G, Legain D, Durand P, Masson V (2007) Anthropogenic heat release in an old European agglomeration (Toulouse, France). Int J Climatol 27:1969–1981
Pineda N, Jorba O, Jorge J, Baldasano JM (2004) Using NOAA AVHRR and SPOT VGT data to estimate surface parameters: application to a mesoscale meteorological model. Int J Remote Sens 25(1):129–143
Pleim JE (2006) A simple, efficient solution of flux profile relationships in the atmospheric surface layer. J Appl Meteorol Climatol 45:341–347
Pleim JE (2007) A combined local and nonlocal closure model for the atmospheric boundary layer. Part I: Model description and testing. J Appl Meteorol Climatol 46(9):1383–1395
Pleim JE, Chang JS (1992) A non-local closure model for vertical mixing in the convective boundary layer. Atmos Environ 26A:965–981
Raut JC, Chazette P (2009) Assessment of vertically-resolved \(\hbox {PM}_{10}\) from mobile lidar observations. Atmos Chem Phys 9(21):8617–8638
Roustan Y, Sartelet K, Tombette M, Debry É, Sportisse B (2010) Simulation of aerosols and gas-phase species over Europe with the Polyphemus system. Part II: Model sensitivity analysis for 2001. Atmos Environ 44(34):4219–4229
Roustan Y, Pausader M, Seigneur C (2011) Estimating the effect of on-road vehicle emission controls on future air quality in Paris, France. Atmos Environ 45(37):6828–6836
Royer P, Chazette P, Sartelet K, Zhang QJ, Beekmann M, Raut JC (2011) Comparison of lidar-derived \(\hbox {PM}_{10}\) with regional modeling and ground-based observations in the frame of MEGAPOLI experiment. Atmos Chem Phys 11(20):10705–10726
Sailor D, Lu L (2004) A top-down methodology for developing diurnal and seasonal anthropogenic heating profiles for urban areas. Atmos Environ 38(17):2737–2748
Salamanca F, Krpo A, Martilli A, Clappier A (2010) A new building energy model coupled with an urban canopy parameterization for urban climate simulations. Part I: Formulation, verification, and sensitivity analysis of the model. Theor Appl Climatol 99:331–344
Salamanca F, Martilli A, Tewari M, Chen F (2011) A study of the urban boundary layer using different urban parameterizations and high-resolution urban canopy parameters with WRF. J Appl Meteorol Climatol 50:1107–1128
Sarkar A, De Ridder K (2011) The urban heat island intensity of Paris: a case study based on a simple urban surface parametrization. Boundary-Layer Meteorol 138:511–520
Sarrat C, Lemonsu A, Masson V, Guedalia D (2006) Impact of urban heat island on regional atmospheric pollution. Atmos Environ 40(10):1743–1758
Sciare J, d'Argouges O, Zhang QJ, Sarda-Estève R, Gaimoz C, Gros V, Beekmann M, Sanchez O (2010) Comparison between simulated and observed chemical composition of fine aerosols in Paris (France) during springtime: contribution of regional versus continental emissions. Atmos Chem Phys 10(24):11987–12004
Shin H, Hong SY (2011) Intercomparison of planetary boundary-layer parametrizations in the WRF model for a single day from CASES-99. Boundary-Layer Meteorol 139:261–281
Skamarock WC, Klemp JB, Dudhia J, Gill DO, Barker DM, Duda MG, Huang XY, Wang W, Powers JG (2008) A description of the Advanced Research WRF version 3. NCAR Technical Note NCAR/TN-475+STR, 113 pp. http://www.mmm.ucar.edu/wrf/users/docs/arw_v3.pdf
Srinivas C, Venkatesan R, Singh AB (2007) Sensitivity of mesoscale simulations of land–sea breeze to boundary layer turbulence parameterization. Atmos Environ 41(12):2534–2548
Steyn DG, Baldi M, Hoff RM (1999) The detection of mixed layer depth and entrainment zone thickness from lidar backscatter profiles. J Atmos Ocean Technol 16(7):953–959
Stull RB (1988) An introduction to boundary layer meteorology. Kluwer, Dordrecht, 666 pp
Svensson G, Holtslag A, Kumar V, Mauritsen T, Steeneveld G, Angevine W, Bazile E, Beljaars A, de Bruijn E, Cheng A, Conangla L, Cuxart J, Ek M, Falk M, Freedman F, Kitagawa H, Larson V, Lock A, Mailhot J, Masson V, Park S, Pleim J, Söderberg S, Weng W, Zampieri M (2011) Evaluation of the diurnal cycle in the atmospheric boundary layer over land as represented by a variety of single-column models: the second GABLS experiment. Boundary-Layer Meteorol 140:177–206
Tombette M, Sportisse B (2007) Aerosol modeling at a regional scale: model-to-data comparison and sensitivity analysis over Greater Paris. Atmos Environ 41(33):6941–6950
Vautard R, Builtjes P, Thunis P, Cuvelier C, Bedogni M, Bessagnet B, Honoré C, Moussiopoulos N, Pirovano G, Schaap M, Stern R, Tarrason L, Wind P (2007) Evaluation and intercomparison of ozone and \(\hbox {PM}_{10}\) simulations by several chemistry transport models over four European cities within the CityDelta project. Atmos Environ 41(1):173–188
Wang ZH, Bou-Zeid E, Au SK, Smith JA (2011) Analyzing the sensitivity of WRF's single-layer urban canopy model to parameter uncertainty using advanced Monte Carlo simulation. J Appl Meteorol Climatol 50:1795–1814
Zhang D, Anthes RA (1982) A high-resolution model of the planetary boundary layer: sensitivity tests and comparisons with SESAME-79 data. J Appl Meteorol 21:1594–1609
Open Access This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
1. CEREA, Joint Research Laboratory École des Ponts ParisTech/EDF R&D, Université Paris-Est, Marne la Vallée Cedex 2, France
2. Laboratoire Atmosphères Milieux Observations Spatiales (LATMOS), Laboratoire Mixte UPMC-UVSQ-CNRS, UMR 8190, Université Paris 6, Paris, France
3. Laboratoire des Sciences du Climat et de l'Environnement (LSCE), Laboratoire Mixte CEA-CNRS-UVSQ, CEA Saclay, Gif-sur-Yvette, France
Kim Y, Sartelet K, Raut JC et al (2013) Boundary-Layer Meteorol 149:105. https://doi.org/10.1007/s10546-013-9838-6. Accepted 4 July 2013; first online 27 July 2013
The Distance Formula
Transcript
Deep in the Amazonian jungle, Carlos lives with his family in a teeny-tiny village. Carlos likes most things about his village, but he has to wake up at the crack of dawn if he wants to be at school on time. It's not so much that his school is so far away, but there's a huge canyon between the village and the school, so Carlos must walk around the canyon to get to his school. To make his journey faster, Carlos has a great idea. But to make sure his idea will work, he'll need to use the Distance Formula.
Take a look at this map. The scale's in yards. Here's the path that Carlos usually walks to go to school, around this side of the canyon, across the bridge, and then along the other side of the canyon. It takes Carlos about 2 hours to walk to school every day. So, what's Carlos' great idea? He wants to build a zip line to go right across the canyon, allowing him to get to school in a fraction of the time! But he doesn't have any rope. What can he do? In a stroke of genius, Carlos decides to use the rope from his mom's clothesline. There are just two problems with this plan: the clothesline is only 350 yards long. Will that be enough? And what will his mom say about him using the rope from the clothesline to build the zip line?
Using the Pythagorean Theorem to Calculate the Distance
To answer the first question, he needs to calculate the distance between these two points. As for Carlos' mom and the missing clothesline? Only time will tell...
We can't help Carlos out with his mom, but we can help him solve his little math problem. To find the distance between any two known points in a coordinate plane, first construct a right triangle. Then, modify the Pythagorean Theorem to solve for the unknown distance. Notice how we replaced 'a' and 'b' with the quantities 'x'-two minus 'x'-one and 'y'-two minus 'y'-one, respectively. Since 'c' is the distance we want to know, we'll now call this variable 'd'.
After taking the square root of both sides of the equation, we're left with the Distance Formula. The location of Carlos' village is at the ordered pair one hundred, one hundred and the location of his school is at the ordered pair two hundred, four hundred. Carlos' home will be point 1 and his school will be point 2. Now, using the known points, we can replace the variables in the expression and solve for the distance. Now that there are no more variables, we can finish this off with PEMDAS to get the distance. To get across the canyon, the zip line only needs to be approximately 316.23 yards... so Carlos has enough rope!
He's really excited to use the zip line for the first time, AND he was even able to sleep two hours longer than usual.
There he goes. Whee! Oh man, now Carlos knows a bit more about the villagers than he wanted.
The distance formula is a formula used to find the distance between two distinct points on a plane. The formula was derived from the Pythagorean theorem, which states that for any right triangle, the square of the hypotenuse is equal to the sum of the squares of the two legs.
Finding the distance between two distinct points on a plane is the same as finding the length of the hypotenuse of a right triangle. From this perspective, the distance formula states that the distance between two distinct points on a plane is equal to the square root of the sum of the squares of the rise and the run.
The distance formula has a number of uses in everyday life. It can be used as a strategy for easy navigation and distance estimation. For example, if you want to estimate the distance between two places on a map, simply get the coordinates of the two places and apply the formula. Or when a pilot wants to know the distance between an incoming plane and his own plane, he can use the radar to find the coordinates of the two planes and then apply the formula.
Use coordinates to prove simple geometric theorems algebraically
CCSS.MATH.CONTENT.HSG.GPE.B.7
Susan Sayfan
The Distance Formula Exercise
Would you like to apply what you have learned? With the exercises for the video The Distance Formula, you can review and practice it.
Explain how to calculate the distance between two points.
The Pythagorean theorem states that the sum of the squares of the legs of any right triangle is the same as the square of the hypotenuse.
In the picture above, the lengths of both legs are given by the difference of the coordinates of the two points.
We don't want to know the squared distance.
To determine the distance between two given points, $(x_1,y_1)$ and $(x_2,y_2)$, in a coordinate plane, we first construct a right triangle.
Then we use the Pythagorean theorem to get
$(x_2-x_1)^2+(y_2-y_1)^2=c^2$,
where $c$ is the length of the hypotenuse.
Because the length of the hypotenuse is the desired distance, we replace it by $d$:
$(x_2-x_1)^2+(y_2-y_1)^2=d^2$.
Lastly, we take the square root of both sides and rearrange to get the distance formula
$d=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$.
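If you want to experiment with the formula yourself, here is a minimal sketch of it in Python; the two sample points are made up purely for illustration.

import math

def distance(x1, y1, x2, y2):
    # square root of the sum of the squared differences of the coordinates
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

print(distance(1, 2, 4, 6))  # 5.0, because sqrt(3^2 + 4^2) = 5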
Find the right distance formula.
Here you see a right triangle with the points $(x_1,y_1)$ and $(x_2,y_2)$.
Use the Pythagorean property:
The sum of the squares of the legs of any right triangle is the same as the square of the hypotenuse.
The distance is the length of the hypotenuse.
Two points, $(x_1,y_1)$ as well as $(x_2,y_2)$, of a right triangle are given.
With the Pythagorean theorem, we get
$(x_2-x_1)^2+(y_2-y_1)^2=c^2$,
where $c$ is the length of the hypotenuse and $|x_2-x_1|$ and $|y_2-y_1|$ are the lengths of the legs.
Calculate the distance from Carlos' village to his school.
Here you see the distance formula for two points in a coordinate plane.
You get the points by just looking at the coordinate plane on the map pictured above.
For Carlos' village:
Draw a line parallel to the $x$-axis passing this point. The $y$-coordinate is the intersection of this line and the $y$-axis.
Draw a line parallel to the $y$-axis passing this point. The $x$-coordinate is the intersection of this line and the $x$-axis.
The coordinate for Carlos' school can be found in a similar fashion.
PEMDAS stands for the order of operations:
Parentheses
Exponents
Multiplication
Division
Addition
Subtraction
The coordinates of Carlos' village as well as his school are:
Home: $(100,100)$
School: $(200,400)$
The distance formula for two points in a coordinate plane is given by
$d=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$,
where, in our situation, we have that:
$x_1=100$
$y_1=100$
$x_2=200$
$y_2=400$
Putting those values into the distance formula above, we get:
$d=\sqrt{(200-100)^2+(400-100)^2}$.
Using PEMDAS, we get
$d=\sqrt{(100)^2+(300)^2}=\sqrt{10000+90000}=\sqrt{100000}$.
We can simplify this expression to get
$d=100\sqrt{10}\approx316.23$.
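You can double-check this result with a couple of lines of Python (a quick sketch using the coordinates read off the map):

import math

d = math.hypot(200 - 100, 400 - 100)  # sqrt(100^2 + 300^2) = sqrt(100000)
print(round(d, 2))                    # 316.23, i.e. 100*sqrt(10)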
Examine the distances of the given points.
Use the distance formula:
Here you see an example for the calculation of the distance of the two points $(10,40)$ and $(40,20)$.
Be careful with the order of the coordinates.
The distance between two points is given as the square root of the sum of the squares of the differences of the coordinates:
Let's practice the calculation of the distance for a few pairs of points:
$(10,20)$ and $(30,40)$ gives
$\begin{array}{rcl} d&=&~\sqrt{(30-10)^2+(40-20)^2}\\ &=&~\sqrt{(20)^2+(20)^2}\\ &=&~\sqrt{800}=20\sqrt2 \end{array}$
Pay attention to the order of the coordinates: $(10,20)$ and $(40,30)$ leads to
$\begin{array}{rcl} d&=&~\sqrt{(40-10)^2+(30-20)^2}\\ &=&~\sqrt{(30)^2+(10)^2}\\ &=&~\sqrt{1000}=10\sqrt{10} \end{array}$
So you see, it's very important to be careful with the order of the coordinates. Those calculations look quite similar, but the result is totally different.
The distance between $(10,30)$ and $(30,30)$ is
$\begin{array}{rcl} d&=&~\sqrt{(30-10)^2+(30-30)^2}\\ &=&~\sqrt{(20)^2}\\ &=&~20 \end{array}$
Lastly, the distance between $(40,40)$ and $(10,90)$ is
$\begin{array}{rcl} d&=&~\sqrt{(10-40)^2+(90-40)^2}\\ &=&~\sqrt{(-30)^2+(50)^2}\\ &=&~\sqrt{3400}=10\sqrt{34} \end{array}$
Connect the ordered pairs with the right formula.
Pay attention to the correct order of the points.
Just check the exponents.
Here you see an example of using the distance formula.
So we have $d=\sqrt{(56-12)^2+(78-34)^2}$.
The distance between two points is the square root of the sum of the squares of the differences of the coordinates:
Let's practice it with a few pairs of points:
$(10,20)$ and $(30,40)$ gives $d=\sqrt{(30-10)^2+(40-20)^2}$.
Pay attention to the order: $(10,20)$ and $(40,30)$ leads to
$d=\sqrt{(40-10)^2+(30-20)^2}$.
It looks quite similar, but it's totally different.
The distance between $(40,20)$ and $(10,50)$ is
$d=\sqrt{(10-40)^2+(50-20)^2}$,
while the distance between $(40,20)$ and $(50,10)$ is
$d=\sqrt{(50-40)^2+(10-20)^2}$.
Determine the distance.
Use the distance formula for two points in a coordinate plane:
$d=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$
If you have to round to the nearest hundredth, you just have to look at the third position after the decimal point: if this digit is less than or equal to $4$, round down; otherwise, round up.
For example, $206.153$ rounds down to $206.15$, while $206.155$ rounds up to $206.16$.
The point representing Carlos' location is $(100,100)$ and the point representing where his friend lives is $(150,300)$.
We use the distance formula for two points in a coordinate plane,
$d=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$,
with the given coordinates above. So we put them into this formula to get
$d=\sqrt{(50)^2+(200)^2}=\sqrt{2500+40000}=\sqrt{42500}$.
We can write this result as
$d=\sqrt{2500\times17}=50\sqrt{17}$,
or as a rounded decimal
$d\approx206.16$.
IMAGE: high-powered detection of genetic effects on DNA methylation using integrated methylation QTL mapping and allele-specific analysis
Yue Fan1,2,
Tauras P. Vilgalys3,
Shiquan Sun2,
Qinke Peng1,
Jenny Tung3,4 &
Xiang Zhou (ORCID: orcid.org/0000-0002-4331-7599)2,5
Genome Biology volume 20, Article number: 220 (2019)
Identifying genetic variants that are associated with methylation variation—an analysis commonly referred to as methylation quantitative trait locus (mQTL) mapping—is important for understanding the epigenetic mechanisms underlying genotype-trait associations. Here, we develop a statistical method, IMAGE, for mQTL mapping in sequencing-based methylation studies. IMAGE properly accounts for the count nature of bisulfite sequencing data and incorporates allele-specific methylation patterns from heterozygous individuals to enable more powerful mQTL discovery. We compare IMAGE with existing approaches through extensive simulation. We also apply IMAGE to analyze two bisulfite sequencing studies, in which IMAGE identifies more mQTL than existing approaches.
DNA methylation is a stable, covalent modification of cytosine residues that, in vertebrates, typically occurs at CpG dinucleotides. DNA methylation also functions as an important epigenetic regulatory mechanism, with known roles in genomic imprinting, X-inactivation, and suppression of transposable element activity [1, 2]. DNA methylation is thus thought to play a key role in responding to the environment and generating trait variation, including variation in disease susceptibility. In support of this idea, methylation levels have been associated with diabetes [3, 4], autoimmune diseases [5,6,7], metabolic disorders [8,9,10], neurological disorders [11, 12], and various forms of cancer [13,14,15,16,17].
Importantly, DNA methylation variation at individual CpG sites often has a strong genetic component [18,19,20,21,22,23,24,25,26,27,28,29]. Family-based and population-based studies have shown that DNA methylation levels are 34% heritable on average in adipose tissue and are 18–20% heritable on average in whole blood, with heritability estimates reaching as high as 97% [21, 24, 26, 30]. Genetic effects on DNA methylation levels can be explained, at least in part, by cis-acting SNPs located close to target CpG sites, where CpG methylation level is associated with the identity of physically linked alleles [23, 31,32,33,34,35]. Indeed, recent methylation quantitative trait loci (mQTL) mapping studies have shown that up to 28% of CpG sites in the human genome are associated with nearby SNPs [23, 26, 31, 32, 36]. Further, cis-mQTL often colocalize with disease-associated loci and cis-expression QTL (cis-eQTL) [26], suggesting that genetic effects on gene expression may be mediated by DNA methylation. Therefore, identifying cis-mQTL is an important step towards understanding the genetic basis of gene regulatory variation and, ultimately, organism-level traits.
Most mQTL mapping studies thus far rely on DNA methylation data generated using array-based platforms [36,37,38]. However, the falling cost of sequencing and the development of high-throughput sequencing-based approaches to measure DNA methylation levels makes mQTL mapping using sequencing data increasingly feasible. Sequencing-based approaches offer several advantages. They can extend the breadth of DNA methylation analysis to the full genome (e.g., via whole genome bisulfite sequencing [39]), increase the flexibility to target specific regions of interest (e.g., via capture methods [40]), improve the representation of genomic regions or regulatory elements that are poorly represented on current array platforms (e.g., via reduced representation bisulfite sequencing [41, 42]), and distinguish 5-hmc modifications from 5-mc modifications (e.g., via TET-assisted pyridine borane sequencing [43] or TAB-seq approaches [44]). Further, unlike arrays, which are largely limited to studies in humans, sequencing-based approaches can be applied to any species [45,46,47,48]. Therefore, sequencing-based approaches have become the workhorse of major initiatives like the 1001 Genomes Project in the plant model system Arabidopsis thaliana [49, 50]. Importantly, sequencing techniques also facilitate the estimation of allele-specific methylation levels, which should greatly improve the power of mQTL mapping approaches (as allele-specific expression estimates have been shown to do for eQTL mapping: [51, 52]). Early attempts to perform mQTL mapping with bisulfite sequencing data have yielded promising results [35, 49, 53]. However, existing mQTL mapping methods are designed with array data in mind [37, 38]. To maximize power, mQTL mapping using sequencing data requires new statistical method development that can properly account for two of its distinctive features.
First, methylation data collected in sequencing studies are counts, not continuous representations like those produced by arrays. Specifically, methylation-level estimates at a given cytosine base are based on both the total read count at the site and the subset of those reads that are unconverted by sodium bisulfite (or other processes [43]). Previous mQTL studies have dealt with these data by first computing a ratio between the methylated count and the total count, and then treating this ratio as an estimate of the true methylation level [35, 49]. However, the count nature of the raw data means that the mean and variance of the computed ratio are highly interdependent. This relationship is not captured by previously deployed linear regression methods, which likely leads to loss of power. Indeed, similar losses of power are well documented for differential methylation analysis [40] and differential expression analysis of RNA-seq data [54,55,56,57]. To overcome this challenge, statistical methods for sequencing-based differential methylation analysis now adapt over-dispersed count models, including beta-binomial models [58,59,60,61,62] and binomial mixed models [40, 63, 64], to properly model the mean-variance relationship and potential over-dispersion. In differential methylation analysis, these approaches can substantially improve power compared with normalization-based approaches [30, 65, 66]. Because mQTL mapping is conceptually similar and can be effectively viewed as genotype-based differential methylation analysis, extending over-dispersed binomial models to mQTL mapping is a promising approach.
Second, sequencing-based techniques are capable of measuring DNA methylation levels in heterozygotes in an allele-specific fashion (i.e., allele-specific methylation, ASM). When ASM estimates support differences in methylation levels between the two alleles carried by heterozygotes, they can be used to increase the power of mapping analysis. Indeed, assuming that additive genetic effects dominate, true cis-acting genetic differences in DNA methylation are expected to lead to both (i) differential methylation by genotype across all three genotypes at a biallelic site and (ii) ASM in heterozygotes. These two types of evidence are only available in sequencing studies, since ASM is not generally detectable when DNA methylation is profiled using arrays. Notably, previous methods for detecting genotype-dependent ASM suggest that it is common across tissue types and species, is more often explained by cis-acting variants than trans-effects, and is enriched near genes that also display patterns of allele-specific expression [67,68,69,70,71,72,73,74,75]. Thus, integrating ASM analysis into mQTL mapping analyses should also contribute to understanding the basis of cis-regulatory effects on gene expression. There is strong precedent for such a combined strategy in other omics studies. For example, the methods implemented in TreCASE and WASP can integrate allele-specific expression information to greatly enhance the power of eQTL mapping [51, 76,77,78], and the software RASQUAL integrates allele-specific patterns with individual-level differences to facilitate QTL mapping of chromatin accessibility and ChIP-seq data [79]. However, to our knowledge, no method currently exists for integrating ASM with mQTL mapping in sequencing-based studies of DNA methylation.
Here, we develop a new statistical method for mQTL mapping in bisulfite sequencing studies that both accounts for the count-based nature of the data and takes advantage of ASM analysis to improve power. We refer to our method as IMAGE (Integrative Methylation Association with GEnotypes), which is implemented as an open-source R package (www.xzlab.org/software.html). IMAGE jointly accounts for both allele-specific methylation information from heterozygous individuals and non-allele-specific methylation information across all individuals, enabling powerful ASM-assisted mQTL mapping. In addition, IMAGE relies on an over-dispersed binomial mixed model to directly model count data, which naturally accounts for sample non-independence resulting from individual relatedness, population stratification, or batch effects that are commonly observed in sequencing studies [40, 57]. We develop a penalized quasi-likelihood (PQL) approximation-based algorithm [64, 80, 81] to facilitate scalable model inference. We illustrate the effectiveness of IMAGE and compare it with existing approaches in simulations. We also apply IMAGE to map mQTLs in two bisulfite sequencing studies from wild baboons and wild wolves.
Method overview and simulation design
IMAGE is described in detail in the "Materials and methods" section, with additional information provided in Additional file 1: Supplementary Text. Briefly, IMAGE combines the benefits of both standard mQTL mapping and ASM analysis by jointly modeling non-allele-specific (i.e., per-individual) methylation information across all individuals together with allele-specific methylation information (i.e., per-allele) from heterozygous individuals. This approach enables cis-mQTL mapping when the heterozygous SNP and the CpG site of interest are captured either on the same sequencing read or with known phasing information (Fig. 1). By combining both allele-specific and non-allele-specific information, IMAGE improves power over traditional mapping approaches that use non-allele-specific information alone. In addition, IMAGE relies on a binomial mixed model to directly model count data from bisulfite sequencing and naturally accounts for over-dispersion as well as sample non-independence. IMAGE uses a penalized quasi-likelihood-based algorithm for scalable inference and is implemented in an open-source R package, freely available at http://www.xzlab.org/software.html.
Schematic of ASM-assisted mQTL mapping. The top three panels show bisulfite sequencing data mapped to a CpG site where methylation level is associated with a nearby SNP, in an AA homozygote (left), an AT heterozygote (middle), and a TT homozygote (right). Note that, while illustrated in the panels, the allele-level methylation information in the two homozygotes is not observed. The bottom three panels depict three methods to detect SNP-CpG association: the standard mQTL mapping approach (left) uses non-allele-specific information from all three individuals to detect an association, the standard ASM analysis (middle) uses allele-level information from the heterozygote only, and the joint analysis approach (right) presented here uses both types of information to achieve a gain in power. mQTL methylation quantitative trait loci, ASM allele-specific methylation
We performed simulations to examine the effectiveness of IMAGE and compare it with other approaches. In each simulation, we started with real genotypes for n= 50–150 individuals [82] and examined power and accuracy over a range of parameters: the background heritability h2, the over-dispersion variance σ2, the SNP minor allele frequency MAF, the expected per-site total read TR across individuals, the average methylation ratio π0, the SNP effect size PVE, the sample size n, and the proportion of total environmental variance that is shared between two alleles ρ (a detailed explanation of these parameters is available in the "Materials and methods" section). In the simulations, we examined the role of each of these eight modeling parameters in determining mQTL mapping power. To do so, we first created a baseline simulation scenario where we set the simulation parameters to typical values inferred from real data [40] ("Materials and methods" section). Afterwards, we changed one parameter at a time to create different simulation scenarios and examined the influence of each parameter on method performance. In each scenario, we simulated 10,000 SNP-CpG pairs. For 9000 pairs, the methylation level at the CpG site was independent of the SNP genotype, while for the remaining 1000 pairs, CpG site methylation was associated with the SNP genotype, such that genotype explained a fixed proportion of methylation levels equivalent to the parameter PVE. After simulation, we discarded the methylation measurements for CpG sites on non-informative individuals (i.e., those with total read counts of zero). We then applied IMAGE and five other approaches to analyze each SNP-CpG pair separately.
The five other approaches perform mQTL mapping using different information: (1) IMAGE-I, a special case of IMAGE, which uses only non-allele-specific, individual-level information across all individuals; (2) IMAGE-A, another special case of IMAGE, which uses only allele-specific information from heterozygous individuals; (3) MACAU [40, 57], which uses a binomial mixed model to perform mQTL mapping using only non-allele-specific information; (4) GEMMA [83,84,85], which uses a linear mixed model to perform mQTL mapping using only non-allele-specific information; and (5) BB, which implements a beta-binomial model [40] to perform mQTL mapping using only non-allele-specific information. Note that, with the exception of IMAGE and IMAGE-A, all methods perform mQTL mapping using only non-allele-specific information. In addition, with the sole exception of GEMMA, all methods model counts directly. For GEMMA, we used normalized data in the form of M values for analysis, following the previous literature [40, 57]. We performed 10 simulation replicates (each consisting of 10,000 SNP-CpG pairs) for each scenario and computed power based on a known false discovery rate (FDR) for each scenario by combining simulation replicates.
Overall, the simulation results show that IMAGE outperforms all other methods across all tested parameters (Fig. 2 and Additional file 2: Figure S1). For example, in the baseline simulation scenario, at an FDR of 0.05, IMAGE reaches a power of 57.15% in a sample size of 100 individuals. IMAGE-I, IMAGE-A, MACAU, GEMMA, and BB reach a power of 7.55%, 10.27%, 7.49%, 2.25%, and 6.79%, respectively. The ranking of different methods is not sensitive to different FDR cutoffs. For example, at an FDR of 0.1, the power of IMAGE is 68.78%, while the power of IMAGE-I, IMAGE-A, MACAU, GEMMA, and BB is 14.98%, 24.35%, 13.64%, 2.84%, and 15.03%, respectively. The superior performance of IMAGE suggests that incorporating ASM information into mQTL mapping can greatly enhance power.
IMAGE achieves higher power to detect mQTL across various simulation settings. Power is measured by number of true mQTL detected at a false discovery rate (FDR) of 0.05. Each simulation setting is based on 10 simulation replicates, each including 10,000 simulated SNP-CpG pairs, 10% of which represent true mQTL. a We vary h2, the background heritability, to be either 0, 0.3, or 0.6, while maintaining other parameters at baseline. b We vary ρ, the proportion of common environmental variance, to be either 0, 0.3, or 0.9, while maintaining other parameters at baseline. The middle panel in a and the left panel in b correspond to the baseline simulation setting. Increasing both h2 and ρ, which capture genetic and common environmental background effects, respectively, results in increased power for methods that use ASM information (IMAGE and IMAGE-A), but losses in power for methods that do not use ASM information (IMAGE-I, MACAU, GEMMA, BB). FDR false discovery rate
Among the eight parameters we examined, six have similar effects on power across IMAGE and the five other models we compared. For example, the power of all methods increases with larger sample size n (Additional file 2: Figure S1A), larger genetic effect size PVE (Additional file 2: Figure S1B), larger minor allele frequency MAF (Additional file 2: Figure S1C), larger read depth TR (Additional file 2: Figure S1D), and larger over-dispersion variance σ2, which implicitly increases the genetic effect size PVE (Additional file 2: Figure S1E). In addition, the power of all methods is the highest for CpG sites with intermediate methylation level π0, but reduced for both hypomethylated and hypermethylated sites (Additional file 2: Figure S1F). The power dependence on π0 is presumably because higher methylation variance in the middle range of π0 leads to higher power.
Careful examination of the relative performance of different methods in different scenarios yields additional insights. First, among the mQTL mapping methods, we found that count-based approaches (IMAGE-I, MACAU, BB) often outperform a normalized data-based approach (GEMMA). Such performance differences become more apparent when sample size n is small (Additional file 2: Figure S1A), methylation level π0 is either low or high (Additional file 2: Figure S1F), or mean per-site read depth TR is low (Additional file 2: Figure S1D). For example, when the mean total read TR = 10, the power of IMAGE-I, MACAU, and BB is 5.8%, 4.56%, and 5.33%, respectively (n = 100), while the power of GEMMA is only 1.01%. When TR increases to 30, the power of IMAGE-I, MACAU, and BB becomes 15.25%, 15.32%, and 14.55%, respectively, while the power of GEMMA remains low, at 6.14%. The superior performance of count-based methods is consistent with previous observations [40, 57], suggesting that modeling sequencing data in the original count form has added benefits for mQTL mapping. For DNA methylation levels, this advantage may arise in part because uncertainty in DNA methylation-level estimates is more accurately modeled in the count data than in normalized ratios. For example, a methylation level of one (completely hypermethylated) is strongly supported for a site-sample combination where read depth is very high, but weakly supported for combinations where read depth is low. The count-based methods effectively capture this distinction, which is lost in conversion to a single ratio.
Second, ASM-based approaches (IMAGE and IMAGE-A) often outperform mQTL mapping approaches that only use non-allele-specific data. This result holds even for IMAGE-A, even though it only models data for heterozygotes at nearby SNPs (and hence, uses only a subset of the data: 42% of the full set of simulated individuals on average). The generally higher power of ASM analysis likely stems from the fact that ASM methods control for both environmental and trans-acting genetic background effects (for each heterozygote, both alleles reside in the same individual, providing a natural internal control). Our simulations suggest that there are two important parameters that influence the relative power of ASM analysis and mQTL mapping. The first important parameter is background heritability, h2. Increased background heritability can reduce the performance of mQTL mapping methods, as increased confounding from polygenic effects of other SNPs likely increases the difficulty of identifying individual SNP associations [40, 57]. For example, when h2 = 0, the power of IMAGE-I, MACAU, GEMMA, and BB is 13.57%, 11.62%, 2.69%, and 13.88%, respectively. When h2 increases to 0.6, however, the power of IMAGE-I, MACAU, GEMMA, and BB reduces to 6.48%, 7.05%, 1.50%, and 5.92%, respectively. In contrast, ASM analysis relies on a model that explicitly accounts for the heritable component that arises from genetic background effects, and thus achieves relatively stable performance. For example, when h2 = 0, the power of IMAGE and IMAGE-A is 57.48% and 10.30%, respectively. When h2 increases to 0.6, the power of IMAGE and IMAGE-A actually increases to 63.07% and 23.09%, respectively. This observation is consistent with the fact that the two alleles modeled in ASM, for each individual, share an identical genetic background that becomes easier to control for as its contribution to DNA methylation increases (i.e., as h2 increases). Thus, IMAGE-I outperforms IMAGE-A when background heritability is zero (h2 = 0), but performs worse when background heritability is moderate or high (h2 = 0.3 or 0.6; Fig. 2a).
The second important parameter is the ratio parameter ρ, which represents the relative contribution of shared/common environmental effects (i.e., the "trans" acting environment) and also influences the relative power of ASM vs mQTL. For mQTL methods, increasing ρ necessarily increases the contribution of common environmental noise shared between the two alleles. Common environmental noise is not explicitly accounted for by mQTL models, thus leading to a reduction in power. For example, when ρ = 0, IMAGE-I, MACAU, GEMMA, and BB detect 7.55%, 7.49%, 2.25%, and 6.79% of true effects, respectively. When ρ increases to 0.9, the power of IMAGE-I, MACAU, GEMMA, and BB reduces to 3.50%, 3.44%, 1.67%, and 3.57%, respectively. In contrast, ASM analysis explicitly accounts for both common and independent environmental background effects, again because it measures DNA methylation in the two alleles in the same individual. ASM methods thus achieve better, not worse, performance with higher values of ρ. For example, when ρ = 0, the power of IMAGE and IMAGE-A is 57.15% and 10.27%, respectively. When ρ increases to 0.9, the power of IMAGE and IMAGE-A becomes 84.15% and 67.55%, respectively. Consequently, while mQTL methods have similar power as ASM when ρ is small, ASM can outperform mQTL when ρ is large (Fig. 2b).
In addition, we note that IMAGE can estimate FDR reasonably accurately by constructing an empirical null via permutations. In particular, IMAGE produces either calibrated or slightly conservative FDR estimates regardless of the values of h2 (Additional file 2: Figure S2A), ρ (Additional file 2: Figure S2B), n (Additional file 2: Figure S2C), genetic effect size PVE (Additional file 2: Figure S2D), MAF (Additional file 2: Figure S2E), average read counts per site TR (Additional file 2: Figure S2F), over-dispersion variance σ2 (Additional file 2: Figure S2G), or average methylation ratio π0 (Additional file 2: Figure S2H).
Finally, we note that while we set PVE = 0.10 and h2 = 0.30 in the baseline simulations to capture realistic effect sizes and background heritability across all SNP-CpG pairs genome-wide, reasonable data filtering decisions will often increase mean PVE and h2 among SNP-CpG pairs tested in real data applications. For example, in the wolf and baboon data sets analyzed below, the median PVE was approximately 0.15 and the median h2 estimate was near 0.5. For direct comparability, we therefore also created a simulation scenario in which we set PVE to 0.15 and h2 to 0.50 (Additional file 2: Figure S1G). Notably, the relative power of different methods in this setting largely recapitulates our observations in the real data applications (see below).
mQTL mapping in wild baboons
We applied our method to analyze a reduced representation bisulfite sequencing data set collected on 67 baboons from the Amboseli ecosystem of Kenya [40, 45]. Detailed data description and processing steps are provided in the "Materials and methods" section, with an illustrative processing diagram shown in Additional file 2: Figure S3. Briefly, we extracted 49,196 SNP-CpG pairs from the bisulfite sequencing data, which consist of 13,753 unique SNPs and 45,210 unique CpG sites. We applied IMAGE together with the other five approaches described above to analyze each SNP-CpG pair individually. We performed permutations to estimate FDR for each method, and we report results based on a fixed FDR cutoff.
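To make the permutation-based calibration concrete, the sketch below shows one standard way to estimate an empirical FDR at a given p-value threshold from permuted data; the function and variable names are ours, and the procedure is a simplified stand-in for the exact calibration used here rather than a description of it.

import numpy as np

def empirical_fdr(observed_p, permuted_p, n_perm, threshold):
    # observed_p: p-values from the real data, one per SNP-CpG pair
    # permuted_p: pooled p-values from n_perm analyses of permuted genotypes
    expected_false = np.sum(permuted_p <= threshold) / n_perm   # expected false discoveries per data set
    observed_hits = max(np.sum(observed_p <= threshold), 1)     # observed discoveries at this threshold
    return min(expected_false / observed_hits, 1.0)

# hypothetical usage with uniform (null) p-values
rng = np.random.default_rng(0)
obs = rng.uniform(size=10000)
perm = rng.uniform(size=10 * 10000)   # 10 permutation replicates
print(empirical_fdr(obs, perm, n_perm=10, threshold=0.001))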
Consistent with our simulations, our method achieves higher power compared with other methods in the baboon data set (Fig. 3a). For example, at an empirical FDR of 5%, IMAGE detected 7043 associated SNP-CpG pairs, which is 45% more than that detected by the next best method (IMAGE-A, which detected 4855 pairs at a 5% FDR). IMAGE-I, MACAU, GEMMA, and BB detected 3585, 3024, 2629 and 3259 pairs, respectively. Also consistent with the simulations, the higher power of IMAGE compared to other methods is robust with respect to different FDR cutoffs (Fig. 3a). We illustrate a few example sites that were only detected by IMAGE in Additional file 2: Figure S4. For these sites, methylation levels measured in the heterozygotes are noisy and often indistinguishable from at least one type of homozygote (often because total read counts are unevenly distributed across alleles). However, by separating methylation levels in heterozygotes into the contribution from each individual allele and modeling ASM information together with non-allele-specific information, IMAGE remains capable of identifying mQTLs in these sites. In addition, consistent with simulations, we also observed that our method could detect more associated SNP-CpG pairs with increasing MAF (Additional file 2: Figure S5A), increasing read depth TR (Additional file 2: Figure S5B), increasing sample size (Additional file 2: Figure S5C), or at intermediate methylation levels (Additional file 2: Figure S5D).
mQTL mapping results in the baboon RRBS data. a IMAGE identified more mQTL than the other five methods across a range of empirical FDR thresholds. b IMAGE identifies more consistent associations than the other methods in the subset analysis. Here, we randomly split individuals into two approximately equal-sized subsets and analyzed the two subsets separately using each method. We then counted the number of overlapping mQTL identified in both subsets. The overlap ratio (y-axis) is plotted against the percentage of top mQTL ranked by statistical evidence for a SNP-CpG methylation association in each method (x-axis). c Upper panel: log2 odds ratio of detecting associated SNP-CpG pairs, together with the 95% CI, is computed for CpG sites residing in different annotated genomic regions. CpG sites with IMAGE-identified mQTL are enriched in open sea regions (p value = 0.0106) and depleted in CpG islands (p value = 1.056 × 10−9). Bottom panel: all analyzed CpG sites were annotated to genomic regions based on their relation to the nearest CpG island. CpG islands were annotated based on the UCSC Genome Browser (average length = 672 bp in the data; min = 201 bp; max = 15,960 bp). Shore is the flanking region of CpG islands covering 0–2000 bp distant from the CpG island. Shelf is the region flanking island shores covering 2000–4000 bp distant from the CpG island. d A higher percentage of CpG sites are directly disrupted by the SNP in mQTL pairs compared to by chance alone (horizontal dashed line), and more so than in non-mQTL pairs (p value < 2.2 × 10−16). Such enrichment decays with increased FDR thresholds. *p < 0.05, **p < 0.01
To validate the mQTLs we identified, we randomly split the sample into two approximately equal-sized subsets (one with 34 individuals and the other with 33 individuals) and examined the consistency of the SNP-CpG pairs detected in the two subsets. We removed IMAGE-A from this analysis as it requires at least five heterozygous individuals, which is no longer satisfied for many SNP-CpG pairs in each of the two subsets. For the remaining methods, we found that IMAGE detects more consistent SNP-CpG pairs between the two subsets than the other approaches (Fig. 3b). For example, among the top 5% (n = 2511) associated SNP-CpG pairs based on IMAGE, 53.8% of them were identified in both subsets. In contrast, among the top 5% (n = 2511) associated SNP-CpG pairs based on IMAGE-I, MACAU, GEMMA, and BB, 35.84%, 35.12%, 33.92%, and 37.64% overlapped between the two subsets. The greater consistency of results from IMAGE thus provides convergent support for its increased power.
Next, we assessed the set of detected SNP-CpG associations by performing functional enrichment analysis to compare our findings against published results (Fig. 3c). Here, we refer to the CpG sites with associated mQTL as mCpG sites. We examined whether the set of mCpG sites were enriched in CpG islands, CpG island shores, CpG island shelves, or genomic "open sea." To do so, we obtained functional genomic annotation information from the UCSC Genome Browser for the baboon genome, Panu2.0, and relied on the same criterion as [86] to annotate genomic regions (details in the "Materials and methods" section). For each annotated category, we then computed the proportion of mCpG sites in the annotated regions and contrasted it to the proportion of non-mCpG sites analyzed in our original mQTL mapping analysis. We found that mCpG sites are significantly enriched in open seas compared to non-mCpG sites (69.74% vs 66.08%; Fisher's exact test, p value = 0.0106) but underrepresented in CpG islands (11.16% vs 14.33%; p value = 1.056 × 10−9). The results are consistent with previous observations [87, 88], partly because CpG islands are often enriched in evolutionarily conserved promoter regions [89,90,91] that harbor fewer regulatory genetic variants and partly because power to detect mQTL is lower in hypomethylated regions [92]. The results are qualitatively consistent across sites with different mean CpG methylation levels, although do not reach statistical significance in all bins likely due to the smaller number of sites and the resulting lower power in each bin (Additional file 2: Figure S6). Importantly, despite the higher number of mCpG sites detected by IMAGE, the evidence for both enrichment in open sea and underrepresentation in CpG islands is also stronger in the IMAGE analysis than for other methods (Additional file 3: Table S1).
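For reference, the island/open-sea enrichment comparison above is a standard 2 × 2 Fisher's exact test on mCpG versus non-mCpG counts. The sketch below illustrates the layout of such a test with hypothetical counts, not the actual numbers from the baboon analysis.

from scipy import stats

# rows: mCpG sites vs non-mCpG sites; columns: inside a CpG island vs elsewhere
# the counts below are hypothetical and only illustrate the structure of the test
table = [[786, 6257],     # mCpG:     in island, not in island
         [6045, 36108]]   # non-mCpG: in island, not in island
odds_ratio, p_value = stats.fisher_exact(table)
print(odds_ratio, p_value)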
Finally, we counted the percentage of SNP-CpG pairs for which the SNP directly resides in the CpG sequence, abolishing the CpG site and therefore resulting in an entirely unmethylated alternate allele [69, 93]. These sites, by definition, should exhibit mQTL and ASM. Four hundred three sites in our data set were disrupted by SNPs, and 59.6% of them (n = 240) were indeed identified as significant mCpG sites. For 95.70% of those we did not detect (n = 156), the non-disrupted CpG was also hypomethylated in our sample (< 10% methylation level), which would make it impossible to detect an mQTL (i.e., because both disrupted and non-disrupted alleles are hypomethylated). CpG sites disrupted by SNPs accounted for 3.72% of significant mCpG sites (compared to the 0.89% expected by chance), but only 0.43% of non-mCpG sites, in support of the accuracy of our mQTL mapping approach (Fisher's exact test p value < 2.2 × 10−16). In addition, as expected, the percentage of significant mCpG sites accounted for by CpG sites disrupted by SNPs gradually decreases with less stringent FDR cutoffs (Fig. 3d). Importantly, IMAGE also outperforms the other five methods on this metric (Additional file 3: Table S2).
mQTL analysis in wild wolves
Finally, we applied IMAGE to analyze a second RRBS data set collected on 63 gray wolves from Yellowstone National Park [46, 94]. We applied the same data processing procedure described above for baboons, followed by mQTL mapping. In total, we extracted 279,223 SNP-CpG pairs from the bisulfite sequencing data, which consists of 77,039 unique SNPs and 242,784 unique CpG sites. IMAGE again achieved higher power compared with the other methods (Fig. 4a). At an empirical FDR of 5%, IMAGE detected 34,779 significantly associated SNP-CpG pairs, which is 50% more than that detected by the next best method (IMAGE-A), and 262% more than the other four methods (Fig. 4a and Additional file 2: Figure S7). As in the baboons, subset analysis confirmed that IMAGE detects more consistent SNP-CpG pairs than the other approaches (Fig. 4b). For example, among the top 5% (n = 14,091) associated SNP-CpG pairs based on IMAGE analysis, 53.8% of them are consistent between the two subsets, compared to 20.5–30.7% for the other four methods tested. Consistent with results from simulations and the baboon data, we also observed that our method could detect more associated SNP-CpG pairs with intermediate methylation levels, increasing MAF, increasing read depth, and increasing sample size (Additional file 2: Figure S5).
mQTL mapping results in the wolf RRBS data. Methods for analysis include IMAGE (red), IMAGE-I (orange), IMAGE-A (green), MACAU (pink), GEMMA (brown), and BB (blue). a IMAGE identified more associated SNP-CpG pairs than the other five methods across a range of empirical FDRs constructed by permutation. b IMAGE identifies more consistent associations than the other methods in the subset analysis. Here, we randomly split individuals into two approximately equal-sized subsets and applied methods to analyze the two subsets separately. We count the number of overlapping associations between the top SNP-CpG pairs in the two subsets. The overlap ratio (y-axis) is plotted against the percentage of top SNP-CpG pairs (x-axis). c Upper panel: log2 odds ratio of detecting associated SNP-CpG pairs, together with the 95% CI, is computed for CpG sites residing in different annotated genomic regions. CpG sites associated with SNPs identified by IMAGE are enriched in open sea regions (p value < 2.2 × 10−16) and depleted in CpG island regions (p value < 2.2 × 10−16). Shores are defined as the 2000-bp regions flanking CpG islands; shelves are defined as the 2000-bp regions flanking the island shores (2000–4000 bp from CpG islands). Bottom panel: all analyzed CpG sites were annotated to genomic regions based on their relation to the nearest CpG island. CpG islands were annotated based on the UCSC Genome Browser (average length = 830 bp in the data; min = 201 bp; max = 322,257 bp). Shore is the flanking region of CpG islands covering 0–2000 bp distant from the CpG island. Shelf is the region flanking island shores covering 2000–4000 bp distant from the CpG island. d A higher percentage of CpG sites are directly disrupted by the SNP in the mQTL pairs compared to by chance alone (horizontal dashed line), and more so than in non-mQTL pairs (p value < 2.2 × 10−16). Such enrichment decays with increased FDR thresholds. *p < 0.05, **p < 0.01
Finally, consistent with the baboon results, mCpG sites in the wolves were significantly enriched in open sea compared to non-mCpG sites (31.77% vs 26.31%; p value <2.2 × 10−16) and were underrepresented in CpG islands (30.17% vs 37.43%; p value < 2.2 × 10−16) (Fig. 4c). In the wolves, we also observed significant (albeit much weaker) enrichment of mCpG sites in shelf regions (12.49% vs 11.63%; p value = 9.001 × 10−5) and shore regions (25.57% vs 24.64%; p value = 5.890 × 10−3). The higher frequency of mCpG sites in CpG island shelves and shores is consistent with previous studies [87, 88] and likely reflects greater power to detect enrichment in the wolf data set, which yields a larger number of analyzable SNP-CpG pairs than in the baboons (m = 242,784 in wolf vs m = 45,210 in baboon). The enrichment in open sea and underrepresentation of mCpG sites in CpG islands are robust regardless of whether we stratify sites based on mean methylation levels, although the shelf/shore results are noisier (Additional file 2: Figure S8). Again, we found that enrichment results were stronger in the IMAGE analysis than when using other methods (Additional file 3: Table S3) and that mCpG sites were more likely to be disrupted by their associated SNPs than non-mCpG sites (3.66% vs 0.18%; p value < 2.2 × 10−16) (Fig. 4d; see also Additional file 3: Table S4).
Here, we present IMAGE, a new statistical method with a scalable computational algorithm, for mQTL mapping in bisulfite sequencing studies. IMAGE relies on a binomial mixed model to account for the count nature of over-dispersed bisulfite sequencing data, models multiple sources of methylation-level variance, and incorporates allele-specific methylation patterns from heterozygous individuals into mQTL mapping. Both simulations and two real data sets support its increased power over other commonly used methods.
A key feature of our method is its ability to incorporate allele-specific methylation information into mQTL mapping. In RNA sequencing studies, it has been well documented that incorporating ASE information can greatly improve the power of eQTL mapping [51, 76,77,78]. Our results confirm that this observation generalizes to mQTL mapping and provides substantial benefits over approaches that cannot or do not use allele-specific data. Notably, these benefits are not limited to the RRBS data we examined here: IMAGE can also be applied to analyze data generated via whole genome bisulfite sequencing (WGBS) [39] or by newer approaches that distinguish 5-hmc modifications from 5-mc modifications [43, 44]. Doing so would greatly facilitate detection of methylation-associated genetic variants genome-wide, including variants associated with different types of methylation marks.
Notably, although secondary to the methods advance itself, our real data applications show that mQTL mapping can be successfully executed using bisulfite sequencing data alone, in the absence of independently generated genotype data. Specifically, we used the same bisulfite sequencing data set to both extract methylation measurements and call SNP genotypes. Our approach dovetails with previous observations that accurate genotyping data can be obtained from RNA sequencing data [95], bisulfite sequencing data [78], or ChIP sequencing data [96], which simultaneously reduces experimental cost and increases the utility of different sequencing data types. Because of these benefits, molecular QTL mapping without separate DNA sequencing or genotyping is gaining popularity [97]. For example, a recent study performed eQTL mapping and ASE analysis using RNA sequencing alone and demonstrated that this strategy achieves approximately 50% power compared to traditional eQTL mapping strategies that rely on independently derived genotype data, even though it only uses the 12.66% of SNPs represented in blood-derived RNA-seq reads [45]. Here, we also show that genotyping and phenotyping from the same data set can facilitate well-powered mQTL mapping. Notably, unlike RNA-seq data, because allele-specific methylation information is represented as the ratio between methylated reads and total reads mapped to the same allele, our approach is also less likely to be affected by allele-specific mapping biases (mitigating another argument for generating independent genotype data). Thus, our mQTL mapping approach has the potential to both increase the utility and applicability of functional genomic data types and improve accessibility of this type of analysis across species.
Our method is not without limitations. For example, to enable ASM-assisted mQTL mapping, our method makes a key modeling assumption that the allelic effect size estimated from heterozygotes is equivalent to the genotype effect size estimated from mQTL mapping across all genotype classes. This assumption is generally satisfied for cis genetic effects when the SNP is close to the CpG site [98], and is shared, for gene expression phenotypes, with ASE-assisted eQTL mapping methods (e.g., TreCASE and WASP [51, 52]). However, in rare occasions, the equal effect size assumption may be violated. For example, if ASM arises because of genomic imprinting instead of sequence variation, the allelic effect size may be much smaller than the mQTL effect size obtained across all individuals. Such a violation would lead IMAGE to lose power relative to classical mQTL mapping approaches. Notably, imprinted regions are quite rare in vertebrate genomes (less than 1% of genes are imprinted) [99]. However, excluding imprinted loci prior to IMAGE mapping or substituting the IMAGE-I approach for these loci may slightly improve performance. Additionally, in unphased data, an important limitation of IMAGE is that it can only be used to analyze adjacent SNP-CpG pairs that are covered by the same sequencing reads. Analyzing only adjacent SNP-CpG pairs can limit the discovery of mQTLs. Therefore, it would be important to extend IMAGE to analyze distant SNP-CpG pairs in unphased data, using, for example, strategies presented in [100]. Certainly, if SNP data can be phased, IMAGE can also be applied to analyze SNP-CpG pairs that are separated by longer distances. In principle, using phased data could improve mQTL mapping power even further, if physically linked CpG sites display consistent ASM. Because the baboon and wolf data we analyzed here are not associated with an extensive genetic reference panel, we did not attempt to extend our analysis to phased data. Nevertheless, exploring the benefits of phased data or extending IMAGE to analyzing distant SNP-CpG pairs in unphased data is an important future direction.
Another limitation of IMAGE is that type I error may not be well controlled when methylation background heritability is high (> 0.6, Additional file 3: Table S5), when the sample size is small (< 100, Additional file 3: Table S6), or when the genotype minor allele frequency is low (< 0.1, Additional file 3: Table S7). As a result, we recommend calibrating the false discovery rate against a permutation-derived empirical null, as we have done here (we note that calibrating against permutations has become an increasingly common approach in functional genomic mapping studies in any case [101, 102]). Finally, while our method is reasonably efficient and can be readily applied to analyze hundreds of individuals and tens of thousands of SNP-CpG pairs (Table 1), new algorithms will be needed to adapt IMAGE to data sets that are orders of magnitude larger.
Table 1 Computational time for analyzing differently sized data sets, for count-based mQTL mapping methods. Computing time is based on analysis of 100,000 SNP-CpG pairs with baseline simulation parameters and varying sample size, using a single thread on a Xeon E5-2683 2.00-GHz processor
Nevertheless, in its current form, IMAGE is well-suited to analyzing sequencing-based DNA methylation data sets of the size and scale typically generated in recent studies [103]. Thus, it can be flexibly deployed to investigate the genetic architecture of gene regulatory variation, the relative role of genes and the environment in shaping the epigenome, or the mediating role of DNA methylation in linking environmental conditions to downstream phenotypes, including human disease (e.g., via Mendelian randomization or related approaches [104, 105]).
Method overview
Both mQTL mapping and ASM analysis examine one CpG site-SNP pair at a time to identify SNPs associated with DNA methylation levels. However, these two approaches rely on different information to model the genotype-DNA methylation-level relationship. Specifically, mQTL mapping focuses on modeling the methylated read counts and total read counts at the individual level across all samples, without differentiating between the contributions from the two alleles contained within each individual. In contrast, ASM analysis focuses on modeling methylated read counts and total read counts in an allele-specific fashion, restricting it to heterozygotes for the SNP of interest (otherwise, the contributions of each allele cannot be decoupled). mQTL mapping has the benefit of using the entire sample, not just heterozygotes. In contrast, ASM has the benefit of internal control, since both alleles within each heterozygote experience the same genetic and environmental background.
To take advantage of both approaches, IMAGE independently models each CpG-SNP site pair. For each individual measured at a CpG-SNP pair, we denote yi and ri as the methylated read count and total read count for the ith individual (combined across alleles), for i = 1, ⋯, n. We denote the corresponding methylated and total read counts mapped to each of the two alleles of the ith individual as yil and ril, for l = 1 or 2. Thus, yi = yi1 + yi2 and ri = ri1 + ri2. Note that yil and ril are only observed in heterozygotes, so are treated as missing data in homozygotes (more details below). We then model the methylated read counts for each allele as a function of the total read counts for the same allele using a binomial model:
$$ {y}_{il}\sim Bin\left({r}_{il},{\pi}_{il}\right), $$
where πil is the true methylation level for the lth allele in the ith individual. We further model the logit-transformed methylation proportion πil as a function of allele genotype:
$$ {\lambda}_{il}= logit\left({\pi}_{il}\right)=\mu +{x}_{il}\beta +{g}_i+{u}_i+{e}_{il}, $$
where μ is the intercept; xil is the lth allele type for the ith individual for the SNP of interest (xil = 0 or 1, corresponding to the reference allele and alternative allele, respectively); and β is the corresponding allele/genotype effect size. In addition to these fixed effects, we model three random effects to account for different sources of over-dispersion. Specifically, gi represents the genetic background/polygenic effect on DNA methylation for the ith individual and can be used to account for kinship or other population structure in the sample. We assume \( \boldsymbol{g}={\left({g}_1,\cdots, {g}_n\right)}^T\sim MVN\left(0,{\sigma}_g^2K\right) \), where K is a known n by n genetic relatedness matrix that can be estimated either from genotype or pedigree data. ui represents individual-level environmental effects that we assume are independent across individuals but shared between the two alleles within the same individual. We assume \( {u}_i\sim N\left(0,{\sigma}_u^2\right) \). Finally, eil represents the residual error and is used to account for independent noise that varies across both individuals and alleles (e.g., stochastic events). We assume \( {e}_{il}\sim N\left(0,{\sigma}_e^2\right) \). We standardize the genetic relatedness matrix K to ensure that the mean of the diagonal elements of K equals 1, or \( \frac{tr(K)}{n}=1 \). When this is the case, \( {h}^2=\frac{\sigma_g^2}{\sigma_g^2+{\sigma}_u^2+\frac{1}{2}{\sigma}_e^2} \), and can be interpreted as the approximate background heritability of DNA methylation levels (details in Additional file 1: Supplementary Text). Here, the background heritability represents the proportion of variance in the latent parameter λ explained by the genetic effects from all SNPs other than the SNP of focus (i.e., x). Therefore, the background heritability is the usual heritability minus the genetic effect of x. Our primary goal is to test the null hypothesis that genotype is not associated with methylation levels, or equivalently, H0 : β = 0.
While the above model is fully specified for heterozygous individuals, it is not fully specified in homozygotes, where yil and ril are not observed. For homozygotes, only the sums of the reads across both alleles, yi = yi1 + yi2 and ri = ri1 + ri2, are observed. Therefore, for homozygotes, we derive a model for yi and ri based on Eq. (1) by summing over all possible values of yil and ril:
$$ P\left(y_i \mid r_i, \pi_{i1}, \pi_{i2}\right) = \sum_{r_{i1}=0}^{r_i} \; \sum_{y_{i1}=0}^{\min\left(r_{i1},\, y_i\right)} P\left(y_{i1} \mid r_{i1}, \pi_{i1}\right) P\left(y_i - y_{i1} \mid r_i - r_{i1}, \pi_{i2}\right) P\left(r_{i1} \mid r_i\right). $$
In Eq. (3), we assume that the model specified in Eq. (1) for the two alleles are independent of each other; thus, P(yi1, yi2| ri1, ri2, πi1, πi2) = P(yi1| ri1, πi1)P(yi − yi1| ri − ri1, πi2). We further assume that P(ri1| ri) follows a binomial distribution ri1~Bin(ri, 0.5), which reflects the assumption that both alleles are equally likely to be represented in the sequencing data. Even with these two assumptions, the probability P(yi| ri, πi1, πi2) in Eq. (3) does not have an analytic form and can only be evaluated numerically, which is highly computationally inefficient for parameter estimation and inference. To enable scalable computation, we therefore approximate the distribution in Eq. (3) using a binomial distribution (details in Additional file 1: Supplementary Text). Numerical simulations demonstrate the accuracy of this approximation across a range of settings (Additional file 2: Fig. S9).
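As an illustration of why a binomial approximation is reasonable here, the sketch below evaluates the exact mixture in Eq. (3) by brute force and compares it against a simple binomial whose success probability is the average of the two allelic methylation levels. This averaged binomial is only an illustrative stand-in; the exact approximating distribution used by IMAGE is derived in Additional file 1.

import numpy as np
from scipy.stats import binom

def exact_pmf(y, r, pi1, pi2):
    # Eq. (3): sum over reads assigned to allele 1 (r1) and methylated reads on allele 1 (y1)
    total = 0.0
    for r1 in range(r + 1):
        p_r1 = binom.pmf(r1, r, 0.5)  # both alleles equally likely to be sequenced
        for y1 in range(min(r1, y) + 1):
            total += p_r1 * binom.pmf(y1, r1, pi1) * binom.pmf(y - y1, r - r1, pi2)
    return total

r, pi1, pi2 = 20, 0.4, 0.6
exact = np.array([exact_pmf(y, r, pi1, pi2) for y in range(r + 1)])
approx = binom.pmf(np.arange(r + 1), r, (pi1 + pi2) / 2)
print(np.abs(exact - approx).max())  # maximum pointwise difference is small in this setting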
The model defined in Eqs. (1), (2) (for heterozygous individuals), and (3) (for homozygous individuals) allows us to perform ASM-assisted mQTL mapping to identify SNPs associated with DNA methylation levels. Due to the random effects terms in the model, the joint likelihood based on these equations involves a high-dimensional integral that cannot be solved analytically. We therefore rely on the penalized quasi-likelihood (PQL) algorithm commonly used for fitting generalized linear mixed models [64, 80, 81] to perform parameter estimation. Based on the parameter estimates, we further compute a Wald statistic to test the null hypothesis H0 : β = 0 and obtain a corresponding p value.
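For readers unfamiliar with the Wald test, the fragment below shows the final testing step in isolation, using made-up values for the effect estimate and its standard error; in practice these quantities come from the PQL fit.

```r
## Wald test for H0: beta = 0, with placeholder estimates.
beta_hat <- 0.42                             # hypothetical PQL estimate of beta
se_beta  <- 0.15                             # hypothetical standard error
wald     <- (beta_hat / se_beta)^2           # Wald statistic, ~ chi-square(1) under H0
p_value  <- pchisq(wald, df = 1, lower.tail = FALSE)
p_value
```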
We refer to the above model as IMAGE, which is implemented as a freely available R software package at www.xzlab.org/software.html.
We performed simulations to examine the effectiveness of our method and compare it with other approaches. To do so, we first randomly selected 150 individuals from the 1958 birth cohort study, which is part of the control samples used in the Wellcome Trust Case Control Consortium Study (WTCCC) [82]. We then obtained genotypes for 394,117 SNPs on chromosome 1 for these selected individuals. In the simulations, we examined the influence of sample size on power by choosing three different sample sizes: n = 50, 100, or 150. For n = 150, we used all 150 samples; for n < 150, we randomly selected the corresponding number of individuals from the 150 samples. For each simulation replicate, we computed the genetic relatedness matrix K from the SNP data using GEMMA [83,84,85]. We examined the influence of SNP minor allele frequency (MAF) on power by dividing the 394,117 SNPs into three MAF bins: an MAF bin centered on 0.1, which contains SNPs with an MAF between 0.05 and 0.15 (p = 100,631); an MAF bin centered on 0.3, which contains SNPs with an MAF between 0.25 and 0.35 (p = 51,800); and an MAF bin centered on 0.5, which contains SNPs with an MAF between 0.45 and 0.50 (p = 23,619). To simulate SNP-CpG site pairs, given a combination of sample size and MAF bin, we randomly selected one SNP from the appropriate MAF bin and simulated methylation counts and total read counts according to the following procedure.
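The MAF binning itself is straightforward; the sketch below shows one way to compute minor allele frequencies from a genotype matrix and recover the three bins used here. The genotype matrix is randomly generated and the object names are arbitrary.

```r
## Computing MAF from a 0/1/2 genotype matrix and assigning SNPs to MAF bins.
set.seed(2)
probs <- runif(1000, 0.05, 0.5)                                           # hypothetical allele frequencies
geno  <- matrix(rbinom(150 * 1000, 2, probs), nrow = 150, byrow = TRUE)   # individuals x SNPs
af    <- colMeans(geno) / 2                                               # alternative allele frequency
maf   <- pmin(af, 1 - af)                                                 # minor allele frequency

bin_0.1 <- which(maf >= 0.05 & maf <= 0.15)                               # bin centered on 0.1
bin_0.3 <- which(maf >= 0.25 & maf <= 0.35)                               # bin centered on 0.3
bin_0.5 <- which(maf >= 0.45 & maf <= 0.50)                               # bin centered on 0.5
sapply(list(bin_0.1, bin_0.3, bin_0.5), length)
```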
For the total read counts, we first used a negative binomial distribution NB(TR, ϕ) to simulate the total read count ri for each individual. Here, TR is the mean parameter and ϕ is the dispersion parameter. We set TR = 10, 20, or 30, close to the median estimate across all CpG sites from the baboon data (details of the data are described in the next section; median estimate in the real data = 23). We set ϕ = 3, which is close to the median estimate obtained from the baboon data (median estimate in the real data = 2.80). To obtain the total read count mapped to each of the two alleles, we further simulated a proportion parameter qi, which represents the proportion of reads mapped to one of the two alleles. Specifically, qi was simulated from a beta distribution Beta(a, b), where we set the shape parameters a and b both to 10, so that the simulated qi is symmetric around 0.5 and falls within the range (0.3, 0.7) in 93.6% of cases. With ri and qi, we simulated the total read count mapped to one of the two alleles from ri1~Bin(ri, qi) and set the total read count mapped to the other allele as ri2 = ri − ri1.
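The following sketch mirrors this read count simulation step, using R's size/mu parameterization of the negative binomial and the parameter values quoted above; it is illustrative only.

```r
## Simulating total read counts and their split between the two alleles.
set.seed(3)
n   <- 100
TR  <- 20; phi <- 3                          # mean and dispersion of the negative binomial
ri  <- rnbinom(n, size = phi, mu = TR)       # total read count per individual
qi  <- rbeta(n, 10, 10)                      # proportion of reads mapped to allele 1
ri1 <- rbinom(n, ri, qi)                     # reads on allele 1
ri2 <- ri - ri1                              # reads on allele 2
mean(qi > 0.3 & qi < 0.7)                    # roughly 0.94 in large samples (cf. 93.6% in the text)
```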
For the methylated read counts, we performed simulations using a combination of five parameters: the intercept μ, which characterizes the baseline methylation level (interpretable as the mean methylation level within a given population); h2, which represents the background heritability; σ2, which is the over-dispersion variance; ρ, which characterizes the proportion of environmental variance that is shared between the two alleles within each individual (i.e., the common environmental variance) relative to the total environmental variance (common plus allele-specific); and PVE, which represents the genotype effect size in terms of the proportion of phenotypic variance explained (PVE) by genotype. With these parameters, we first simulated the genetic random effects g = (g1, ⋯, gn)T (an n-vector) across all individuals from a multivariate normal distribution with covariance \( \frac{\left(1+\rho \right){h}^2}{2+\left(\rho -1\right){h}^2}{\sigma}^2\boldsymbol{K} \) to guarantee that the background heritability for our population of simulated individuals is h2 (details in Additional file 1: Supplementary Text). For each individual in turn, we then simulated the environmental random effects (ei1, ei2) and ui together as a bivariate vector (ui + ei1, ui + ei2)T from a bivariate normal distribution with covariance \( \Sigma =\left[\begin{array}{cc}\left(1- pve-{h}^2\right){\sigma}^2 & \rho \left(1- pve-{h}^2\right){\sigma}^2\\ \rho \left(1- pve-{h}^2\right){\sigma}^2 & \left(1- pve-{h}^2\right){\sigma}^2\end{array}\right] \).
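A compact sketch of this random effect simulation is given below, using an identity matrix as a stand-in for K and the baseline parameter values used in the scenarios described further down; the scaling of the genetic variance follows the formula above.

```r
## Simulating genetic and environmental random effects for the methylation model.
set.seed(4)
n <- 100; K <- diag(n)                        # placeholder kinship matrix
h2 <- 0.3; rho <- 0; sigma2 <- 0.7; pve <- 0.1

var_g <- (1 + rho) * h2 / (2 + (rho - 1) * h2) * sigma2
g <- as.vector(MASS::mvrnorm(1, rep(0, n), var_g * K))        # genetic background effects

v   <- (1 - pve - h2) * sigma2                                # per-allele environmental variance
Sig <- matrix(c(v, rho * v, rho * v, v), 2, 2)                # cov of (u_i + e_i1, u_i + e_i2)
env <- MASS::mvrnorm(n, c(0, 0), Sig)                         # one bivariate draw per individual
```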
For sites where the methylation level was not associated with genotype, the SNP effect β was set to zero, and the background genetic effects, environmental effects, and an intercept (μ) were summed together to yield the latent variable πil through logit(πil) = logit(π0) + gi + ui + eil for the lth allele in the ith individual. For sites with a true mQTL, we used logit(πil) = logit(π0) + xilβ + gi + ui + eil to yield the latent variable πil, where xil is the allele genotype for the lth allele in the ith individual. We randomly drew \( \beta \sim N\left(0,{\sigma}_b^2\right) \) for each CpG site in turn, where \( {\sigma}_b^2 \) was set to ensure that genetic effects explain a fixed PVE in logit(πil), on average. We set PVE to 5%, 10%, or 15% to represent different mean mQTL effect sizes, and we derived \( {\sigma}_b^2=\frac{PVE\ {\sigma}^2}{\left(1- PVE\right)V\left(\boldsymbol{x}\right)} \), where the function V(•) denotes the sample variance computed across individuals, with x being the genotype vector of size n. Finally, we simulated the methylated read counts for each allele from a binomial distribution with a rate parameter determined by the allele-specific total read count and the methylation proportion πil; that is, yil~Bin(ril, πil) for the lth allele in the ith individual. For heterozygotes, we retained the allele-level data (yi1, yi2) and (ri1, ri2). For homozygotes, we collapsed the allele-level data into individual-level data, yi = yi1 + yi2 and ri = ri1 + ri2.
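Putting the pieces together, the self-contained sketch below draws β to hit a target PVE on the logit scale and then generates allele-level methylated counts. It simplifies the random effects (identity kinship, ρ = 0) and uses hypothetical genotypes, so it should be read as an illustration of the recipe rather than the exact simulation code.

```r
## End-to-end toy version of the methylated read count simulation.
set.seed(5)
n <- 100; pve <- 0.1; sigma2 <- 0.7; h2 <- 0.3; rho <- 0; pi0 <- 0.5

g   <- rnorm(n, 0, sqrt((1 + rho) * h2 / (2 + (rho - 1) * h2) * sigma2))  # genetic effects (K = I)
v   <- (1 - pve - h2) * sigma2
env <- matrix(rnorm(2 * n, 0, sqrt(v)), n, 2)        # allele-level environmental effects (rho = 0)
ri  <- rnbinom(n, size = 3, mu = 20)                 # total reads
ri1 <- rbinom(n, ri, 0.5); ri2 <- ri - ri1           # split between the two alleles

x_geno  <- rbinom(n, 2, 0.3)                         # hypothetical genotypes (0/1/2), MAF 0.3
sigma_b <- sqrt(pve * sigma2 / ((1 - pve) * var(x_geno)))
beta    <- rnorm(1, 0, sigma_b)                      # effect size drawn to hit the target PVE on average

x_allele <- cbind(pmin(x_geno, 1), as.numeric(x_geno == 2))   # per-allele genotype coding
lin <- qlogis(pi0) + x_allele * beta + g + env                # logit(pi_il), an n x 2 matrix
pil <- plogis(lin)
y1  <- rbinom(n, ri1, pil[, 1])                      # methylated reads on allele 1
y2  <- rbinom(n, ri2, pil[, 2])                      # methylated reads on allele 2
```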
Using the procedure described above, we first simulated data under a baseline simulation scenario of n = 100, h2 = 0.3, π0 = 0.5, MAF = 0.3, ρ = 0, TR = 20, σ2 = 0.7, and PVE = 0.1 for mQTL sites. We then varied one parameter at a time to generate different simulation scenarios and examine the influence of each parameter, following [40]. Here, we varied the baseline methylation level π0 to be 0.1, 0.5, or 0.9 to represent low, moderate, or high levels of DNA methylation. We varied h2 = 0.0, 0.3, or 0.6 to represent no, medium, or high background heritability. We varied σ2 = 0.3, 0.5, or 0.7 to represent different levels of over-dispersion. We varied ρ = 0, 0.3, or 0.9 to represent different levels of common environment influence. For each simulated combination of parameters, we performed 10 simulation replicates consisting of 10,000 CpG sites each. Among these sites, DNA methylation levels at 1000 were associated with the SNP genotype (β ≠ 0) while DNA methylation levels at the remaining 9000 were not (β = 0).
Baboon RRBS data
We applied our method to a bisulfite sequencing data set from 69 wild baboons from the Amboseli ecosystem in Kenya [40, 45]. These data were generated using RRBS on the Illumina HiSeq 2000 platform, with 100-bp single-end sequencing reads. We obtained the raw fastq files from NCBI (accession number PRJNA283632), removed adaptor contamination and low-quality bases using the program Trim Galore (version 0.4.3) [106], and then mapped reads to the baboon reference genome (Panu2.0) using BSseeker2 [107] (Additional file 2: Figure S3; more details in Additional file 1: Supplementary Text). After removing two samples that had extremely low sequencing read depths (57,734 and 58,070 reads, respectively), sequencing read depth ranged from 5.00 to 79.78 million reads (median = 24.48 million reads; sd = 13.69 million).
We performed SNP calling in the bisulfite sequencing data using CGmaptools, a SNP calling program specifically designed for bisulfite sequencing data. CGmaptools examines one individual at a time using the BayesWC SNP calling strategy [78]. Following the authors' recommendations, we used a conservative error rate of 0.01 and a dynamic p value to account for differences in read depth across sites. Further, we modified the source code so that CGmaptools also outputs homozygous reference genotypes. After SNP calling, we indexed and merged variant call files (VCFs) using VCFtools [108]. We then obtained a common set of SNPs whose positions were called in at least 50% of individuals (including homozygous reference calls). For each individual, we filtered out SNPs that were called using fewer than three reads. For each SNP, we filtered out variants with an estimated MAF < 0.05. Finally, we filtered out 989 multiallelic SNPs to obtain a final call set of 289,103 analysis-ready SNPs (mean = 203,864 SNPs typed per sample; median = 204,554; sd = 34,768). We computed the genetic relatedness matrix K in GEMMA using this SNP data set.
To validate the SNP genotype data, we compared the variants identified from the bisulfite sequencing data to a set of previously identified SNP variants in baboons [109]. These previously identified SNPs were obtained from 44 different wild baboons from East Africa, including members of the baboon population from which the RRBS data were generated as well as members of baboon populations outside Amboseli, via low-coverage DNA sequencing (range 0.6× to 4.35×; median = 1.91×; sd = 0.77×). This data set contained a total of 24,770,393 SNPs, with an average of 17,725,780 SNPs genotyped per individual (median = 18,139,340; sd = 4,315,590). Because of the low sequencing depth in the DNA sequencing data set, we expected that variants called from the bisulfite sequencing data would not completely overlap with variants identified from the DNA sequencing data. Indeed, we found that 50.9% of our called variants are located at a known variant from the DNA sequencing study, with the remaining SNPs being novel. Importantly, among overlapping variants, 99.5% have the same alternate allele, supporting the accuracy of SNP calling from bisulfite sequencing data. Additionally, we observe more overlap in called variants with higher alternate allele frequency, reaching 72.5% for variants with an alternate allele frequency > 0.5 in the RRBS data (Additional file 2: Figure S10A). The allele frequency estimates from the two data sets for overlapping variants are reasonably well correlated (Spearman correlation r = 0.551; p value < 2.2 × 10−16; Additional file 2: Figure S10B).
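The overlap and concordance summaries reported here amount to simple set and correlation operations; the sketch below shows the idea on two tiny, made-up variant tables (the column names chr, pos, alt, and af are assumptions, not the actual pipeline output).

```r
## Toy comparison of bisulfite-derived variants against a reference call set.
rrbs_snps <- data.frame(chr = "chr1", pos = c(100, 200, 300, 400),
                        alt = c("A", "G", "T", "C"), af = c(0.10, 0.40, 0.55, 0.25))
dna_snps  <- data.frame(chr = "chr1", pos = c(100, 200, 400, 500),
                        alt = c("A", "G", "C", "T"), af = c(0.12, 0.35, 0.30, 0.08))

key <- function(d) paste(d$chr, d$pos, sep = ":")
prop_overlap <- length(intersect(key(rrbs_snps), key(dna_snps))) / nrow(rrbs_snps)

m <- merge(rrbs_snps, dna_snps, by = c("chr", "pos"), suffixes = c(".rrbs", ".dna"))
same_alt <- mean(m$alt.rrbs == m$alt.dna)             # concordance of alternate alleles
rho_af   <- cor(m$af.rrbs, m$af.dna, method = "spearman")
c(prop_overlap, same_alt, rho_af)
```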
In addition to genotyping, we used CGmaptools to obtain CpG-SNP pairs where the SNP and CpG site were profiled on the same sequencing read. The distance between the SNP and CpG site in these pairs ranges from 1 to 104 bp, with a median distance of 37 bp (mean = 39.75 bp; sd = 26.15 bp; Additional file 2: Figure S10C). We extracted the methylation-level estimates for each CpG site in the form of the number of methylated read counts and the number of total read counts, at the individual level for homozygotes and for each allele separately for heterozygotes. We obtained a total of 522,965 SNP-CpG pairs, with 82,217 unique SNPs and 391,137 unique CpG sites. Following [49], we excluded CpG sites (i) that were measured in fewer than 20 individuals, (ii) where methylation levels fell below 10% or above 90% in at least 90% of measured individuals, (iii) that had a mean read depth less than 5, or (iv) that were paired with a SNP with MAF < 0.05 across individuals for whom DNA methylation estimates were available. To avoid potential mapping bias, we also excluded CpG sites with apparent differences in methylation levels between reference and alternate alleles that were larger than 0.6. Note that excluding these sites is a conservative strategy and may remove truly associated SNP-CpG pairs where mQTL have unusually large effect sizes. After filtering, our final data consisted of 49,196 SNP-CpG pairs, with 13,753 unique SNPs and 45,210 unique CpG sites, and an average of 33,539 SNP-CpG pairs measured per individual.
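The site-level filters translate directly into a simple predicate; the sketch below applies them to one SNP-CpG pair, with `meth` and `tot` as per-individual methylated and total read count vectors (NA where unmeasured) and `maf` as the SNP minor allele frequency. The toy data and thresholds follow the text; the mapping-bias filter on reference-versus-alternate methylation differences is omitted for brevity.

```r
## Per-site quality control filters as a predicate over one SNP-CpG pair.
keep_site <- function(meth, tot, maf) {
  idx   <- which(!is.na(tot) & tot > 0)           # individuals measured at this site
  level <- meth[idx] / tot[idx]                   # per-individual methylation levels
  length(idx) >= 20 &&                            # measured in at least 20 individuals
    mean(level < 0.1 | level > 0.9) < 0.9 &&      # not hypo-/hypermethylated in >= 90% of them
    mean(tot[idx]) >= 5 &&                        # mean read depth of at least 5
    maf >= 0.05                                   # SNP minor allele frequency filter
}

set.seed(6)
tot  <- rpois(40, 15)                             # toy read depths for 40 individuals
meth <- rbinom(40, tot, 0.4)                      # toy methylated counts
keep_site(meth, tot, maf = 0.2)
```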
For these SNP-CpG pairs, the median number of reads per SNP across all individuals was 23 (mean = 31.21; sd = 30.08), and the median number of reads per allele was 13 in heterozygous individuals (mean = 18.75; sd = 19.75). To check the quality of DNA methylation estimates for these CpG sites, we examined their distribution across individuals. Similar to other RRBS data sets [110], we observed a bimodal distribution pattern of methylation levels, including a large number of hypomethylated and hypermethylated CpG sites (Additional file 2: Figure S10D). Next, we examined the accuracy of methylation measurements obtained from our pipeline by comparing the mean methylation at each CpG site obtained here to those estimated in a previous study that focused on a subset of 61 individuals but used a different mapping and DNA methylation estimation pipeline [111]. As expected, the overall distribution of DNA methylation levels is almost identical between our pipeline and the previous study for the 15,605 overlapping sites (Additional file 2: Figure S10E). In addition, site-specific DNA methylation-level estimates are highly correlated (Spearman correlation r = 0.855, p value <2.2 × 10−16; Additional file 2: Figure S10E). Finally, we checked whether our data suggest mapping bias in favor of the reference allele. Among the CpG sites we analyzed, we observed no bias in methylation-level estimates between the reference and the alternate alleles (Additional file 2: Figure S10F).
We applied five different approaches (details in the "Results" section), together with our primary IMAGE method, to analyze the baboon DNA methylation data. Most of these methods are count based, and algorithms for count-based models can be computationally unstable in the presence of covariates. To control for confounding effects from covariates, for each SNP in turn, we removed the effects of age, sex, and the top two methylation principal components (computed on M values [112]) from the genotype and used the genotype residuals for analysis. One method, IMAGE-A, requires a relatively large number of heterozygous individuals and was thus only applied to sites for which we identified at least 5 heterozygotes (38,250 SNP-CpG pairs). All other methods were applied to all 49,196 SNP-CpG pairs. Because different methods differ in type I error control and one method (IMAGE-A) analyzes a different number of SNP-CpG pairs, we performed permutations to construct empirical null distributions to ensure a fair comparison. Specifically, we combined the count data from the heterozygotes (yi1, yi2), (ri1, ri2) with the count data from the homozygotes (yi, ri), treated the two alleles of each heterozygote as two samples and each homozygote as one sample, permuted the sample labels 10 times to create null permutations, and applied each method to the permuted data. We note that an alternative permutation strategy would be to permute (yi, ri) along with covariates across individuals. In this strategy, the number of methylated reads for each allele (out of the total reads for each allele) in heterozygotes could then be sampled from a binomial distribution with probability 0.5, conditional on yi and ri − yi, respectively. This alternative strategy is not ideal for small sample sizes but is likely to work well for large samples (approximately n > 150). Therefore, we have also implemented this alternative permutation strategy in the software and recommend that users explore both strategies and select the one that performs best for their data. Regardless of which permutation strategy is used, the statistics from the permuted data allow us to construct an empirical null distribution, from which we estimated the empirical false discovery rate (FDR) for different methods at different p value thresholds. We then compared the number of associations detected by different methods at a fixed FDR cutoff.
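The empirical FDR calculation at the end of this procedure can be written in a few lines; the sketch below takes a vector of observed p values and a vector of p values pooled across the permutations and returns the estimated FDR at a chosen threshold. The inputs are simulated placeholders.

```r
## Empirical FDR from permutation-based null p values.
empirical_fdr <- function(p_obs, p_perm, threshold, n_perm = 10) {
  n_sig_obs  <- sum(p_obs <= threshold)              # discoveries in the observed data
  n_sig_null <- sum(p_perm <= threshold) / n_perm    # expected false discoveries per data set
  min(1, n_sig_null / max(1, n_sig_obs))
}

set.seed(7)
p_obs  <- c(runif(900), rbeta(100, 1, 50))           # toy mixture of null and associated sites
p_perm <- runif(10 * 1000)                           # p values pooled over 10 permutations
empirical_fdr(p_obs, p_perm, threshold = 0.01)
```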
Finally, following [86], we annotated CpG sites into four categories based on genomic locations obtained from the UCSC Genome Browser: island, shore, shelf, and open sea. CpG islands are defined as short (approximately 1 kb) regions of high CpG density in an otherwise CpG-sparse genome [113]. A large proportion of CpG islands have been shown to be associated with gene promoters [114, 115], and the methylation level at CpG islands is often associated with transcriptional repression [116, 117]. CpG shores are defined as the 2 kb of sequence flanking a CpG island, and CpG shelves are defined as the 2 kb of sequence further flanking the CpG shores. Both CpG shores and shelves have been reported to be more dynamic than the CpG island itself [90, 118, 119], and methylation variation at shores and shelves has been associated with various diseases. Finally, the remaining regions outside of CpG islands/shores/shelves are denoted as open sea [120]. We downloaded the CpG island annotations for Panu2.0 directly from the UCSC Genome Browser, annotated the 2-kb regions upstream and downstream of the CpG island boundaries as shores, annotated the 2-kb regions upstream and downstream of the CpG shores as shelves, and annotated the remaining regions as open sea (Fig. 3e).
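With island coordinates in hand, the shore/shelf/open-sea assignment is plain interval arithmetic; the sketch below annotates a few CpG positions against a toy island table. A real pipeline would more likely use interval-overlap machinery (e.g., GenomicRanges), and the coordinates here are invented.

```r
## Annotating CpG positions as island / shore / shelf / open sea.
islands <- data.frame(start = c(10000, 50000), end = c(11000, 52000))  # toy CpG islands
cpg_pos <- c(10500, 11800, 14500, 20000)                               # toy CpG positions

annotate_cpg <- function(pos, islands) {
  inside <- any(pos >= islands$start & pos <= islands$end)
  d <- min(pmin(abs(pos - islands$start), abs(pos - islands$end)))     # distance to nearest island boundary
  if (inside)         "island"
  else if (d <= 2000) "shore"      # within 2 kb of an island
  else if (d <= 4000) "shelf"      # 2-4 kb from an island
  else                "open sea"
}
sapply(cpg_pos, annotate_cpg, islands = islands)
```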
Wolf RRBS data
We also applied our method to analyze a second bisulfite sequencing data set, from 63 gray wolves from Yellowstone National Park in the USA [46, 94]. The wolf data are RRBS data collected on the Illumina HiSeq 2500 platform using 100-bp single-end sequencing reads. We obtained bam files for 35 individuals from NCBI (accession number PRJNA299792) [46] and the fastq files for the remaining individuals from accession number PRJNA488382 [94]. We processed all files using the same procedure described in the previous section, using Trim Galore and BSseeker2, with the dog genome canFam 3.1 [121] as the reference genome. Per-individual sequencing read depth ranged from 9.53 to 75.18 million reads (median = 31.36 million reads; sd = 12.91 million). We used the same SNP calling procedure described for baboons and applied the same filtering criteria to obtain a final call set of 518,774 SNPs, with an average of 360,063 SNPs genotyped per individual (median = 440,898; sd = 103,522). We also computed the genetic relatedness matrix K with these SNPs using GEMMA.
To validate variants identified in the wolf data set, we compared the called variants from the bisulfite sequencing data to an existing SNV database from the current Ensembl release for the dog genome canFam 3.1. We found that 17.9% of variants overlapped with known variants from Ensembl. Importantly, among overlapping variants, 99.1% have the same alternative allele as reported in Ensembl. In addition, the proportion of overlapping variants increases with increasing alternate allele frequency and reaches 41.3% when we focus on variants that have an alternate allele frequency > 0.5 in the RRBS data (Additional file 2: Figure S11A).
We followed the same procedure described for baboons to extract methylation measurements on SNP-CpG pairs. In the wolves, the distance between the SNP and CpG site in each pair ranges from 1 to 103 bp, with a median of 35 bp (mean = 38.41 bp; sd = 25.63 bp; Additional file 2: Figure S11B). We obtained a total of 861,474 SNP-CpG pairs, representing 144,670 unique SNPs and 684,681 unique CpG sites. Following quality control filtering, we obtained a final set of 279,223 SNP-CpG pairs, representing 77,039 unique SNPs and 242,784 unique CpG sites, with an average of 179,412 SNP-CpG pairs measured per individual. In this set, the median number of reads per SNP across all individuals is 25 (mean = 31.16; sd = 29.33) and the median number of reads per allele is 14 in heterozygotes (mean = 17.45; sd = 18.90). Methylation levels across sites display the expected bimodal distribution pattern (Additional file 2: Figure S11C), and we observed no bias in methylation-level estimates between the reference and the alternate alleles (Additional file 2: Figure S11D).
We applied the same analysis procedure to the wolf data as we did for the baboon data set. IMAGE-A was used to analyze the 236,092 SNP-CpG pairs for which the data included at least 5 heterozygotes, while the other methods were applied to all 279,223 SNP-CpG pairs. We used permutation to construct empirical null distributions for FDR control and controlled for the effects of sex and the top two methylation principal components following the same procedure described for the baboon data. Finally, we annotated CpG sites into island, shore, shelf, and open sea categories as described above, based on the canFam3.1 genome.
Baboon RRBS fastq files are available in the Sequence Read Archive (SRA) of NCBI under accession PRJNA283632 [40, 45]. Wolf RRBS bam files for 35 wolves are available under accession PRJNA299792 [46], and the fastq files for the other 27 wolves are available under accession PRJNA488382 [94]. The Trim Galore! software is available from https://www.bioinformatics.babraham.ac.uk/projects/trim_galore/ [106]. The BS Seeker 2 software is available from http://pellegrini-legacy.mcdb.ucla.edu/bs_seeker2/ [107]. The VCFtools software is available from http://vcftools.sourceforge.net/ [108]. The CGmaptools software is available from https://cgmaptools.github.io/ [78]. The GEMMA [83,84,85], MACAU [40, 57], BB [40], and PQLseq [64] software packages are available from http://www.xzlab.org/software.html.
IMAGE is an open-source R package that is freely available from GitHub [122] https://github.com/fanyue322/IMAGE, CRAN (https://cran.r-project.org/web/packages/IMAGE/index.html), and http://www.xzlab.org/software.html. Source code for the software release used in the paper has been placed into a DOI-assigning repository [123] (https://doi.org/10.5281/zenodo.3334384). The code to reproduce all the analyses presented in the paper is available on GitHub [124] (https://github.com/fanyue322/IMAGEreproduce) and deposited on Zenodo [125] (https://doi.org/10.5281/zenodo.3334388).
Murrell A, et al. An association between variants in the IGF2 gene and Beckwith-Wiedemann syndrome: interaction between genotype and epigenotype. Hum Mol Genet. 2004;13(2):247–55.
Jones PA. Functions of DNA methylation: islands, start sites, gene bodies and beyond. Nat Rev Genet. 2012;13(7):484–92.
Dayeh T, et al. Genome-wide DNA methylation analysis of human pancreatic islets from type 2 diabetic and non-diabetic donors identifies candidate genes that influence insulin secretion. PLoS Genet. 2014;10(3):e1004160.
Davegardh C, et al. DNA methylation in the pathogenesis of type 2 diabetes in humans. Molecular Metabolism. 2018;14:12–25.
Bellamy N, et al. Rheumatoid-arthritis in twins - a study of etiopathogenesis based on the Australian Twin Registry. Ann Rheum Dis. 1992;51(5):588–93.
Deapen D, et al. A revised estimate of twin concordance in systemic lupus-erythematosus. Arthritis Rheum. 1992;35(3):311–8.
Jarvinen P, Aho K. Twin studies in rheumatic diseases. Semin Arthritis Rheum. 1994;24(1):19–28.
Soriano-Tarraga C, et al. Epigenome-wide association study identifies TXNIP gene associated with type 2 diabetes mellitus and sustained hyperglycemia. Hum Mol Genet. 2016;25(3):609–19.
Dick KJ, et al. DNA methylation and body-mass index: a genome-wide analysis. Lancet. 2014;383(9933):1990–8.
Guay SP, et al. Epigenome-wide analysis in familial hyper-cholesterolemia identified new loci associated with high-density lipoprotein cholesterol concentration. Epigenomics. 2012;4(6):623–39.
Iossifov I, et al. The contribution of de novo coding mutations to autism spectrum disorder. Nature. 2014;515(7526):216–21.
Amir RE, et al. Rett syndrome is caused by mutations in X-linked MECP2, encoding methyl-CpG-binding protein 2. Nat Genet. 1999;23(2):185–8.
Cui HM, et al. Loss of IGF2 imprinting: a potential marker of colorectal cancer risk. Science. 2003;299(5613):1753–5.
Vu TH, Nguyen AH, Hoffman AR. Loss of IGF2 imprinting is associated with abrogation of long-range intrachromosomal interactions in human cancer cells. Hum Mol Genet. 2010;19(5):901–19.
Byun HM, et al. Examination of IGF2 and H19 loss of imprinting in bladder cancer. Cancer Res. 2007;67(22):10753–8.
Feinberg AP, Koldobskiy MA, Gondor A. Epigenetic modulators, modifiers and mediators in cancer aetiology and progression. Nat Rev Genet. 2016;17(5):284–99.
Timp W, Feinberg AP. Cancer as a dysregulated epigenome allowing cellular growth advantage at the expense of the host. Nat Rev Cancer. 2013;13(7):497–510.
Chong SY, Whitelaw E. Epigenetic germline inheritance. Curr Opin Genet Dev. 2004;14(6):692–6.
Anway MD. Epigenetic transgenerational actions of endocrine disruptors and male fertility. Science. 2010;328(5979):690.
Kaminsky ZA, et al. DNA methylation profiles in monozygotic and dizygotic twins. Nat Genet. 2009;41(2):240–5.
Heijmans BT, et al. Heritable rather than age-related environmental and stochastic factors dominate variation in DNA methylation of the human IGF2/H19 locus. Hum Mol Genet. 2007;16(5):547–54.
Ollikainen M, et al. DNA methylation analysis of multiple tissues from newborn twins reveals both genetic and intrauterine components to variation in the human neonatal epigenome. Hum Mol Genet. 2010;19(21):4176–88.
Bell JT, et al. DNA methylation patterns associate with genetic and gene expression variation in HapMap cell lines. Genome Biol. 2011;12(1):R10.
Bell JT, et al. Epigenome-wide scans identify differentially methylated regions for age and age-related phenotypes in a healthy ageing population. PLoS Genet. 2012;8(4):189–200.
Lam LL, et al. Factors underlying variable DNA methylation in a human community cohort. Proc Natl Acad Sci U S A. 2012;109:17253–60.
Grundberg E, et al. Global analysis of DNA methylation variation in adipose tissue from twins reveals links to disease-associated variants in distal regulatory elements. Am J Hum Genet. 2013;93(6):1158.
Gutierrez-Arcelus M, et al. Passive and active DNA methylation and the interplay with genetic variation in gene regulation. Elife. 2013;2:e00523.
Polderman TJC, et al. Meta-analysis of the heritability of human traits based on fifty years of twin studies. Nat Genet. 2015;47(7):702–9.
Hannon E, et al. Characterizing genetic and environmental influences on variable DNA methylation using monozygotic and dizygotic twins. PLoS Genet. 2018;14(8):e1007544.
McRae AF, et al. Contribution of genetic variation to transgenerational inheritance of DNA methylation. Genome Biol. 2014;15(5):R73.
Wagner JR, et al. The relationship between DNA methylation, genetic and expression inter-individual variation in untransformed human fibroblasts. Genome Biol. 2014;15(2):R37.
Shi JX, et al. Characterizing the genetic basis of methylome diversity in histologically normal human lung tissue. Nat Commun. 2014;5:3365.
Luijk R, et al. An alternative approach to multiple testing for methylation QTL mapping reduces the proportion of falsely identified CpGs. Bioinformatics. 2015;31(3):340–5.
Pai AA, et al. A genome-wide study of DNA methylation patterns and gene expression levels in multiple human and chimpanzee tissues. PLoS Genet. 2011;7(2):e1001316.
Banovich NE, et al. Methylation QTLs are associated with coordinated changes in transcription factor binding, histone modifications, and gene expression levels. PLoS Genet. 2014;10(9):e1004663.
Gibbs JR, et al. Abundant quantitative trait loci exist for DNA methylation and gene expression in human brain. PLoS Genet. 2010;6(5):e1000952.
Hannon E, et al. Methylation QTLs in the developing brain and their enrichment in schizophrenia risk loci. Nat Neurosci. 2016;19(1):48–54.
Day K, et al. Heritable DNA methylation in CD4(+) cells among complex families displays genetic and non-genetic effects. PLoS One. 2016;11(10):e0165488.
Cokus SJ, et al. Shotgun bisulphite sequencing of the Arabidopsis genome reveals DNA methylation patterning. Nature. 2008;452(7184):215–9.
Lea AJ, Tung J, Zhou X. A flexible, efficient binomial mixed model for identifying differential DNA methylation in bisulfite sequencing data. PLoS Genet. 2015;11(11):e1005650.
Boyle P, et al. Gel-free multiplexed reduced representation bisulfite sequencing for large-scale DNA methylation profiling. Genome Biol. 2012;13(10):R92.
Gu HC, et al. Preparation of reduced representation bisulfite sequencing libraries for genome-scale DNA methylation profiling. Nat Protoc. 2011;6(4):468–81.
Liu Y, et al. Bisulfite-free direct detection of 5-methylcytosine and 5-hydroxymethylcytosine at base resolution. Nat Biotechnol. 2019;37(4):424–9.
Yu M, et al. Base-resolution analysis of 5-hydroxymethylcytosine in the mammalian genome. Cell. 2012;149(6):1368–80.
Tung J, et al. The genetic architecture of gene expression levels in wild baboons. Elife. 2015;4:e04729.
Koch IJ, et al. The concerted impact of domestication and transposon insertions on methylation patterns between dogs and grey wolves. Mol Ecol. 2016;25(8):1838–55.
Chatterjee A, et al. Mapping the zebrafish brain methylome using reduced representation bisulfite sequencing. Epigenetics. 2013;8(9):979–89.
Stubbs TM, et al. Multi-tissue DNA methylation age predictor in mouse. Genome Biol. 2017;18(1):68.
Schmitz RJ, et al. Patterns of population epigenomic diversity. Nature. 2013;495(7440):193–8.
Alonso-Blanco C, et al. 1,135 genomes reveal the global pattern of polymorphism in Arabidopsis thaliana. Cell. 2016;166(2):481–91.
Hu YJ, et al. Proper use of allele-specific expression improves statistical power for cis-eQTL mapping with RNA-Seq data. J Am Stat Assoc. 2015;110(511):962–74.
van de Geijn B, et al. WASP: allele-specific software for robust molecular quantitative trait locus discovery. Nat Methods. 2015;12(11):1061–3.
Cheung WA, et al. Functional variation in allelic methylomes underscores a strong genetic contribution and reveals novel epigenetic alterations in the human epigenome. Genome Biol. 2017;18(1):50.
Soneson C, Delorenzi M. A comparison of methods for differential expression analysis of RNA-seq data. Bmc Bioinformatics. 2013;14(1):91.
Kvam VM, Lu P, Si YQ. A comparison of statistical methods for detecting differentially expressed genes from RNA-seq data. Am J Bot. 2012;99(2):248–56.
Zhang ZH, et al. A comparative study of techniques for differential expression analysis on RNA-Seq data. PLoS One. 2014;9(8):e103207.
Sun SQ, et al. Differential expression analysis for RNAseq using Poisson mixed models. Nucleic Acids Res. 2017;45(11):e106.
Feng H, Conneely KN, Wu H. A Bayesian hierarchical model to detect differentially methylated loci from single nucleotide resolution sequencing data. Nucleic Acids Res. 2014;42(8):e69.
Sun DQ, et al. MOABS: model based analysis of bisulfite sequencing data. Genome Biol. 2014;15(2):R38.
Dolzhenko E, Smith AD. Using beta-binomial regression for high-precision differential methylation analysis in multifactor whole-genome bisulfite sequencing experiments. BMC Bioinformatics. 2014;15(1):215.
Park Y, Wu H. Differential methylation analysis for BS-seq data under general experimental design. Bioinformatics. 2016;32(10):1446–53.
Wu H, et al. Detection of differentially methylated regions from whole-genome bisulfite sequencing data without replicates. Nucleic Acids Res. 2015;43(21):e141.
Weissbrod O, et al. Association testing of bisulfite-sequencing methylation data via a Laplace approximation. Bioinformatics. 2017;33(14):I325–32.
Sun S, et al. Heritability estimation and differential analysis of count data with generalized linear mixed models in genomic sequencing studies. Bioinformatics. 2019;35(3):487–96.
Dubin MJ, et al. DNA methylation in Arabidopsis has a genetic basis and shows evidence of local adaptation. Elife. 2015;4:e05255.
Orozco LD, et al. Epigenome-wide association of liver methylation patterns and complex metabolic traits in mice. Cell Metab. 2015;21(6):905–17.
Li YR, et al. The DNA methylome of human peripheral blood mononuclear cells. PLoS Biol. 2010;8(11):e1000533.
Peng Q, Ecker JR. Detection of allele-specific methylation through a generalized heterogeneous epigenome model. Bioinformatics. 2012;28(12):I163–71.
Fang F, et al. Genomic landscape of human allele-specific DNA methylation. Proc Natl Acad Sci U S A. 2012;109(19):7332–7.
Kerkel K, et al. Genomic surveys by methylation-sensitive SNP analysis identify sequence-dependent allele-specific DNA methylation. Nat Genet. 2008;40(7):904–8.
Schalkwyk LC, et al. Allelic skewing of DNA methylation is widespread across the genome. Am J Hum Genet. 2010;86(2):196–212.
Xie W, et al. Base-resolution analyses of sequence and parent-of-origin dependent DNA methylation in the mouse genome. Cell. 2012;148(4):816–31.
Shoemaker R, et al. Allele-specific methylation is prevalent and is contributed by CpG-SNPs in the human genome. Genome Res. 2010;20(7):883–9.
Gertz J, et al. Analysis of DNA methylation in a three-generation family reveals widespread genetic influence on epigenetic regulation. PLoS Genet. 2011;7(8):e1002228.
Kaplow IM, et al. A pooling-based approach to mapping genetic variants associated with DNA methylation. Genome Res. 2015;25(6):907–17.
Sun W. A statistical framework for eQTL mapping using RNA-seq data. Biometrics. 2012;68(1):1–11.
Wilson D, Ibrahim J, Sun W. Mapping tumor-specific expression QTLs in impure tumor samples. J Am Stat Assoc. 2019:1-8.
Guo WL, et al. CGmapTools improves the precision of heterozygous SNV calls and supports allele-specific methylation detection and visualization in bisulfite-sequencing data. Bioinformatics. 2018;34(3):381–7.
Kumasaka N, Knights AJ, Gaffney DJ. Fine-mapping cellular QTLs with RASQUAL and ATAC-seq. Nat Genet. 2016;48(4):473.
Breslow NE, Clayton DG. Approximate inference in generalized linear mixed models. J Am Stat Assoc. 1993;88(421):9–25.
Chen H, et al. Control for population structure and relatedness for binary traits in genetic association studies via logistic mixed models. Am J Hum Genet. 2016;98(4):653–66.
Power C, Elliott J. Cohort profile: 1958 British Birth Cohort (National Child Development Study). Int J Epidemiol. 2006;35(1):34–41.
Zhou X, Stephens M. Genome-wide efficient mixed-model analysis for association studies. Nat Genet. 2012;44(7):821–4.
Zhou X, Carbonetto P, Stephens M. Polygenic modeling with Bayesian sparse linear mixed models. PLoS Genet. 2013;9(2):e1003264.
Zhou X, Stephens M. Efficient multivariate linear mixed model algorithms for genome-wide association studies. Nat Methods. 2014;11(4):407–9.
Olsson AH, et al. Genome-wide associations between genetic and epigenetic variation influence mRNA expression and insulin secretion in human pancreatic islets. PLoS Genet. 2014;10(12):e1004735.
Ronn T, et al. A six months exercise intervention influences the genome-wide DNA methylation pattern in human adipose tissue. PLoS Genet. 2013;9(6):e1003572.
Volkov P, et al. A genome-wide mQTL analysis in human adipose tissue identifies genetic variants associated with DNA methylation, gene expression and metabolic traits. PLoS One. 2016;11(6):e0157776.
Antequera F, Bird A. CpG islands as genomic footprints of promoters that are associated with replication origins. Curr Biol. 1999;9(17):R661–7.
Irizarry RA, et al. The human colon cancer methylome shows similar hypo- and hypermethylation at conserved tissue-specific CpG island shores. Nat Genet. 2009;41(2):178–86.
Ziller MJ, et al. Charting a dynamic DNA methylation landscape of the human genome. Nature. 2013;500(7463):477–81.
Zhang DD, et al. Genetic control of individual differences in gene-specific methylation in human brain. Am J Hum Genet. 2010;86(3):411–9.
Zhi D, et al. SNPs located at CpG sites modulate genome-epigenome interaction. Epigenetics. 2013;8(8):802–6.
Thompson MJ, et al. An epigenetic aging clock for dogs and wolves. Aging-Us. 2017;9(3):1055–68.
McKenna A, et al. The genome analysis toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. Genome Res. 2010;20(9):1297–303.
del Rosario RCH, et al. Sensitive detection of chromatin-altering polymorphisms reveals autoimmune disease mechanisms. Nat Methods. 2015;12(5):458–64.
Deelen P, et al. Calling genotypes from public RNA-sequencing data enables identification of genetic variants that affect gene-expression levels. Genome Medicine. 2015;7(1):30.
Do C, et al. Genetic-epigenetic interactions in cis: a major focus in the post-GWAS era. Genome Biol. 2017;18(1):120.
Wilkinson LS, Davies W, Isles AR. Genomic imprinting effects on brain development and function. Nat Rev Neurosci. 2007;8(11):832–43.
Knowles DA, et al. Allele-specific expression reveals interactions between genetic variation and environment. Nat Methods. 2017;14(7):699–702.
Zhang Y, Liu JS. Fast and accurate approximation to significance tests in genome-wide association studies. J Am Stat Assoc. 2011;106(495):846–57.
Segal BD, et al. Fast approximation of small p-values in permutation tests by partitioning the permutations. Biometrics. 2018;74(1):196–206.
Tobi EW, et al. DNA methylation signatures link prenatal famine exposure to growth and metabolism. Nat Commun. 2015;6:5592.
Richardson TG, et al. Systematic Mendelian randomization framework elucidates hundreds of CpG sites which may mediate the influence of genetic variants on disease. Hum Mol Genet. 2018;27(18):3293–304.
Huang JV, et al. DNA methylation in blood as a mediator of the association of mid-childhood body mass index with cardio-metabolic risk score in early adolescence. Epigenetics. 2018;13(10–11):1072–87.
Krueger F. Trim Galore!: a wrapper tool around Cutadapt and FastQC to consistently apply quality and adapter trimming to FastQ files. Version 0.4. 2015.
Guo WL, et al. BS-Seeker2: a versatile aligning pipeline for bisulfite sequencing data. BMC Genomics. 2013;14(1):774.
Danecek P, et al. The variant call format and VCFtools. Bioinformatics. 2011;27(15):2156–8.
Wall JD, et al. Genomewide ancestry and divergence patterns from low-coverage sequencing data reveal a complex history of admixture in wild baboons. Mol Ecol. 2016;25(14):3469–83.
Lea AJ, et al. Maximizing ecological and evolutionary insight in bisulfite sequencing data sets. Nature Ecol Evol. 2017;1(8):1074–83.
Lea AJ, et al. Resource base influences genome-wide DNA methylation levels in wild baboons (Papio cynocephalus). Mol Ecol. 2016;25(8):1681–96.
Du P, et al. Comparison of Beta-value and M-value methods for quantifying methylation levels by microarray analysis. BMC Bioinformatics. 2010;11:587.
Gardinergarden M, Frommer M. Cpg islands in vertebrate genomes. J Mol Biol. 1987;196(2):261–82.
Ioshikhes IP, Zhang MQ. Large-scale human promoter mapping using CpG islands. Nat Genet. 2000;26(1):61–3.
Saxonov S, Berg P, Brutlag DL. A genome-wide analysis of CpG dinucleotides in the human genome distinguishes two distinct classes of promoters. Proc Natl Acad Sci U S A. 2006;103(5):1412–7.
Esteller M. CpG island hypermethylation and tumor suppressor genes: a booming present, a brighter future. Oncogene. 2002;21(35):5427–40.
Bird A. DNA methylation patterns and epigenetic memory. Genes Dev. 2002;16(1):6–21.
Doi A, et al. Differential methylation of tissue- and cancer-specific CpG island shores distinguishes human induced pluripotent stem cells, embryonic stem cells and fibroblasts. Nat Genet. 2009;41(12):1350–3.
Bibikova M, et al. High density DNA methylation array with single CpG site resolution. Genomics. 2011;98(4):288–95.
Sandoval J, et al. Validation of a DNA methylation microarray for 450,000 CpG sites in the human genome. Epigenetics. 2011;6(6):692–702.
Lindblad-Toh K, et al. Genome sequence, comparative analysis and haplotype structure of the domestic dog. Nature. 2005;438(7069):803–19.
Fan Y, Vilgalys T, Sun S, Peng Q, Tung J, Zhou X. IMAGE: high-powered detection of genetic effects on DNA methylation using integrated methylation QTL mapping and allele-specific analysis. Source Code Github Repository, 2019. https://github.com/fanyue322/IMAGE. Accessed 13 July 2019.
Fan Y, Vilgalys T, Sun S, Peng Q, Tung J, Zhou X. IMAGE: high-powered detection of genetic effects on DNA methylation using integrated methylation QTL mapping and allele-specific analysis. Source Code DOI. 2019. https://doi.org/10.5281/zenodo.3334384. Accessed 13 July 2019.
Fan Y, Vilgalys T, Sun S, Peng Q, Tung J, Zhou X. IMAGE: high-powered detection of genetic effects on DNA methylation using integrated methylation QTL mapping and allele-specific analysis. Analysis Code Github Repository. 2019. https://github.com/fanyue322/IMAGEreproduce. Accessed 13 July 2019.
Fan Y, Vilgalys T, Sun S, Peng Q, Tung J, Zhou X. IMAGE: high-powered detection of genetic effects on DNA methylation using integrated methylation QTL mapping and allele-specific analysis. Analysis Code Zenodo. 2019. https://doi.org/10.5281/zenodo.3334388. Accessed 13 July 2019.
We thank Yichen Si at the University of Michigan for helping with the initial exploration of the method. This study also makes use of data generated by the Amboseli Baboon Research Project (ABRP), data generated in the Yellowstone gray wolf population, and the Wellcome Trust Case Control Consortium (WTCCC). A full list of ABRP past and current funding sources and contributors to these data is available at http://amboselibaboons.nd.edu. A full list of the investigators who contributed to the generation of the WTCCC data is available from http://www.wtccc.org.uk/. Funding for the WTCCC project was provided by the Wellcome Trust under awards 076113 and 085475.
Review history
The review history is available as Additional file 4.
This study was supported by the National Institutes of Health (NIH) grants R01HD088558 and R01HG009124 and National Science Foundation (NSF) grants DMS1712933 and BCS1751783. YF is also supported by a scholarship from the China Scholarship Council. Computing in this study is supported in part by the North Carolina Biotechnology Center (Grant 2016-IDG-1013).
Systems Engineering Institute, Xi'an Jiaotong University, Xi'an, 710049, Shaanxi, People's Republic of China
Yue Fan & Qinke Peng
Department of Biostatistics, University of Michigan, Ann Arbor, MI, 48109, USA
Yue Fan, Shiquan Sun & Xiang Zhou
Departments of Evolutionary Anthropology and Biology, Duke University, Durham, NC, 27708, USA
Tauras P. Vilgalys & Jenny Tung
Duke University Population Research Institute, Duke University, Durham, NC, 27708, USA
Jenny Tung
Center for Statistical Genetics, University of Michigan, Ann Arbor, MI, 48109, USA
Xiang Zhou
JT and XZ conceived the idea and provided funding support. YF and XZ developed the method and designed the experiments. YF implemented the software and performed simulations with assistance from SS and QP. YF and TPV performed real data analysis. YF, JT, and XZ wrote the manuscript with input from all other authors. All authors read and approved the final manuscript.
Correspondence to Xiang Zhou.
Supplementary text on IMAGE modeling and inference details.
Supplementary figures on the performance evaluation of IMAGE and on the quality control of the real data applications.
Supplementary tables on functional enrichment analyses and type I error control examination.
Review history.
Fan, Y., Vilgalys, T.P., Sun, S. et al. IMAGE: high-powered detection of genetic effects on DNA methylation using integrated methylation QTL mapping and allele-specific analysis. Genome Biol 20, 220 (2019). https://doi.org/10.1186/s13059-019-1813-1
Allele-specific methylation
Methylation quantitative trait locus
mQTL
Bisulfite sequencing
Binomial mixed model
Penalized quasi-likelihood | CommonCrawl |
Comparative Migration Studies
Correction to: Between fragmentation and institutionalisation: the rise of migration studies as a research field
Nathan Levy1,
Asya Pisarevskaya1 &
Peter Scholten1
Comparative Migration Studies volume 8, Article number: 29 (2020)
The Original Article was published on 06 July 2020
Correction to: Comparative Migration Studies 8, 24 (2020)
Following publication of the original article (Levy, Pisarevskaya, & Scholten, 2020), the authors reported several errors.
In the Abstract, "co -authorships" has been corrected to "co-authorships".
Footnote 1 contained a typesetting mistake – duplicate text was added. It has been corrected to: "E.g. a transdisciplinary article is one where it becomes difficult to ascertain the discipline from which it has originated, even though it is clearly identified as belonging to migration studies."
In the section 'Bibliometric analysis', the formula has been corrected to:
$$ P_t=\frac{N_t\,\left(N_t-1\right)}{2}, \quad \text{where } N_t \text{ is the total number of sources for period } t. $$
The 8th paragraph of the 'Bibliometric analysis' contained a typesetting mistake – the first part (highlighted in bold typeface) was omitted. This paragraph has been corrected to: "We did this in five year increments (1975–1979; 1980–1984, and so on, with the exception of the final period, 2015–2018). The network files exported from VOSviewer can be found in the Harvard Dataverse (see Levy, Pisarevskaya, & Scholten, 2020). Following our iterative logic, this enabled us to analyse the data in the same terms – i.e. "early 1980s", "late 1990s" – as our interviewees described their perception of the field's development. VOSviewer clusters the authors according to how often they are cited together. We take these clusters to approximate the variety of epistemic communities within the field in each period. To assign labels, we used Google Scholar to find the unifying features of each cluster. We checked the research of each cluster's most-cited authors, and the first-page results (usually the authors' higher-cited works) enabled us to grasp their conceptual, thematic, or disciplinary focus. We triangulated this information with the reflections shared by our expert interviewees."
Footnote 2 contained a typesetting mistake – duplicate text was added. It has been corrected to: "See sheet 'all countries weighted' for relativized co-authorship statistics."
In the 9th paragraph of the 'Bibliometric analysis', "co -citation" has been corrected to "co-citation".
In the 2nd paragraph of the 'Disciplines and cross-disciplinary osmosis', "most -cited" has been corrected to "most-cited".
In the 5th paragraph of the 'Disciplines and cross-disciplinary osmosis', "Pennix" has been corrected to "Penninx".
The 5th paragraph of the 'Conclusion and discussion: fragmentation and institutionalisation in the field of migration studies' section contained a typesetting mistake – the phrase "that refer to" was duplicated. The duplicated phrase was removed.
The original article (Levy et al., 2020) has been corrected with regards to the above errors.
Levy, N., Pisarevskaya, A., & Scholten, P. (2020). Between fragmentation and institutionalisation: the rise of migration studies as a research field. Comparative Migration Studies, 8, 24 https://doi.org/10.1186/s40878-020-00180-7.
Department of Public Administration and Sociology, Erasmus University Rotterdam, Rotterdam, Netherlands
Nathan Levy, Asya Pisarevskaya & Peter Scholten
Correspondence to Nathan Levy.
The original article can be found online at https://doi.org/10.1186/s40878-020-00180-7
Levy, N., Pisarevskaya, A. & Scholten, P. Correction to: Between fragmentation and institutionalisation: the rise of migration studies as a research field. CMS 8, 29 (2020). https://doi.org/10.1186/s40878-020-00200-6 | CommonCrawl |
January 2015, 14(1): 83-106. doi: 10.3934/cpaa.2015.14.83
Mean value properties of fractional second order operators
Fausto Ferrari 1,
Dipartimento di Matematica dell'Università di Bologna, Piazza di Porta S. Donato, 5, 40126 Bologna
Received February 2014 Revised April 2014 Published September 2014
In this paper we introduce a method to define fractional operators using mean value operators. In particular we discuss a geometric approach in order to construct fractional operators. As a byproduct we define fractional linear operators in Carnot groups, moreover we adapt our technique to define some nonlinear fractional operators associated with the $p-$Laplace operators in Carnot groups.
Keywords: Carnot groups, mean operators, fundamental solutions, nonlinear operators.
Mathematics Subject Classification: Primary: 35H20, 35J60; Secondary: 35E05, 35J9.
Citation: Fausto Ferrari. Mean value properties of fractional second order operators. Communications on Pure & Applied Analysis, 2015, 14 (1) : 83-106. doi: 10.3934/cpaa.2015.14.83
Fausto Ferrari | CommonCrawl |
Two-dimensional DOA estimation of coherent sources using two parallel uniform linear arrays
Heping Shi1,
Zhuo Li2,3,
Jihua Cao4 &
Hua Chen5
A novel two-dimensional (2-D) direction-of-arrival (DOA) estimation approach based on matrix reconstruction is proposed for coherent signals impinging on two parallel uniform linear arrays (ULAs). In the proposed algorithm, the coherency of incident signals is decorrelated through two equivalent covariance matrices, which are constructed by utilizing cross-correlation information of the received data between the two parallel ULAs and a changing reference element. Then, the 2-D DOAs can be estimated via eigenvalue decomposition (EVD) of the newly constructed matrix. Compared with previous works, the proposed algorithm offers remarkably good estimation performance. In addition, the proposed algorithm achieves automatic parameter pair-matching without additional computation. Simulation results demonstrate the effectiveness and efficiency of the proposed algorithm.
2-D direction-of-arrival (DOA) estimation of incident coherent source signals has received increasing attention in radar, sonar, and seismic exploration [1–5]. Many high-resolution techniques, such as MUSIC [6] and ESPRIT [7], have achieved excellent estimation performance. However, the aforementioned methods assume that the incident signals are independent and suffer performance degradation due to rank deficiency when coherent signals exist. To decorrelate coherent signals, the spatial smoothing (SS) [8] and forward-backward spatial smoothing (FBSS) [9] techniques are especially noteworthy. However, these techniques reduce the effective array aperture, and the maximum number of resolvable signals cannot exceed the number of array sensors. In [10], an effective matrix decomposition method utilizing the cross-correlation matrix is proposed to decorrelate coherent signals. Chen et al. [11] have proposed a 2-D ESPRIT-like method that realizes decorrelation by reconstructing a Toeplitz matrix. With the help of three correlation matrices, Wang et al. [12] have presented a 2-D DOA estimation method. Recently, Nie et al. [13] have introduced an efficient subspace algorithm for 2-D DOA estimation. In [14], a novel 2-D DOA estimation method using a sparse L-shaped array is proposed to obtain high performance with low complexity. Xia et al. [15] have proposed a polynomial root-finding-based method for 2-D DOA estimation using two parallel uniform linear arrays (ULAs), which has a low computational burden. Several decorrelation algorithms have been proposed in [16–18] to achieve 2-D DOA estimation by utilizing two parallel ULAs. However, the estimation performance of the abovementioned algorithms is limited because the structure of the array is not fully exploited.
For the purpose of description, the following notations are used. Boldface italic lowercase/uppercase letters denote vectors/matrices. (·)*, (·)T, (·)†, and (·)H stand for the conjugate, transpose, Moore-Penrose pseudo-inverse, and conjugate transpose of a vector/matrix, respectively. E(·) and diag(·) denote the expectation operator and a diagonal matrix, respectively.
Data model
As illustrated in Fig. 1, the antenna array consists of two parallel ULAs (\(X_a\) and \(Y_a\)) in the x-y plane. Each ULA has N omnidirectional sensors with spacing \(d_x\), and the interelement spacing between the two ULAs is \(d_y\). Suppose that M far-field narrowband coherent signals impinge on the two parallel ULAs from 2-D distinct directions \((\alpha_i, \beta_i)\) \((1 \le i \le M)\), where \(\alpha_i\) and \(\beta_i\) are measured relative to the x axis and to the y axis, respectively.
Parallel array configuration for 2-D DOA estimation
Let the kth element of the subarray \(X_a\) be the phase reference; then, the observed signal \( {x}_m^k(t) \) at the mth element can be expressed as
$$ {x}_m^k(t)={\displaystyle \sum_{i=1}^M{e}^{- j\left(2\pi /\lambda \right)\left( m- k\right){d}_x \cos {\alpha}_i}{s}_i(t)}+{n}_{x, m}(t) $$
where \(s_i(t)\) denotes the complex envelope of the ith coherent signal, λ is the signal wavelength, and \(d_x\) represents the spacing between two adjacent sensors. The superscript k (k = 1, 2, ⋯, N) of \( {x}_m^k(t) \) stands for the number of the reference element in subarray \(X_a\), and the subscript m (m = 1, 2, ⋯, N) of \( {x}_m^k(t) \) denotes the number of the element along the positive x axis in subarray \(X_a\). \(n_{x,m}(t)\) is the additive Gaussian white noise (AGWN) at the mth element of subarray \(X_a\).
Note that when m = k, the observed signals at the kth element can be expressed as
$$ \begin{array}{c}{x}_k^k(t)={\displaystyle \sum_{i=1}^M{e}^{- j\left(2\pi /\lambda \right)\left( k- k\right){d}_x \cos {\alpha}_i}{s}_i(t)}+{n}_{x, k}(t)\\ {}\kern2.3em ={\displaystyle \sum_{i=1}^M{s}_i(t)}+{n}_{x, k}(t)\end{array} $$
With similar processing, employing the kth element of the subarray \(Y_a\) as the phase reference, the observed signal \( {y}_m^k(t) \) at the mth element can be expressed as
$$ {y}_m^k(t)=\sum_{i=1}^M e^{-j\left(2\pi/\lambda\right)\left(m-k\right)d_x\cos\alpha_i}\, e^{j\left(2\pi/\lambda\right)d_y\cos\beta_i}\, s_i(t)+n_{y,m}(t) $$
As in (1), the superscript k (k = 1, 2, ⋯, N) of \( {y}_m^k(t) \) stands for the number of the reference element in subarray \(Y_a\), and the subscript m (m = 1, 2, ⋯, N) of \( {y}_m^k(t) \) denotes the number of the element along the positive x axis in subarray \(Y_a\). \(n_{y,m}(t)\) is the AGWN at the mth element of subarray \(Y_a\).
The observed vectors X k(t) and Y k(t) can be written as
$$ {\mathbf{X}}^k(t)={\left[{x}_1^k(t),{x}_2^k(t),\cdots, {x}_N^k(t)\right]}^T $$
$$ {\mathbf{Y}}^k(t)={\left[{y}_1^k(t),{y}_2^k(t),\cdots, {y}_N^k(t)\right]}^T $$
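Before describing the proposed algorithm, it is convenient to have a concrete realization of the data model in (1) to (5). The following sketch is our own illustration rather than part of the original paper: it assumes half-wavelength spacing, unit-power fully coherent sources sharing one waveform with fixed phase offsets, and complex white Gaussian noise; all function and parameter names are hypothetical.

```python
import numpy as np

def simulate_parallel_ulas(alphas_deg, betas_deg, N=7, snapshots=1000,
                           snr_db=5.0, wavelength=1.0, seed=0):
    """Generate snapshots of the two parallel ULAs, following Eqs. (1)-(5).

    alphas_deg, betas_deg : 2-D angles (alpha_i, beta_i) in degrees
    Returns complex arrays X, Y of shape (N, snapshots).
    """
    rng = np.random.default_rng(seed)
    d_x = d_y = wavelength / 2.0                      # half-wavelength spacing (assumption)
    alphas = np.deg2rad(np.asarray(alphas_deg, float))
    betas = np.deg2rad(np.asarray(betas_deg, float))
    M = alphas.size
    m = np.arange(N)[:, None]                         # element index, element 1 as reference

    # Steering matrix of subarray X_a and the extra phase of subarray Y_a
    A = np.exp(-1j * 2 * np.pi / wavelength * m * d_x * np.cos(alphas)[None, :])
    psi = np.exp(1j * 2 * np.pi / wavelength * d_y * np.cos(betas))

    # Fully coherent sources: one unit-power waveform with fixed per-source phase offsets
    base = (rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)) / np.sqrt(2)
    S = np.outer(np.exp(1j * np.linspace(np.pi / 5, np.pi / 3, M)), base)

    sigma = 10 ** (-snr_db / 20) / np.sqrt(2)         # noise std per real/imag part
    noise = lambda: sigma * (rng.standard_normal((N, snapshots))
                             + 1j * rng.standard_normal((N, snapshots)))
    X = A @ S + noise()
    Y = A @ (psi[:, None] * S) + noise()
    return X, Y
```

For instance, `X, Y = simulate_parallel_ulas([75, 100, 120, 60], [65, 75, 90, 50])` would produce snapshots for the scenario used later in the simulation section.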
The proposed algorithm
For the subarray \(X_a\), the auto-correlation calculation is defined as follows:
$$ r^k_{x_m^k\left(x_k^k\right)^{\ast}} = E\left[x_m^k(t)\left(x_k^k(t)\right)^{\ast}\right] = \sum_{i=1}^M g_i(t)\, e^{-j\left(2\pi/\lambda\right)\left(m-k\right)d_x\cos\alpha_i} + \sigma^2\delta\left(m,k\right) $$
$$ {g}_i(t)={\displaystyle \sum_{j=1}^M{s}_i(t){s}_j^{\ast }(t)} $$
$$ \delta \left( m, k\right)=\left\{\begin{array}{c}\hfill 1,\kern1.3em m= k\hfill \\ {}\hfill 0,\kern1.3em m\ne k\hfill \end{array}\right. $$
Assume that the kth element of the subarray \(X_a\) is the phase reference. Thus, the auto-correlation vector \( {\mathbf{r}}_{{\mathbf{X}}^k{\left({x}_k^k\right)}^{\ast}}^k \) between \(\mathbf{X}^k(t)\) and the corresponding reference element \( {x}_k^k(t) \) can be defined as follows:
$$ \mathbf{r}^k_{\mathbf{X}^k\left(x_k^k\right)^{\ast}} = E\left[\mathbf{X}^k(t)\left(x_k^k(t)\right)^{\ast}\right] = \left[r^k_{x_1^k\left(x_k^k\right)^{\ast}},\ r^k_{x_2^k\left(x_k^k\right)^{\ast}},\ \cdots,\ r^k_{x_N^k\left(x_k^k\right)^{\ast}}\right]^{\mathrm{T}} $$
It is obvious that N column vectors are obtained as the superscript k of \( {\mathbf{r}}_{{\mathbf{X}}^k{\left({x}_k^k\right)}^{\ast}}^k \) is changed from 1 to N. Therefore, we construct an equivalent auto-covariance matrix \(\mathbf{R}_{xx}\) as follows:
$$ \mathbf{R}_{xx}=\left[\mathbf{r}^1_{\mathbf{X}^1\left(x_1^1\right)^{\ast}},\ \mathbf{r}^2_{\mathbf{X}^2\left(x_2^2\right)^{\ast}},\ \cdots,\ \mathbf{r}^N_{\mathbf{X}^N\left(x_N^N\right)^{\ast}}\right] = \begin{bmatrix} r^1_{x_1^1\left(x_1^1\right)^{\ast}} & r^2_{x_1^2\left(x_2^2\right)^{\ast}} & \cdots & r^N_{x_1^N\left(x_N^N\right)^{\ast}} \\ r^1_{x_2^1\left(x_1^1\right)^{\ast}} & r^2_{x_2^2\left(x_2^2\right)^{\ast}} & \cdots & r^N_{x_2^N\left(x_N^N\right)^{\ast}} \\ \vdots & \vdots & \ddots & \vdots \\ r^1_{x_N^1\left(x_1^1\right)^{\ast}} & r^2_{x_N^2\left(x_2^2\right)^{\ast}} & \cdots & r^N_{x_N^N\left(x_N^N\right)^{\ast}} \end{bmatrix} $$
Similar to (6), for the subarray \(Y_a\), the cross-correlation \( {\tilde{r}}_{y_m^k{\left({x}_k^k\right)}^{\ast}}^k \) can be written as
$$ \tilde{r}^k_{y_m^k\left(x_k^k\right)^{\ast}} = E\left[y_m^k(t)\left(x_k^k(t)\right)^{\ast}\right] = \sum_{i=1}^M g_i(t)\, e^{-j\left(2\pi/\lambda\right)\left(m-k\right)d_x\cos\alpha_i}\, e^{j\left(2\pi/\lambda\right)d_y\cos\beta_i} $$
Then, the cross-correlation vector \( {\tilde{\mathbf{r}}}_{{\mathbf{Y}}^k{\left({x}_k^k\right)}^{\ast}}^k \) between \(\mathbf{Y}^k(t)\) and the reference element \( {x}_k^k(t) \) in subarray \(X_a\) can be expressed as
$$ \tilde{\mathbf{r}}^k_{\mathbf{Y}^k\left(x_k^k\right)^{\ast}} = E\left[\mathbf{Y}^k(t)\left(x_k^k(t)\right)^{\ast}\right] = \left[\tilde{r}^k_{y_1^k\left(x_k^k\right)^{\ast}},\ \tilde{r}^k_{y_2^k\left(x_k^k\right)^{\ast}},\ \cdots,\ \tilde{r}^k_{y_N^k\left(x_k^k\right)^{\ast}}\right]^T $$
Obviously, we can obtain another N column vectors when the superscript k of \( {\tilde{\mathbf{r}}}_{{\mathbf{Y}}^k{\left({x}_k^k\right)}^{\ast}}^k \) is varied from 1 to N. Based on these N column vectors, an equivalent cross-covariance matrix \(\mathbf{R}_{yx}\) can be given by
$$ \mathbf{R}_{yx}=\left[\tilde{\mathbf{r}}^1_{\mathbf{Y}^1\left(x_1^1\right)^{\ast}},\ \tilde{\mathbf{r}}^2_{\mathbf{Y}^2\left(x_2^2\right)^{\ast}},\ \cdots,\ \tilde{\mathbf{r}}^N_{\mathbf{Y}^N\left(x_N^N\right)^{\ast}}\right] = \begin{bmatrix} \tilde{r}^1_{y_1^1\left(x_1^1\right)^{\ast}} & \tilde{r}^2_{y_1^2\left(x_2^2\right)^{\ast}} & \cdots & \tilde{r}^N_{y_1^N\left(x_N^N\right)^{\ast}} \\ \tilde{r}^1_{y_2^1\left(x_1^1\right)^{\ast}} & \tilde{r}^2_{y_2^2\left(x_2^2\right)^{\ast}} & \cdots & \tilde{r}^N_{y_2^N\left(x_N^N\right)^{\ast}} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{r}^1_{y_N^1\left(x_1^1\right)^{\ast}} & \tilde{r}^2_{y_N^2\left(x_2^2\right)^{\ast}} & \cdots & \tilde{r}^N_{y_N^N\left(x_N^N\right)^{\ast}} \end{bmatrix} $$
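With finite data, the expectations in (6) and (11) are replaced by averages over the T available snapshots; the kth columns of \(\mathbf{R}_{xx}\) and \(\mathbf{R}_{yx}\) then collect the correlations of every element with the kth element of \(X_a\) taken as reference. A minimal sketch of this construction (ours, assuming snapshot matrices of shape N × T such as those produced by the simulation above):

```python
import numpy as np

def equivalent_covariances(X, Y):
    """Sample versions of R_xx (Eq. 10) and R_yx (Eq. 13).

    X, Y : (N, T) snapshot matrices of subarrays X_a and Y_a.
    Entry (m, k) of R_xx averages x_m(t) x_k(t)^*, i.e. the correlation of
    element m with the kth (reference) element; likewise for R_yx.
    """
    T = X.shape[1]
    R_xx = (X @ X.conj().T) / T
    R_yx = (Y @ X.conj().T) / T
    return R_xx, R_yx
```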
In order to obtain the final matrix form of the equivalent auto-covariance matrix R xx as in (10), we need to further investigate the auto-correlation calculation \( {r}_{x_m^k{\left({x}_k^k\right)}^{\ast}}^k \) in (6).
$$ \begin{aligned} r^k_{x_m^k\left(x_k^k\right)^{\ast}} &= E\left[x_m^k(t)\left(x_k^k(t)\right)^{\ast}\right]\\ &= \sum_{i=1}^M\sum_{j=1}^M s_i(t)s_j^{\ast}(t)\,e^{-j\left(2\pi/\lambda\right)\left(m-k\right)d_x\cos\alpha_i}+\sigma^2\delta\left(m,k\right)\\ &= \sum_{i=1}^M\sum_{j=1}^M s_i(t)s_j^{\ast}(t)\,e^{-j\left(2\pi/\lambda\right)\left[\left(m-1\right)-\left(k-1\right)\right]d_x\cos\alpha_i}+\sigma^2\delta\left(m,k\right)\\ &= \sum_{i=1}^M\sum_{j=1}^M s_i(t)s_j^{\ast}(t)\,e^{-j\left(2\pi/\lambda\right)\left(m-1\right)d_x\cos\alpha_i}\cdot e^{j\left(2\pi/\lambda\right)\left(k-1\right)d_x\cos\alpha_i}+\sigma^2\delta\left(m,k\right)\\ &= \left[e^{-j\left(2\pi/\lambda\right)\left(m-1\right)d_x\cos\alpha_1},\ e^{-j\left(2\pi/\lambda\right)\left(m-1\right)d_x\cos\alpha_2},\ \cdots,\ e^{-j\left(2\pi/\lambda\right)\left(m-1\right)d_x\cos\alpha_M}\right]\\ &\quad\cdot\begin{bmatrix} g_1(t) & 0 & \cdots & 0\\ 0 & g_2(t) & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & g_M(t) \end{bmatrix}\cdot\begin{bmatrix} e^{j\left(2\pi/\lambda\right)\left(k-1\right)d_x\cos\alpha_1}\\ e^{j\left(2\pi/\lambda\right)\left(k-1\right)d_x\cos\alpha_2}\\ \vdots\\ e^{j\left(2\pi/\lambda\right)\left(k-1\right)d_x\cos\alpha_M} \end{bmatrix}+\sigma^2\delta\left(m,k\right)\\ &= \mathtt{a}_m\left(\alpha\right)\mathbf{G}\,\mathtt{a}_k^H\left(\alpha\right)+\sigma^2\delta\left(m,k\right) \end{aligned} $$
$$ \mathbf{G}=\operatorname{diag}\left[g_1(t),\ g_2(t),\ \cdots,\ g_M(t)\right] $$
$$ \mathtt{a}_m\left(\alpha\right)=\left[e^{-j\left(2\pi/\lambda\right)\left(m-1\right)d_x\cos\alpha_1},\ \cdots,\ e^{-j\left(2\pi/\lambda\right)\left(m-1\right)d_x\cos\alpha_M}\right] $$
It can be seen from (16) that \( {\mathtt{a}}_m\left(\alpha \right) \) is the mth row of the steering matrix of the covariance matrix in the case where the first element of the subarray \(X_a\) is set as the reference element. According to (14), (15), and (16), Eq. (9) can be rewritten as
$$ \begin{array}{l}\kern0.2em {\mathbf{r}}_{{\mathbf{X}}^k{\left({x}_k^k\right)}^{\ast}}^k={\left[{r}_{x_1^k{\left({x}_k^k\right)}^{\ast}}^k,{r}_{x_2^k{\left({x}_k^k\right)}^{\ast}}^k,\cdots, {r}_{x_N^k{\left({x}_k^k\right)}^{\ast}}^k\right]}^T\\ {}\kern3em =\mathbf{A}\left(\alpha \right)\mathbf{G}{\mathtt{a}}_k^H\left(\alpha \right)+{\sigma}^2\delta \left( m, k\right)\end{array} $$
where \( \mathbf{A}(\alpha)=\left[\mathtt{a}(\alpha_1),\ \mathtt{a}(\alpha_2),\ \cdots,\ \mathtt{a}(\alpha_M)\right] \) is the steering matrix of the covariance matrix along the subarray \(X_a\), and \( \mathtt{a}(\alpha_i)=\left[1,\ e^{-j(2\pi/\lambda)d_x\cos\alpha_i},\ \cdots,\ e^{-j(2\pi/\lambda)(N-1)d_x\cos\alpha_i}\right]^T \).
Based on (17), the matrix \(\mathbf{R}_{xx}\) in (10) can be rewritten as
$$ \mathbf{R}_{xx}=\left[\mathbf{r}^1_{\mathbf{X}^1\left(x_1^1\right)^{\ast}},\ \mathbf{r}^2_{\mathbf{X}^2\left(x_2^2\right)^{\ast}},\ \cdots,\ \mathbf{r}^N_{\mathbf{X}^N\left(x_N^N\right)^{\ast}}\right] = \mathbf{A}(\alpha)\,\mathbf{G}\,\mathbf{A}^H(\alpha) + \operatorname{diag}\left[\sigma_1^2,\ \sigma_2^2,\ \cdots,\ \sigma_N^2\right] $$
where \( {\sigma}_i^2 \) is the noise power on the ith element of the subarray X a .
Similar to the equivalent auto-covariance matrix R xx in (18), the equivalent cross-covariance matrix R yx in (13) can be rewritten as
$$ \begin{array}{l}\kern0.2em {\mathbf{R}}_{yx}=\left[{\tilde{\mathbf{r}}}_{{\mathbf{Y}}^1{\left({x}_1^1\right)}^{\ast}}^1,{\tilde{\mathbf{r}}}_{{\mathbf{Y}}^2{\left({x}_2^2\right)}^{\ast}}^2,\cdots, {\tilde{\mathbf{r}}}_{{\mathbf{Y}}^N{\left({x}_N^N\right)}^{\ast}}^N\right]\\ {}\kern1.9em =\mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right)\mathbf{G}{\mathbf{A}}^H\left(\alpha \right)\end{array} $$
$$ \boldsymbol{\Psi}\left(\beta\right)=\begin{bmatrix} \upsilon\left(\beta_1\right) & 0 & \cdots & 0\\ 0 & \upsilon\left(\beta_2\right) & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \upsilon\left(\beta_M\right) \end{bmatrix} $$
From (18) and (19), it is easy to see that since \(\alpha_i \ne \alpha_j\) \((i \ne j)\), A(α) is a full column rank matrix with rank(A(α)) = M. Similarly, since \(\beta_i \ne \beta_j\) \((i \ne j)\), Ψ(β) is a full-rank diagonal matrix with rank(Ψ(β)) = M. According to (7) and (15), note that the incident signals \(s_i(t) \ne 0\) \((i = 1, 2, \cdots, M)\), so \(g_i(t) \ne 0\). As a result, G is a full-rank diagonal matrix, namely, rank(G) = M. If the narrowband far-field signals are statistically independent, the diagonal element \(g_i(t)\) of the matrix G represents the power of the ith incident signal. If the narrowband far-field signals are fully coherent, the diagonal element \(g_i(t)\) denotes the sum of the powers of the M incident signals. If uncorrelated and coherent signals coexist, say K coherent signals and M − K statistically independent signals, then \(g_i(t)\) stands for the sum of the powers of the K coherent signals when the ith source belongs to the coherent group, and for the power of the ith signal when it belongs to the remaining M − K mutually independent signals.
From the above theoretical analysis, the coherency of the incident signals is decorrelated through the matrix construction, regardless of whether the signals are uncorrelated, coherent, or partially correlated.
From (18), we can obtain the noiseless auto-covariance matrix \( {\widehat{\mathbf{R}}}_{xx} \)
$$ {\widehat{\mathbf{R}}}_{xx}=\mathbf{A}\left(\alpha \right)\mathbf{G}{\mathbf{A}}^H\left(\alpha \right) $$
The eigenvalue decomposition (EVD) of \( {\widehat{\mathbf{R}}}_{xx} \) can be written as
$$ {\widehat{\mathbf{R}}}_{xx}={\displaystyle \sum_{i=1}^M{\lambda}_i{\mathbf{U}}_i{\mathbf{U}}_i^H} $$
where \(\{\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_M\}\) and \(\{\mathbf{U}_1, \mathbf{U}_2, \cdots, \mathbf{U}_M\}\) are the non-zero eigenvalues and the corresponding eigenvectors of the noiseless auto-covariance matrix \( {\widehat{\mathbf{R}}}_{xx} \), respectively. Then, the pseudo-inverse of \( {\widehat{\mathbf{R}}}_{xx} \) is
$$ {\mathbf{R}}_{xx}^{\dagger }={\displaystyle \sum_{i=1}^M{\lambda}_i^{-1}{\mathbf{U}}_i{\mathbf{U}}_i^H} $$
Since A(α) is a column full-rank matrix, the Eq. (22) can be expressed as
$$ \begin{array}{c}\kern0.1em \mathbf{G}{\mathbf{A}}^H\left(\alpha \right)={\mathbf{A}}^{-1}\left(\alpha \right){\widehat{\mathbf{R}}}_{xx}\\ {}\kern3.8em ={\left({\mathbf{A}}^H\left(\alpha \right)\mathbf{A}\left(\alpha \right)\right)}^{-1}{\mathbf{A}}^H\left(\alpha \right){\widehat{\mathbf{R}}}_{xx}\end{array} $$
According to (19) and (25), the matrix R yx can be rewritten as
$$ \begin{array}{l}\kern0.1em {\mathbf{R}}_{yx}=\mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right)\mathbf{G}{\mathbf{A}}^H\left(\alpha \right)\\ {}\kern1.8em =\mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right){\left({\mathbf{A}}^H\left(\alpha \right)\mathbf{A}\left(\alpha \right)\right)}^{-1}{\mathbf{A}}^H\left(\alpha \right){\widehat{\mathbf{R}}}_{xx}\end{array} $$
Right-multiplying both sides of (26) by \( {\mathbf{R}}_{xx}^{\dagger}\mathbf{A}\left(\alpha \right) \)
$$ \begin{array}{l}\kern0.1em {\mathbf{R}}_{yx}{\mathbf{R}}_{xx}^{\dagger}\mathbf{A}\left(\alpha \right)=\mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right){\left({\mathbf{A}}^H\left(\alpha \right)\mathbf{A}\left(\alpha \right)\right)}^{-1}{\mathbf{A}}^H\left(\alpha \right)\\ {}\kern6em {\widehat{\mathbf{R}}}_{xx}{\mathbf{R}}_{xx}^{\dagger}\mathbf{A}\left(\alpha \right)\end{array} $$
Substituting (23) and (24) into (27) yields
$$ \begin{aligned} \mathbf{R}_{yx}\mathbf{R}_{xx}^{\dagger}\mathbf{A}(\alpha) &= \mathbf{A}(\alpha)\boldsymbol{\Psi}(\beta)\left(\mathbf{A}^H(\alpha)\mathbf{A}(\alpha)\right)^{-1}\mathbf{A}^H(\alpha)\left(\sum_{i=1}^M\lambda_i\mathbf{U}_i\mathbf{U}_i^H\right)\left(\sum_{i=1}^M\lambda_i^{-1}\mathbf{U}_i\mathbf{U}_i^H\right)\mathbf{A}(\alpha)\\ &= \mathbf{A}(\alpha)\boldsymbol{\Psi}(\beta)\left(\mathbf{A}^H(\alpha)\mathbf{A}(\alpha)\right)^{-1}\mathbf{A}^H(\alpha)\left(\sum_{i=1}^M\mathbf{U}_i\mathbf{U}_i^H\right)\mathbf{A}(\alpha)\\ &= \mathbf{A}(\alpha)\boldsymbol{\Psi}(\beta)\left(\mathbf{A}^H(\alpha)\mathbf{A}(\alpha)\right)^{-1}\left(\mathbf{A}^H(\alpha)\mathbf{A}(\alpha)\right)\\ &= \mathbf{A}(\alpha)\boldsymbol{\Psi}(\beta) \end{aligned} $$
Notice that \( {\displaystyle \sum_{i=1}^M{\mathbf{U}}_i{\mathbf{U}}_i^H} \) acts as an identity on the column space of A(α), that is, \( \left({\displaystyle \sum_{i=1}^M{\mathbf{U}}_i{\mathbf{U}}_i^H}\right)\mathbf{A}\left(\alpha \right)=\mathbf{A}\left(\alpha \right) \), which justifies the simplification in (28). Based on (24) and (26), a new matrix R can be defined as follows:
$$ \mathbf{R}={\mathbf{R}}_{yx}{\mathbf{R}}_{xx}^{\dagger } $$
From (29), Eq. (28) can be further rewritten as
$$ \mathbf{R}\mathbf{A}\left(\alpha \right)=\mathbf{A}\left(\alpha \right)\boldsymbol{\Psi} \left(\beta \right) $$
Obviously, the columns of A(α) are the eigenvectors corresponding to the main diagonal elements of the diagonal matrix Ψ(β). Therefore, by performing the EVD of R, both A(α) and Ψ(β) can be obtained. Then, the DOAs of the coherent signals can be estimated according to \( \upsilon \left({\beta}_i\right)={e}^{j\left(2\pi /\lambda \right){d}_y \cos {\beta}_i} \) and \( \mathtt{a}\left({\alpha}_i\right)={\left[1,{e}^{- j\left(2\pi /\lambda \right){d}_x \cos {\alpha}_i},\cdots, {e}^{- j\left(2\pi /\lambda \right)\left( N-1\right){d}_x \cos {\alpha}_i}\right]}^T \) without additional computation for parameter pair-matching or 2-D peak searching.
Up to now, the steps of the proposed matrix reconstruction method with finite sampling data are summarized as follows:
Calculate the column vectors \( {\mathbf{r}}_{{\mathbf{X}}^k{\left({x}_k^k\right)}^{\ast}}^k \) of the equivalent auto-covariance matrix \(\mathbf{R}_{xx}\) by (6) and (9). Similarly, compute the column vectors \( {\tilde{\mathbf{r}}}_{{\mathbf{Y}}^k{\left({x}_k^k\right)}^{\ast}}^k \) of the equivalent cross-covariance matrix \(\mathbf{R}_{yx}\) according to (11) and (12)
Obtain the matrices \(\mathbf{R}_{xx}\) and \(\mathbf{R}_{yx}\) by (10) and (13)
Obtain the noiseless auto-covariance matrix \( {\widehat{\mathbf{R}}}_{xx} \) by (22). Then, perform EVD to obtain the pseudo-inverse matrix \( {\mathbf{R}}_{xx}^{\dagger } \)
Construct the new matrix R by (29) and then obtain A(α) and Ψ(β) by performing EVD of the new matrix R
Estimate the 2-D DOAs \(\theta_i = (\alpha_i, \beta_i)\) of the incident coherent source signals via \( \upsilon \left({\beta}_i\right)={e}^{j\left(2\pi /\lambda \right){d}_y \cos {\beta}_i} \) and \( \mathtt{a}\left({\alpha}_i\right)={\left[1,{e}^{- j\left(2\pi /\lambda \right){d}_x \cos {\alpha}_i},\cdots, {e}^{- j\left(2\pi /\lambda \right)\left( N-1\right){d}_x \cos {\alpha}_i}\right]}^T \).
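The steps above can be traced in code. The sketch below is our own illustration, not the authors' implementation: it keeps the M principal eigen-components of the sample \(\mathbf{R}_{xx}\) as a surrogate for the noiseless matrix in (22), reads each \(\beta_i\) from an eigenvalue of R and the paired \(\alpha_i\) from the phase ramp of the corresponding eigenvector, and assumes the number of signals M is known.

```python
import numpy as np

def estimate_2d_doa(R_xx, R_yx, M, d_x=0.5, d_y=0.5, wavelength=1.0):
    """Sketch of steps 3-5: EVD-based 2-D DOA estimation, after Eqs. (22)-(30).

    R_xx, R_yx : equivalent auto- and cross-covariance matrices (N x N)
    M          : number of incident signals (assumed known)
    Returns paired estimates (alphas, betas) in degrees.
    """
    # Step 3: keep the M principal components as a stand-in for the noiseless R_xx
    w, U = np.linalg.eigh(R_xx)
    order = np.argsort(w)[::-1][:M]
    w_s, U_s = w[order], U[:, order]
    R_xx_pinv = U_s @ np.diag(1.0 / w_s) @ U_s.conj().T        # Eq. (24)

    # Step 4: new matrix R and its EVD, Eqs. (29)-(30)
    R = R_yx @ R_xx_pinv
    vals, vecs = np.linalg.eig(R)
    keep = np.argsort(np.abs(vals))[::-1][:M]
    vals, vecs = vals[keep], vecs[:, keep]

    # Step 5: eigenvalues give beta, eigenvectors give alpha; pairing is automatic
    betas = np.degrees(np.arccos(np.clip(
        np.angle(vals) * wavelength / (2 * np.pi * d_y), -1.0, 1.0)))
    alphas = []
    for v in vecs.T:
        # average phase increment between adjacent elements of a(alpha_i)
        phase_step = np.angle(np.sum(v[1:] * np.conj(v[:-1])))
        alphas.append(np.degrees(np.arccos(np.clip(
            -phase_step * wavelength / (2 * np.pi * d_x), -1.0, 1.0))))
    return np.array(alphas), betas
```

With the earlier sketches, `estimate_2d_doa(*equivalent_covariances(X, Y), M=4)` would return the paired angle estimates in degrees.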
Simulation results
In this section, computer simulations are performed to ascertain the performance of the proposed algorithm. The proposed method is compared with another efficient algorithm (DMR-DOAM) in [17]. The number of sensors in each subarray is N = 7 with sensor spacing \(d_x = d_y = \lambda/2\). Consider M = 4 coherent signals with carrier frequency f = 900 MHz coming from α = (75°, 100°, 120°, 60°) and β = (65°, 75°, 90°, 50°). The phases of the coherent signals are [π/5, π/3, π/3, π/3]. The results of each simulation are analyzed over 1000 Monte Carlo trials. Two performance indices, the root-mean-square error (RMSE) and the normalized probability of success (NPS), are defined to evaluate the performance of the proposed algorithm and the DMR-DOAM algorithm.
$$ \mathrm{RMSE}(\alpha)=\sqrt{\frac{1}{1000K}\sum_{i=1}^{1000}\sum_{n=1}^{M}\left(\widehat{\alpha}_n(i)-\alpha_n\right)^2},\qquad \mathrm{RMSE}(\beta)=\sqrt{\frac{1}{1000K}\sum_{i=1}^{1000}\sum_{n=1}^{M}\left(\widehat{\beta}_n(i)-\beta_n\right)^2} $$
where \( {\widehat{\alpha}}_n(i) \) and \( {\widehat{\beta}}_n(i) \) are the estimates of \(\alpha_n\) and \(\beta_n\) for the ith Monte Carlo trial, respectively, and K is the number of sources.
$$ \mathrm{N}\mathrm{P}\mathrm{S}=\frac{\varUpsilon_{\mathrm{suc}}}{T_{\mathrm{total}}} $$
where \(\varUpsilon_{\mathrm{suc}}\) and \(T_{\mathrm{total}}\) denote the number of successful trials and the total number of Monte Carlo trials, respectively. Furthermore, a successful experiment is one that satisfies \( \max \left(\left|{\widehat{\theta}}_n-{\theta}_n\right|\right)<\varepsilon \), where ε equals 0.5 for estimation of the coherent signals.
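Both indices are easy to compute once the Monte Carlo estimates are stored trial by trial; the following sketch is ours and takes K equal to the number of sources M:

```python
import numpy as np

def rmse(est, true):
    """RMSE over Monte Carlo trials, following Eq. (31).

    est  : array of shape (trials, M) with estimated angles in degrees
    true : array of shape (M,) with the true angles in degrees
    """
    trials, M = est.shape
    return np.sqrt(np.sum((est - true[None, :]) ** 2) / (trials * M))

def nps(est, true, eps=0.5):
    """Normalized probability of success, Eq. (32): a trial succeeds when the
    largest absolute angle error stays below eps degrees."""
    success = np.max(np.abs(est - true[None, :]), axis=1) < eps
    return np.mean(success)
```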
In the first simulation, we evaluate the performance of the two algorithms with respect to the input signal-to-noise ratio (SNR). The number of snapshots is fixed at 1000, and the SNR varies from −10 to 10 dB. The RMSE of the DOAs versus the SNR is shown in Fig. 2. It can be seen from Fig. 2 that the proposed algorithm provides better DOA estimation than the DMR-DOAM algorithm for both the RMSE curve of α and that of β. Fig. 3 shows the NPS of the DOAs versus SNR, which illustrates that the performance of the proposed algorithm is better than that of the DMR-DOAM algorithm. Furthermore, even at low SNR, the proposed algorithm can still achieve better estimation performance. The reason is that the proposed algorithm takes full advantage of all the received data of the two parallel ULAs to construct the equivalent auto-covariance matrix \(\mathbf{R}_{xx}\) and cross-covariance matrix \(\mathbf{R}_{yx}\), which improves the estimation precision. In contrast, the DMR-DOAM algorithm obtains the DOAs at the cost of a reduction in array aperture, which often leads to poorer DOA estimation.
The RMSE of the DOA estimates versus input SNR
The NPS of the DOA estimates versus input SNR
In the second simulation, we investigate the performance of the two algorithms versus the number of snapshots. The simulation conditions are similar to those in the first simulation, except that the SNR is set at 5 dB and the number of snapshots is varied from 10 to 250. The RMSE of the DOAs versus the number of snapshots is depicted in Fig. 4. As shown in Fig. 4, the proposed algorithm exhibits better performance than the DMR-DOAM algorithm.
The RMSE of the DOA estimates versus input snapshots
The result in Fig. 5 shows the NPS of the DOAs versus the number of snapshots. From Fig. 5, it can be observed that the proposed algorithm achieves much higher estimation performance than the DMR-DOAM algorithm as the number of snapshots increases. Moreover, the superiority of the proposed algorithm is obvious regardless of whether the number of snapshots is small or large. This indicates that the proposed algorithm is especially useful when low computational cost and highly real-time data processing are required.
The NPS of the DOA estimates versus input snapshots
In the last simulation, we assess the performance of the proposed algorithm as the correlation factor ρ between \(s_1(t)\) and \(s_2(t)\) is varied from 0 to 1. The SNR is set at 5 dB, and the number of snapshots is 800. Note that the ε in (32) is set to 0.6 in this simulation. The performance curves of the DOA estimation against the correlation factor are shown in Figs. 6 and 7. From Figs. 6 and 7, we can see that the proposed algorithm outperforms the DMR-DOAM algorithm.
The RMSE of the DOA estimates versus correlation factor
The NPS of the DOA estimates versus correlation factor
A novel decoupling algorithm for 2-D DOA estimation with two parallel ULAs has been presented. In the proposed algorithm, two equivalent covariance matrices are reconstructed to achieve the decorrelation of the coherent signals and the estimated angle parameters are pair-matched automatically. It has been shown that the proposed algorithm yields remarkably better estimation performance than the DMR-DOAM algorithm.
H Krim, M Viberg, Two decades of array signal processing research: the parametric approach. IEEE Signal Process. Mag. 13(4), 67–94 (1996)
Z Li, K Liu, Y Zhao et al., Ma MaPIT: an enhanced pending interest table for NDN with mapping bloom filter. IEEE Comm. Lett. 18(11), 1423–1426 (2014)
Z Li, L Song, H Shi, Approaching the capacity of K-user MIMO interference channel with interference counteraction scheme. Ad Hoc Netw. 2016, 1–6 (2016)
Z Li, Y Chen, H Shi et al., NDN-GSM-R: a novel high-speed railway communication system via named data networking. EURASIP J. Wirel. Commun. Netw. 2016(48), 1–5 (2016)
X Liu, Z Li, P Yang et al., Information-centric mobile ad hoc networks and content routing: a survey. Ad Hoc Netw. 2016, 1–14 (2016)
RO Schmidt, Multiple emitter location and signal parameter estimation. IEEE Trans. Antennas Propag. 34(3), 276–280 (1986)
R Roy, T Kailath, ESPRIT-estimation of signal parameters via rotational invariance techniques. IEEE Trans. Acoust. Speech Signal Process. 37(7), 984–995 (1989)
N Tayem, HM Kwon, L-shape 2-dimensional arrival angle estimation with propagator method. IEEE Trans. Antennas Propag. 53(5), 1622–1630 (2005)
S Marcos, A Marsal, M Benidir, The propagator method for source bearing estimation. Signal Process. 42(2), 121–138 (1995)
JF Gu, P Wei, HM Tai, 2-D direction-of-arrival estimation of coherent signals using cross-correlation matrix. Signal Process. 88(1), 75–85 (2008)
F Chen, S Kwong, CW Kok, ESPRIT-like two-dimensional DOA estimation for coherent signals. IEEE Trans. Aerospace and Electronic Systems 46(3), 1477–1484 (2010)
GM Wang, JM Xin, NN Zheng et al., Computationally efficient subspace-based method for two-dimensional direction estimation with L-shaped array. IEEE Trans. Signal Process. 59(7), 3197–3212 (2011)
X Nie, LP Li, A computationally efficient subspace algorithm for 2-D DOA estimation with L-shaped array. IEEE Signal Process Lett. 21(8), 971–974 (2014)
JF Gu, WP Zhu, MNS Swamy, Joint 2-D DOA estimation via sparse L-shaped array. IEEE Trans. Signal Process. 31(5), 1171–1182 (2015)
TQ Xia, Y Zheng, Q Wan et al., Decoupled estimation of 2-D angles of arrival using two parallel uniform linear arrays. IEEE Trans. Antennas Propag. 55(9), 2627–2632 (2007)
TQ Xia, Y Zheng, Q Wan et al., 2-D angle of arrival estimation with two parallel uniform linear arrays for coherent signals. IEEE Radar. Conf. 55(9), 244–247 (2007)
L Wang, GL Li, WP Mao, New method for estimating 2-D DOA in coherent source environment based on data matrix reconstruction. J. Xidian Univ. 40(2), 159–168 (2013)
H Chen, C Hou, Q Wang et al., Cumulants-based Toeplitz matrices reconstruction method for 2-D coherent DOA estimation. IEEE Sensors J. 14(8), 2824–2832 (2014)
This research was supported by the National Natural Science Foundation of China (61602346), by the Key Talents Project for Tianjin University of Technology and Education (TUTE) (KYQD16001), by the Tianjin Municipal Science and Technology innovation platform, intelligent transportation coordination control technology service platform (16PTGCCX00150), and by the National Natural Science Foundation of China (61601494).
School of Automotion and Transportation, Tianjin University of Technology and Education, Tianjin, 300222, China
Heping Shi
Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University, Tianjin, 300387, China
Zhuo Li
College of Electronic and Communication Engineering, Tianjin Normal University, Tianjin, 300387, China
School of Electronic Engineering, Tianjin University of Technology and Education, Tianjin, 300222, China
Jihua Cao
The 28th Research Institute of China Electronics Technology Group Corporation, Nanjing, 210007, China
Hua Chen
Correspondence to Zhuo Li.
Shi, H., Li, Z., Cao, J. et al. Two-dimensional DOA estimation of coherent sources using two parallel uniform linear arrays. J Wireless Com Network 2017, 60 (2017). https://doi.org/10.1186/s13638-017-0844-0
Matrices reconstruction
2-D DOA estimation
Coherent signals
Decoupled estimation
Uniform linear array (ULA)
Radar and Sonar Networks | CommonCrawl |
Spatial and temporal changes in zooplankton abundance, biovolume, and size spectra in the neighboring waters of Japan: analyses using an optical plankton counter
Kaede Sato1,
Kohei Matsuno2,
Daichi Arima1,
Yoshiyuki Abe1 &
Atsushi Yamaguchi1
An optical plankton counter (OPC) was used to examine spatial and temporal changes in the zooplankton size spectra in the neighboring waters of Japan from May to August 2011.
Based on the zooplankton biovolume of equivalent spherical diameter (ESD) in 45 bins for every 0.1 mm between 0.5 and 5.0 mm, a Bray-Curtis cluster analysis classified the zooplankton communities into six groups. The geographical distribution of each group varied from each of the others. Groups with a dominance of 4 to 5 mm ESD were observed in northern marginal seas (northern Japan Sea and Okhotsk Sea), while the least biovolume with a dominance of a small-size class (0.5 to 1 mm) was observed for the Kuroshio extension. Temporal changes were observed along the 155° E line, i.e., a high biovolume group dominated by 2 to 3 mm ESD during May shifted to other size spectra groups during July to August. These temporal changes were caused by the seasonal vertical descent of dominant large Neocalanus copepods during July to August. As a specific characteristic of the normalized biomass size spectra (NBSS), the slope of NBSS was moderate (−0.90) for the Neocalanus dominant spring group but was at −1.11 to −1.24 for the other groups. Theoretically, the slope of the NBSS of the stable marine ecosystem is known to settle at approximately −1.
Based on the analysis by OPC, zooplankton size spectra in the neighboring waters of Japan were separated into six groups. Most groups had −1.11 to −1.24 NBSS slopes, which were slightly higher than the theoretical value (−1). However, one group had a moderate slope of NBSS (−0.90) caused by the dominance of large Neocalanus copepods.
From the perspective of fisheries, mesozooplankton is an important food source for pelagic fish and larvae. The size of the mesozooplankton determines the bioenergetics of fish (Sheldon et al. 1977) and affects the growth and mortality rates of fish larvae (Van der Meeren and Næss 1993). From the perspective of oceanography, the size of the mesozooplankton is also important. In an oceanic region (>200 m depth, 92% of ocean area), mesozooplankton transport particulate organic matter vertically from the surface to deeper layers (Longhurst 1991; Boyd and Newton 1999). The activity of this process, termed "biological pump," is known to be correlated with the size of dominant mesozooplankton in the epipelagic layer (Michaels and Silver 1988; Ducklow et al. 2001). Thus, information on the size spectra of mesozooplankton community is important from the viewpoint of both fisheries and oceanography.
The size spectra of the mesozooplankton community were evaluated using NBSS (cf. Marcolin et al. 2013). The size of the mesozooplankton was accurately quantified using an optical plankton counter (OPC, Herman 1988). Recently, spatial and temporal changes in the size spectra of worldwide ocean mesozooplankton were evaluated by NBSS obtained using OPC measurements (cf. Huntley et al. 1995; Piontkovski et al. 1995; Zhou and Huntley 1997; Herman and Harvey 2006; Kimmel et al. 2006). The slope of NBSS is an index of bottom-up or top-down control of the marine ecosystem (Zhou 2006). In a nutrient-rich high productivity ecosystem, dominance of small-sized mesozooplankton induced a high intercept and slope of NBSS (bottom-up). However, high predation on the smaller size class may induce a low intercept and slope of NBSS (top-down) (Moore and Suthers 2006). Visual predators such as fish typically remove large particles, which act to steepen the slope and maintain the intercept (Suthers et al. 2006). Thus, information on spatial and temporal changes in mesozooplankton NBSS is highly valuable for evaluating structures of the marine ecosystem (cf. García-Comas et al. 2014).
The neighboring waters of Japan include the subarctic, transitional, and subtropical Western North Pacific and their adjacent seas: Okhotsk Sea, Japan Sea, and East China Sea. The oceanographic characteristics of these oceans vary greatly from each other. Various studies have been previously performed on mesozooplankton abundance, biomass, and community structure in these oceans. For example, studies were performed in the subarctic and transitional Western North Pacific (Odate 1994; Chiba et al. 2006, 2008, 2009), the subtropical Western North Pacific (Nakata et al. 2001; Nakata and Koyama 2003), and the Japan Sea (Hirota and Hasegawa 1999; Iguchi 2004). Based on these studies, regional, seasonal, and annual changes in mesozooplankton abundance, biomass, and community structure were evaluated. However, little information is available for their size spectra, and few attempts have been made regarding NBSS analysis. Because the NBSS of zooplankton connects phytoplankton with fisheries biomass, providing spatial and temporal change patterns of NBSS in this region will be valuable.
In the present study, spatial and temporal changes in the mesozooplankton size spectra in the neighboring waters of Japan (subarctic, transitional, and subtropical Western North Pacific and their adjacent seas: Okhotsk Sea, Japan Sea, and East China Sea) were evaluated using OPC measurements of net mesozooplankton samples collected by the same methods between May and August 2011. For all of the OPC data, NBSS analyses were performed and compared with the NBSS reported from various worldwide oceans. These comparisons revealed spatial and temporal changes in the size spectra of mesozooplankton in the neighboring waters of Japan; additionally, their characteristics were evaluated.
Field sampling
Mesozooplankton samplings were obtained on board the T/S Oshoro-Maru along the 155° E line (38° to 44° N) in the Western North Pacific from May 16 to 20, in the Okhotsk Sea from June 10 to 11, in the Japan Sea from June 8 to 24, in the East China Sea from July 1 to 9, in the subtropical Western North Pacific from July 11 to 12, and along the 155° E line (38° to 44° N) in the Western North Pacific from July 27 to August 2, 2011. The total number of stations was 78 (Figure 1). Samples were collected via the vertical hauls of a NORPAC net (mouth diameter 45 cm, mesh size 335 μm, Motoda 1957) from 150 m to the surface during the day and/or night. At stations where the depth was shallower than 150 m, vertical tows from 5 m above the bottom were performed. The volume of water filtered through the net was estimated from a reading of the flowmeter (Rigosha & Co., Ltd., Saitama, Japan) mounted on the net ring. The collected samples were immediately fixed with 5% borax-buffered formalin on board the ship. At each station, temperature and salinity were measured using a CTD system (Sea-Bird SBE 911 Plus, Sea-Bird Electronics, Bellevue, WA, USA). Because sampling depths varied from station to station (from 0 to 30 to 0 to 150 m), we applied temperature and salinity data at the euphotic zone (0 to 30 m) to evaluate their spatial and temporal changes (Figure 2).
Location of sampling stations in the Western North Pacific and their adjacent seas. From May 16 to 20 (right panel) and June 8 to August 2 (left panel) 2011. Open and solid symbols denote stations where sampling was performed during the day and night, respectively. Approximate positions of OY: Oyashio, KE: Kuroshio Extension and SAF: Subarctic Front are superimposed (cf. Yasuda 2003). Samplings were conducted during the following periods: Western North Pacific along 155° E line (38° to 44°N) from May 16 to 20, Okhotsk Sea from June 10 to 11, Japan Sea from June 8 to 24, East China Sea from July 1 to 9, subtropical Western North Pacific from July 11 to 12 and along the 155° E line (38° to 44° N), in the Western North Pacific from July 27 to August 2, 2011.
Horizontal distribution of the integrated mean temperature (A) and salinity (B). At 0 to 30 m in the Western North Pacific and their adjacent seas from May 16 to 20 (right panels) and June 8 to August 2 (left panels) 2011. Open and solid symbols denote stations where sampling was performed during the day and night, respectively.
OPC measurements
At the land laboratory, the mesozooplankton samples were divided into half-aliquots using a Motoda box splitter (Motoda 1959). For each half-aliquot, the zooplankton were filtered using a 100-μm mesh under low vacuum, and the wet mass was measured using an electronic microbalance with a precision of 10 mg. The remaining 1/2 sub-samples were used for OPC (Model OPC-1 L: Focal Technologies Corp., Dartmouth, NS, Canada) measurements using the flow-through system (CT&C Co. Ltd., Tokyo, Japan). OPC measurements were made at a low flow rate (ca. 10 L min−1) and low particle density (<10 counts s−1) without staining (Yokoi et al. 2008).
Abundance and biovolume
The abundance per cubic meter (N: ind. m−3) for each of the 4,096 ESD size categories was calculated from the following equation:
$$ N=\frac{n}{s\times F} $$
where n is the number of particles (=zooplankton ind.), s is the split factor of each sample, and F is the filtered volume of the net (m3). The biovolume of the zooplankton community at 4,096 size categories was calculated from the ESD data, and the biovolume (mm3 m−3) was calculated by multiplying N and volume (mm3 ind.−1) derived from ESD. Analyses on the mesozooplankton biovolume were performed with separation of six size classes (0 to 1, 1 to 2, 2 to 3, 3 to 4, 4 to 5, and >5 mm ESD). Day and night samplings accounted for 44 and 34 stations of all sampling stations, respectively (Figure 1). Day-night comparisons of the entire zooplankton abundance and biomass based on the whole sampling area showed no significant differences (U-test, abundance: p = 0.567, biomass: p = 0.945); thus, no day-night conversion for abundance or biomass was necessary.
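For reference, the conversion from OPC counts to abundance and biovolume can be written compactly. The sketch below is our own illustration and assumes, as is conventional for OPC data, that the biovolume of each ESD bin is the volume of a sphere with that diameter; the function and parameter names are hypothetical.

```python
import numpy as np

def abundance_and_biovolume(counts, esd_mm, split_factor, filtered_volume_m3):
    """Convert OPC particle counts per ESD bin into abundance and biovolume.

    counts             : particles counted in each ESD bin (n in the equation)
    esd_mm             : equivalent spherical diameter of each bin (mm)
    split_factor       : aliquot fraction measured (s), e.g. 0.5 for a half split
    filtered_volume_m3 : volume of water filtered by the net (F, m^3)
    """
    counts = np.asarray(counts, dtype=float)
    esd_mm = np.asarray(esd_mm, dtype=float)
    abundance = counts / (split_factor * filtered_volume_m3)    # ind. m^-3
    volume_per_ind = (4.0 / 3.0) * np.pi * (esd_mm / 2.0) ** 3  # mm^3 ind.^-1
    biovolume = abundance * volume_per_ind                      # mm^3 m^-3
    return abundance, biovolume
```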
To evaluate spatial and temporal changes in the size spectra of the zooplankton biovolume, cluster analysis was performed. Prior to the analysis, the biovolume data on 1,744 categories between 0.5 and 5.0 mm ESD were binned into 45 size classes at 0.1 mm ESD intervals (0.5 to 0.6, 0.6 to 0.7,…, 4.9 to 5.0 mm). Based on these biovolume data, similarities between the samples were evaluated using Bray-Curtis methods. To group the samples, similarity indices were coupled with hierarchical agglomerative clustering using a complete linkage method (Unweighted Pair Group Method using Arithmetic mean, UPGMA; Field et al. 1982). Non-metric multidimensional scaling (NMDS) ordination was performed to delineate the sample groups on a two-dimensional map (Field et al. 1982). To clarify which environmental parameters (latitude, longitude, integrated mean temperature and salinity at 0 to 30 m) exhibited significant relationships with the zooplankton sample groups, multiple regressions (Y = aX 1 + bX 2 + c, where Y is the environmental variable, X 1 and X 2 are axes 1 and 2 of NMDS, and a, b, and c are constants, respectively) were made using StatView (SAS Institute Inc., Cary, NC, USA).
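The grouping step can be reproduced with standard tools. The sketch below is ours, not the authors' code: it couples Bray-Curtis dissimilarities with SciPy's average-linkage clustering (the UPGMA criterion) and uses an illustrative dissimilarity cutoff; it does not include the NMDS ordination or the multiple regressions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_size_spectra(biovolume, dissimilarity_cutoff=0.42):
    """Group stations by Bray-Curtis dissimilarity of their size spectra.

    biovolume : array (stations x 45 size bins) of binned biovolume
    Returns an integer group label for each station.
    """
    d = pdist(biovolume, metric="braycurtis")   # Bray-Curtis dissimilarities
    z = linkage(d, method="average")            # UPGMA (average linkage)
    return fcluster(z, t=dissimilarity_cutoff, criterion="distance")
```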
Normalized biomass size spectra
From the OPC data, NBSS was calculated following Zhou (2006). First, zooplankton biovolume (\( \overline{B} \): mm3 m−3 [=μm3 L−1]) was averaged for every 100 μm ESD size class. To calculate the X-axis of NBSS (X: log10 zooplankton biovolume [mm3 ind.−1]), \( \overline{B} \) was divided by the abundance of each size class (ind. m−3) and converted to a common logarithm. To calculate the Y-axis of NBSS (Y: log10 zooplankton biovolume [mm3 m−3]/Δbiovolume [mm3]), \( \overline{B} \) was divided by the interval of biovolume (Δbiovolume [mm3]) and converted to a common logarithm. Based on these data, the NBSS liner model (Y = aX + b) was calculated, where a and b are the slope and intercept of NBSS, respectively.
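A compact sketch of the NBSS regression following the definitions above (ours, not the authors' code); empty size classes are dropped before taking logarithms.

```python
import numpy as np

def nbss_slope_intercept(abundance, biovolume, biovolume_bin_widths):
    """Normalized biomass size spectrum (NBSS) following Zhou (2006).

    abundance            : ind. m^-3 in each 100-um ESD size class
    biovolume            : mm^3 m^-3 in each size class
    biovolume_bin_widths : width of each class in biovolume units (mm^3)
    Returns (slope, intercept) of the linear fit Y = aX + b.
    """
    abundance = np.asarray(abundance, dtype=float)
    biovolume = np.asarray(biovolume, dtype=float)
    widths = np.asarray(biovolume_bin_widths, dtype=float)
    keep = (abundance > 0) & (biovolume > 0)
    x = np.log10(biovolume[keep] / abundance[keep])   # mean individual biovolume
    y = np.log10(biovolume[keep] / widths[keep])      # normalized biovolume
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept
```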
Based on the mesozooplankton groups clustered based on their size spectra, inter-group differences in the zooplankton data (abundance, biovolume, and slope of NBSS) were tested using one-way analysis of variance (ANOVA) and Fisher's protected least-squares difference (PLSD) method. To determine the factors that govern the slope of NBSS, an analysis of covariance (ANCOVA) was performed using StatView, in which the intercept of NBSS and zooplankton group as independent variables.
Throughout the entire sampling area and period, the integrated mean temperature at 0 to 30 m in the water column of each station ranged from 3.8 to 29.4°C (Figure 2A). Integrated mean temperatures were lower in the Okhotsk Sea and higher at the southern low-latitude station. With the temporal change between May and June to August, the temperature along the 155° E line in the Western North Pacific increased ca. 4°C within the same latitude from June to August. Integrated mean salinity ranged from 32.2 to 34.7 (Figure 2B) and showed a similar pattern to that of the integrated mean temperature and thus was lower in the Okhotsk Sea and higher in the southern low-latitude stations. Temporal changes in salinity along 155° E between May and June to August were not marked; this was comparable to the case of integrated mean temperature (Figure 2B).
OPC calibration
Comparison between the OPC-derived wet mass (Y: mm3 m−3) and the directly measured wet mass (X: mg m−3) showed a highly significant correlation (Y = 0.950X, r2 = 0.691, p < 0.0001, Figure 3). As an exception, one station (41° N, 155° E on 25 July) returned a substantially higher value of directly measured mass (marked with an open symbol in Figure 3). From microscopic observation, dominance of fragments of jellyfish and doliolids was the cause of this deviation. We excluded the data of this station from the following analysis.
Comparison between OPC-derived biovolume and directly measured wet mass of whole samples. The long-dashed line indicates position 1:1. One datum at an anomalously high value of directly measured wet mass indicated by an open symbol was omitted from the following analysis.
Zooplankton abundance, biovolume, and community
Zooplankton abundance ranged from 16.8 to 1,076 ind. m−3 and showed no clear spatial and temporal change pattern (Figure 4A). Zooplankton biovolume ranged from 2.24 to 1,007 mm3 m−3 and was higher for the northern stations, particularly in the northern Japan Sea and north of the 155° E line (Figure 4B). Regarding temporal change, biovolume along the 155° E line was higher in May compared to June to August, with a factor of 1.4 to 9.8 times at the same latitude.
Horizontal distribution of abundance (A) and biovolume (B) of mesozooplankton in the Western North Pacific and adjacent seas. From May 16 to 20 (right panels) and June 8 to August 2, 2011 (left panels).
Based on the biovolume data of 45 size classes binned at every 0.1 mm, zooplankton communities were classified into six groups (A, B1, B2, B3, C, and D) using cluster analysis at 42% dissimilarities (Figure 5A). Each group contained 7 to 20 stations. Hydrographic variables showing significant relationships on the NMDS ordination were integrated mean temperature and integrated mean salinity at 0 to 30 m water column; these variables accounted for 18% and 15% of the changes, respectively.
Results of cluster analysis based on mesozooplankton biovolume size spectra. In the Western North Pacific and adjacent seas from May and June to August 2011. (A) Six groups (A, B1, B2, B3, C, and D) were identified from Bray-Curtis dissimilarity connected with UPGMA. Numbers in parentheses indicate the number of stations contained in each group. (B) NMDS plots of each group. For correlation analyses with environmental parameters (temperature, salinity, latitude, and longitude), temperature and salinity showed a significant correlation (percentage indicates the coefficients of determination, r 2). (C) Mean biovolume and size composition (ESD, mm) of each group.
Total abundance also significantly varied according to group and was the least for group D, followed by group A and was the highest for group B3 (Table 1). Total abundance was dominated by the 0.5 to 1 mm ESD size class for all groups. For the 1 to 2, 2 to 3, and 3 to 4 mm ESD size classes, the highest abundance was observed for group C. Moreover, the 4 to 5 mm ESD size class was the highest for group B1. Thus, the highest abundance group varied with size classes (Table 1).
Table 1 Comparison of zooplankton abundance, biovolume, and slopes ( a ) and ( b ) of NBSS ( Y = aX + b ) of each group
For the total biovolume, the common group order was the least for group D followed by group A as observed for all size classes (Table 1, Figure 5C). The highest zooplankton biovolume was observed for group C; this was due to the dominance of biovolume at the 2 to 3 mm ESD size class (Figure 5C). Within the size class, group B3 was the highest in the 0.5 to 1 mm size class, while group B1 was the highest in the 4 to 5 mm size class. For the other size classes (1 to 2, 2 to 3, and 3 to 4 mm), group C was the highest biovolume group (Table 1).
The horizontal and temporal distribution of each group varied from those of the others (Figure 6). Groups A, B2, and B3 occurred in a broader region: Japan Sea, Western North Pacific, and East China Sea and had no geographical pattern. However, the horizontal distribution of groups B1, C, and D showed a clear geographical pattern. Group B1 was dominated by a large 4 to 5 mm ESD size class and was found in the Northern Japan Sea, Okhotsk Sea, and subarctic Western North Pacific. Group C, which was characterized by the highest biovolume and dominance of the 2 to 3 mm size class, occurred along the 155° E line during May and in the northern Japan Sea, Okhotsk Sea, and subarctic Western North Pacific from June to August. Group D, which was characterized by the least biovolume and was dominated by a small-sized 0.5 to 1 mm size class, was found at lower latitudes of the 155° E line (Kuroshio extension) (Figure 6).
Horizontal distribution of six groups (A, B1, B2, B3, C, and D) identified from cluster analysis on mesozooplankton biovolume size spectra (cf. Figure 5 A). In the Western North Pacific and adjacent seas from May and June to August 2011. Open and solid symbols denote day and night samples, respectively. The geographical ranges of each group are marked with boxes and circles.
NBSS
Results of the mean NBSS based on the complete data of each group are shown in Figure 7. For group C, the marked peak value on the X-axis (log10 zooplankton biovolume [mm3 ind.−1]) was observed at approximately 0.7; this corresponded with a 2 to 3 mm ESD size class and consisted of the copepodid stage 5 of the large copepod Neocalanus spp. (Figure 7). Significant inter-group differences were observed for the slope (a) and intercept (b) of NBSS (Table 1). The moderate slope (−0.90) of group C was significantly different from those of other groups (−1.11 ~ −1.24) (ANCOVA, p < 0.001, Table 2). For the intercept, the least was found for group D, while the highest was found for group C; the order of the intercept of each group corresponded to the order of total zooplankton biovolume (Table 1). From the ANCOVA analysis, there was no interaction between group and intercept, but significant relationships between slope and group were found (p < 0.0001, Table 2).
Mean NBSS of six groups (A, B-1, B-2, B-3, C and D) identified from cluster analysis on mesozooplankton biovolume size spectra (cf. Figure 5 A). In the Western North Pacific and adjacent seas during May and June to August 2011. Numbers in parentheses indicate the number of stations belonging to each group. The mean and standard deviations of copepodid five stages of Neocalanus copepods (Nc, Neocalanus cristatus; Nf, N. flemingeri; and Np, N. plumchrus, Yamaguchi et al. 2014) are shown in panel for group C.
Table 2 Result of the ANCOVA for the slope ( a ) of NBSS ( Y = aX + b )
There have been several studies on OPC measurements for zooplankton in the western North Pacific. Table 3 summarizes regressions between the OPC-derived mass and directly measured mass, ranges of zooplankton abundance, and biovolume from previous studies. The OPC-derived masses were 1.05 to 1.18 times the directly measured mass in previous studies (Yokoi et al. 2008; Matsuno and Yamaguchi 2010; Fukuda et al. 2012). In this study, the zooplankton biovolume estimated by OPC was 0.95 times the directly measured mass (Figure 3). This factor is slightly smaller than the previously reported values, but all these values (0.95 to 1.18) corresponded well at nearly 1:1. In the present study, a substantial underestimation of the OPC biovolume was caused by fragments of jellyfish and doliolids at one station (41° N, 155° E on 29 July). Regarding dominance of doliolids and gelatinous zooplankton in the Western North Pacific, Yokoi et al. (2008) reported that their dominance was observed at locations of thermocline-developed stations in the transitional domain. This condition may also have been present at 41° N, 155° E on July 29 of this study.
Table 3 Comparison of regressions between OPC-derived mass and directly measured mass, and of the ranges of abundance and biovolume of mesozooplankton
Based on the same method (OPC measurement of net samples), zooplankton abundance and biovolume in the Western North Pacific have been reported to range from 128 to 580 ind. m−3 and 96 to 880 mm3 m−3, respectively (Yokoi et al. 2008; Matsuno and Yamaguchi 2010; Fukuda et al. 2012). Both the zooplankton abundance (16.8 to 1,076 ind. m−3) and biovolume (2.24 to 1,007 mm3 m−3) in this study spanned wider ranges. The previous values were based on 0 to 150 m samplings obtained offshore in the North Pacific, whereas this study included neritic to oceanic regions of the North Pacific and marginal seas, and sampling depths varied among stations (the shallowest station was sampled from 0 to 30 m). These methodological differences (region and sampling depth) between the previous studies and this study may have extended the range of abundance and biovolume data in this study.
Spatial changes
Regarding spatial changes in zooplankton, biovolume was higher at high latitudes (Figure 4B), and the horizontal distributions of groups B1, C, and D showed distinct geographical patterns (Figure 6).
Group B1 was observed only in the northern Japan Sea, the Okhotsk Sea, and the subarctic region along the 155° E line in the Western North Pacific (Figure 6). Zooplankton biomass in the southern Japan Sea has been reported to be lower than in the north and similar to that of the Kuroshio region of the Western North Pacific (Hirota and Hasegawa 1999; Iguchi 2004). Consistent with this, the zooplankton biovolume in the Japan Sea in this study was higher in the northern area (Figure 4B). In the northern Japan Sea, where the surface temperature is low, large cold-water species, i.e., the chaetognath Parasagitta elegans, the amphipod Themisto japonica, and the euphausiid Euphausia pacifica, are known to perform diel vertical migration and to occur at the surface at night (Ikeda et al. 1992; Iguchi et al. 1993; Terazaki 1993). The proportion of the 4 to 5 mm ESD class in group B1 was much higher than in the other groups (Figure 5C). This size range (4 to 5 mm) exceeds the size of copepods (the largest copepods in this region, Neocalanus spp. C5, are 2 to 3 mm ESD; Yokoi et al. 2008) and was attributed to macrozooplankton such as amphipods, euphausiids, and chaetognaths. These taxa occurred frequently in samples belonging to group B1. The dominance of the large chaetognath P. elegans and the amphipod Themisto pacifica has also been reported for the Okhotsk Sea (Volkov 2008). Thus, group B1 was observed only in the northern Japan Sea, the Okhotsk Sea, and the subarctic Western North Pacific and was characterized by a higher proportion of the large 4 to 5 mm ESD size class consisting of macrozooplankton.
In contrast, group D was observed only in the Kuroshio extension from June to August (Figure 6). Group D was characterized by the lowest biovolume and dominance of the small size class (0.5 to 1 mm ESD) (Figure 5C). Microscopic observation showed that the samples belonging to group D were dominated by small copepods, e.g., Paracalanus parvus and poecilostomatoids. The hydrography of group D was also distinctive: all of its stations showed high salinity (near 34.0, Figure 8), which is characteristic of the Kuroshio extension (Yasuda 2003). In the NMDS plot (Figure 5B), the directions of the two arrows indicate that groups A, B1, and C were euryhaline and eurythermal, while the others (B2, B3, and D) were restricted to narrower ranges of temperature and salinity. These findings suggest that the horizontal distribution of zooplankton was regulated by water mass formation. In the oligotrophic Kuroshio region, it is well known that the zooplankton fauna is dominated by small copepods such as Paracalanus spp. and poecilostomatoids (Nakata et al. 2001; Nakata and Koyama 2003; Hsieh et al. 2004).
T-S diagrams of the six groups (A, B1, B2, B3, C, and D) identified from cluster analysis of mesozooplankton biovolume size spectra (cf. Figure 5A) in the Western North Pacific and adjacent seas in May and from June to August 2011. Hydrographic data from 0 to 1,000 m were used to construct the T-S diagrams. The numbers in parentheses indicate the number of stations belonging to each group.
However, although the East China Sea, the southern coast of Japan, and the southern Japan Sea lie at latitudes comparable to the Kuroshio extension, they are not as oligotrophic as the Kuroshio extension region, and middle-sized copepods such as Calanus sinicus are known to dominate their zooplankton fauna (Hirakawa et al. 1995; Shimode et al. 2006; Hsiao et al. 2011). Owing to the dominance of middle-sized copepods (C. sinicus) in these marginal seas, the zooplankton size spectra in these regions were dispersed among groups A, B2, and B3 (Figure 6). Thus, it is difficult to identify specific characteristics in the horizontal distribution of these groups.
Temporal changes
In the present study, sampling was conducted twice along the 155° E line in the Western North Pacific, in May and in July to August. In May, group C, which is characterized by a high biovolume and dominance of the 2 to 3 mm ESD size class, dominated; however, other groups replaced it during July to August (Figure 6). The 2 to 3 mm ESD zooplankton in this region corresponds to the C5 stages of the large copepods Neocalanus spp., and their importance has been reported in previous OPC studies (Yokoi et al. 2008; Matsuno and Yamaguchi 2010; Fukuda et al. 2012). Along the 155° E line, group C was dominated by the 2 to 3 mm ESD size class and was characterized by a high biovolume during May and at the northern stations from July to August.
Neocalanus spp. are known to perform seasonal vertical migration. C1 of Neocalanus spp. grow to C5 near the surface from mid-March to June, descend to deep layers from June to August, and subsequently molt to adults in the deep layers (Kobari et al. 2003). The large copepod Eucalanus bungii also performs seasonal vertical migration and reproduces near the surface from April to May during the phytoplankton bloom (Shoden et al. 2005). Consequently, the total zooplankton biomass in this region peaks during May owing to the dominance of Neocalanus and Eucalanus spp. near the surface (Odate 1994). The high biovolume dominated by the 2 to 3 mm ESD size class in May along the 155° E line in this study was caused by the dominance of these large copepods near the surface layer. After the descent of these large copepods into the deep layer, the biovolume decreased and the size spectra changed to different groups from July to August.
The high biomass dominated by late copepodid stages of large copepods is characteristic of a limited period (1 to 2 months around May) in the subarctic North Pacific (Odate 1994). This high-biomass season has been reported to vary inter-annually owing to decadal climate regime shifts (Chiba et al. 2006, 2008). Chiba et al. (2006) observed that the spring zooplankton peak season varied by one month depending on decadal climate changes. Although the timing varies slightly, the seasonal pattern of zooplankton biomass in the waters neighboring Japan is characterized by high biomass during spring caused by the dominance of large copepods near the surface layer.
The slope of the NBSS is known to be an index of productivity, transfer efficiency, and predation in a marine ecosystem (Zhou 2006; Zhou et al. 2009). Theoretically, the NBSS slope of a stable marine ecosystem settles at approximately −1 (Sprules and Munawar 1986). A steep NBSS slope indicates high productivity but low transfer efficiency to higher trophic levels, whereas a moderate slope may result from low productivity and high energy-transfer efficiency (Sprules and Munawar 1986). In the present study, the NBSS slopes were slightly steeper than −1 for most of the groups, except group C (Table 1). These findings suggest that most of the zooplankton communities in the waters neighboring Japan were characterized by a bottom-up marine ecosystem. The intercept of the NBSS reflects the amount of primary production (Zhou 2006; Marcolin et al. 2013). The high correlation between the NBSS intercept and the total zooplankton biovolume of each group in this study may support this theory (Table 1).
The NBSS slopes of zooplankton communities reported from various oceans are summarized in Table 4. As mentioned above, the NBSS slope is an index of productivity, transfer efficiency, and predation. However, the size range treated varied among studies, from microzooplankton (0.025 to 4.0 mm, Napp et al. 1993) to fish (20 to 1,200 mm, Macpherson et al. 2002); thus, direct comparisons of NBSS slopes require caution. In the present study, a specific characteristic of the NBSS slope was the moderate slope of group C (−0.90) caused by the dominance of C5 stages of large Neocalanus spp. (Figure 7). Along the 155° E line, group C in May was replaced by other groups from July to August; thus, the temporal changes were remarkable (Figure 6). A similar situation (alteration of the NBSS slope by the dominance of specific taxa) was reported for the dominance of barnacle larvae in the Chukchi Sea (Matsuno et al. 2012).
Table 4 Comparison of the slope (a) of NBSS (Y = aX + b) of the mesozooplankton community at various locations
Through OPC analysis of zooplankton samples, the zooplankton size spectra in the waters neighboring Japan were separated into six groups. Most groups had NBSS slopes of −1.11 to −1.24, slightly steeper than the theoretical value (−1). However, one group had a moderate NBSS slope (−0.90) caused by the dominance of large Neocalanus copepods. Temporal changes in the NBSS slope were observed, from the moderate slope (−0.90) in May to steeper slopes (−1.11 to −1.24) from July to August, caused by the descent of Neocalanus copepods to deep layers. This temporal change in the NBSS slope was caused not by predator-prey interactions but by the seasonal vertical migration of dominant large zooplankton species. This finding suggests that the ecology of dominant species (growth and seasonal vertical migration) should also be considered as a cause of temporal changes in the NBSS slope.
Baird ME, Timko PG, Middleton JH, Mullaney TJ, Cox DR, Suthers IM (2008) Biological properties across the Tasman Front off southeast Australia. Deep-Sea Res I 55:1438–1455
Basedow SL, Tande KS, Zhou M (2010) Biovolume spectrum theories applied: spatial patterns of trophic levels within a mesozooplankton community at the polar front. J Plank Res 32:1105–1119
Boyd PW, Newton PP (1999) Does planktonic community structure determine downward particulate organic carbon flux in different oceanic provinces? Deep-Sea Res I 46:63–91
Chiba S, Tadokoro K, Sugisaki H, Saino T (2006) Effects of decadal climate change on zooplankton over the last 50 years in the western subarctic North Pacific. Global Change Biol 12:907–920
Chiba S, Aita MN, Tadokoro K, Saino T, Sugisaki H, Nakata K (2008) From climate regime shifts to lower-trophic level phenology: synthesis of recent progress in retrospective studies of the western North Pacific. Prog Oceanogr 77:112–126
Chiba S, Sugisaki H, Nonaka M, Saino T (2009) Geographical shift of zooplankton communities and decadal dynamics of the Kuroshio-Oyashio currents in the western North Pacific. Global Change Biol 15:1846–1858
Ducklow HW, Steinberg DK, Buesseler KO (2001) Upper ocean carbon export and the biological pump. Oceanography 14:50–58
Field JG, Clarke KR, Warwick RM (1982) A practical strategy for analysing multispecies distribution patterns. Mar Ecol Prog Ser 8:37–52
Fukuda J, Yamaguchi A, Matsuno K, Imai I (2012) Interannual and latitudinal changes in zooplankton abundance, biomass and size composition along a central North Pacific transect during summer: analyses with an Optical Plankton Counter. Plank Benthos Res 7:64–74
García-Comas C, Chang C-Y, Ye L, Sastri AR, Lee Y-C, Gong G-C, Hsieh C-H (2014) Mesozooplankton size structure in response to environmental conditions in the East China Sea: how much does size spectra theory fit empirical data of a dynamic coastal area? Prog Oceanogr 121:141–157
Herman AW (1988) Simultaneous measurement of zooplankton and light attenuance with new optical plankton counter. Cont Shelf Res 8:205–221
Herman AW, Harvey M (2006) Application of normalized biomass size spectra to laser optical plankton counter net inter comparisons of zooplankton distributions. J Geophys Res 111:C05S05, doi:10.1029/2005JC002948
Hirakawa K, Kawano M, Nishihama S (1995) Seasonal variability in abundance and composition of zooplankton in the vicinity of the Tsushima Straits, southwestern Japan Sea. Bull Japan Sea Natl Fish Res Inst 45:25–38
Hirota Y, Hasegawa S (1999) The zooplankton biomass in the Sea of Japan. Fish Oceanogr 8:274–283
Hsiao S-H, Ka S, Fang T-H, Hwang J-S (2011) Zooplankton assemblages as indicators of seasonal changes in water masses in the boundary waters between the East China Sea and the Taiwan Strait. Hydrobiologia 666:317–330
Hsieh C-H, Chiu T-S, Shih C-T (2004) Copepod diversity and composition as indicators of intrusion of the Kuroshio Branch Current into the northern Taiwan Strait in Spring 2000. Zool Stud 43:393–403
Huntley ME, Zhou M, Nordhausen W (1995) Mesoscale distribution of zooplankton in the California Current in late spring, observed by optical plankton counter. J Mar Res 53:647–674
Iguchi N (2004) Spatial/temporal variations in zooplankton biomass and ecological characteristics of major species in the southern part of the Japan Sea: a review. Prog Oceanogr 61:213–225
Iguchi N, Ikeda T, Imamura A (1993) Growth and life cycle of a euphausiid crustacean (Euphausia pacifica HANSEN) in Toyama Bay, southern Japan Sea. Bull Japan Sea Natl Fish Res Inst 43:69–81
Ikeda T, Hirakawa K, Imamura A (1992) Abundance, population structure and life cycle of a hyperiid amphipod Themisto japonica (Bovallius) in Toyama Bay, southern Japan Sea. Bull Plankton Soc Japan 39:1–16
Iriarte JL, González HE (2004) Phytoplankton size structure during and after the 1997/98 El Niño in a coastal upwelling area of the northern Humboldt Current System. Mar Ecol Prog Ser 269:83–90
Kimmel DG, Roman MR, Zhang X (2006) Spatial and temporal variability in factors affecting mesozooplankton dynamics in Chesapeake Bay: evidence from biomass size spectra. Limnol Oceanogr 51:131–141
Kobari T, Shinada A, Tsuda A (2003) Functional roles of interzonal migrating mesozooplankton in the western subarctic Pacific. Prog Oceanogr 57:279–298
Longhurst AR (1991) Role of the marine biosphere in the global carbon cycle. Limnol Oceanogr 36:1507–1526
Macpherson E, Gordoa A, Garcia-Rubies A (2002) Biomass size spectra in littoral fishes in protected and unprotected areas in the NW Mediterranean. Estuar Coast Shelf Sci 55:777–788
Marcolin CR, Schultes S, Jackson GA, Lopes RM (2013) Plankton and seston size spectra estimated by the LOPC and ZooScan in the Abrolhos Bank ecosystem (SE Atlantic). Cont Shelf Res 70:74–87
Matsuno K, Yamaguchi A (2010) Abundance and biomass of mesozooplankton along north-south transects (165°E and 165°W) in summer in the North Pacific: an analysis with an optical plankton counter. Plank Benthos Res 5:123–130
Matsuno K, Yamaguchi A, Imai I (2012) Biomass size spectra of mesozooplankton in the Chukchi Sea during the summers of 1991/1992 and 2007/2008: an analysis using optical plankton counter data. ICES J Mar Sci 69:1205–1217
Michaels AF, Silver MW (1988) Primary production, sinking fluxes and microbial food web. Deep-Sea Res 35A:473–490
Moore SK, Suthers IM (2006) Evaluation and correction of subresolved particles by the optical plankton counter in three Australian estuaries with pristine to highly modified catchments. J Geophys Res 111:C05S04, doi:10.1029/2005JC002920
Motoda S (1957) North Pacific standard net. Inform Bull Plank Japan 4:13–15 (in Japanese with English abstract)
Motoda S (1959) Devices of simple plankton apparatus. Mem Fac Fish Hokkaido Univ 7:73–94
Nakata K, Koyama S (2003) Interannual changes of the winter to early spring biomass and composition of mesozooplankton in the Kuroshio Region in relation to climatic factors. J Oceanogr 59:225–234
Nakata K, Koyama S, Matsukawa Y (2001) Interannual variation in spring biomass and gut content composition of copepods in the Kuroshio current, 1971–89. Fish Oceanogr 10:329–341
Napp JM, Ortner PB, Pieper RE, Holliday DV (1993) Biovolume-size spectra of epipelagic zooplankton using a multi-frequency acoustic profiling system (MAPS). Deep-Sea Res I 40:445–459
Nogueira E, Gonzárez-Nuevo G, Bode A, Varela M, Morán XAG, Valdés L (2004) Comparison of biomass and size spectra derived from optical plankton counter data and net samples: application to the assessment of mesoplankton distribution along the Northwest and North Iberian Shelf. ICES J Mar Sci 61:508–517
Odate K (1994) Zooplankton biomass and its long-term variation in the western North Pacific Ocean, Tohoku Sea Area, Japan. Bull Tohoku Natl Fish Res Inst 56:115–173 (in Japanese with English abstract)
Piontkovski SA, Williams R, Melnik TA (1995) Spatial heterogeneity, biomass and size structure of plankton of the Indian Ocean: some general trends. Mar Ecol Prog Ser 117:219–227
Quinones RA, Platt T, Rodriguez J (2003) Patterns of biomass-size spectra from oligotrophic waters of the Northwest Atlantic. Prog Oceanogr 57:405–427
Rodriguez J, Mullin MM (1986) Relation between biomass and body weight of plankton in a steady state oceanic ecosystem. Limnol Oceanogr 31:361–370
Schultes S, Sourisseau M, LeMasson E, Lunven M, Marié L (2013) Influence of physical forcing on mesozooplankton communities at the Ushant tidal front. J Mar Syst 109–110:S191–S202
Sheldon RW, Sutcliffe WH Jr, Paranjape M (1977) Structure of pelagic food chain and relationship between plankton and fish production. J Fish Res Bd Can 34:2344–2353
Shimode S, Toda T, Kikuchi T (2006) Spatio-temporal changes in diversity and community structure of planktonic copepods in Sagami Bay, Japan. Mar Biol 148:581–197
Shoden S, Ikeda T, Yamaguchi A (2005) Vertical distribution, population structure and life cycle of Eucalanus bungii (Copepoda: Calanoida) in the Oyashio region, with notes on its regional variations. Mar Biol 146:497–511
Sourisseau M, Carlotti F (2006) Spatial distribution of zooplankton size spectra on the French continental shelf of the Bay of Biscay during spring 2000 and 2001. J Geophys Res 111, doi: 10.1029/2005JC003063
Sprules WG, Munawar M (1986) Plankton size spectra in relation to ecosystem productivity, size and perturbation. Can J Fish Aquat Sci 43:1789–1794
Suthers IM, Taggart CT, Rissik D, Baird ME (2006) Day and night ichthyoplankton assemblages and zooplankton biomass size spectrum in a deep ocean island wake. Mar Ecol Prog Ser 322:225–238
Tarling GA, Stowasser G, Ward P, Poulton AJ, Zhou M, Venables HJ, McGill RAR, Murphy EJ (2012) Seasonal trophic structure of the Scotia Sea pelagic ecosystem considered through biomass spectra and stable isotope analysis. Deep-Sea Res II 59–60:222–236
Terazaki M (1993) Deep-sea adaptation of the epipelagic chaetognath Sagitta elegans in the Japan Sea. Mar Ecol Prog Ser 98:79–88
Van der Meeren T, Næss T (1993) How does cod (Gadus morhua) cope with variability in feeding conditions during early larval stage? Mar Biol 116:637–647
Volkov AF (2008) Mean annual characteristics of zooplankton in the sea of Okhotsk, Bering Sea and Northwestern Pacific (annual and seasonal biomass values and predominance). Russ J Mar Biol 34:437–451
Yamaguchi A, Matsuno K, Abe Y, Arima D, Ohgi K (2014) Seasonal changes in zooplankton abundance, biomass, size structure and dominant copepods in the Oyashio region analysed by an optical plankton counter. Deep-Sea Res I 91:115–124
Yasuda I (2003) Hydrographic structure and variability of the Kuroshio-Oyashio transition area. J Oceanogr 59:389–402
Yokoi Y, Yamaguchi A, Ikeda T (2008) Regional and interannual changes in the abundance, biomass and community structure of mesozooplankton in the western North Pacific in early summer; as analysed with an optical plankton counter. Bull Plank Soc Japan 55:79–88 (in Japanese with English abstract)
Zhou M (2006) What determines the slope of a plankton biomass spectrum? J Plankton Res 28:437–448
Zhou M, Huntley ME (1997) Population dynamics theory of plankton based on biomass spectra. Mar Ecol Prog Ser 159:61–73
Zhou M, Tande KS, Zhu Y, Basedow S (2009) Productivity, trophic levels and size spectra of zooplankton in northern Norwegian shelf regions. Deep-Sea Res II 56:1934–1944
We are grateful to the captain and crew of the T/S Oshoro-Maru for their help in our field sampling. This study was supported by a Grant-in-Aid for Scientific Research (A) 24248032 and a Grant-in-Aid for Scientific Research on Innovative Areas 24110005 from the Japan Society for the Promotion of Science (JSPS).
Laboratory of Marine Biology, Graduate School of Fisheries Science, Hokkaido University, 3-1-1 Minatomachi, Hakodate, Hokkaido, 041-8611, Japan
Kaede Sato, Daichi Arima, Yoshiyuki Abe & Atsushi Yamaguchi
Arctic Environmental Research Center, National Institute of Polar Research, 10-3 Midori-cho, Tachikawa, Tokyo, 190-8518, Japan
Kohei Matsuno
Kaede Sato
Daichi Arima
Yoshiyuki Abe
Atsushi Yamaguchi
Correspondence to Atsushi Yamaguchi.
KS and AY wrote the manuscript. KM, DA, and YA participated in the design of the study and helped with OPC measurements. All authors read and approved the final manuscript.
Sato, K., Matsuno, K., Arima, D. et al. Spatial and temporal changes in zooplankton abundance, biovolume, and size spectra in the neighboring waters of Japan: analyses using an optical plankton counter. Zool. Stud. 54, 18 (2015). https://doi.org/10.1186/s40555-014-0098-z
DOI: https://doi.org/10.1186/s40555-014-0098-z
Consensus strategy in genes prioritization and combined bioinformatics analysis for preeclampsia pathogenesis
Eduardo Tejera ORCID: orcid.org/0000-0002-1377-04131,
Maykel Cruz-Monteagudo2,3,6,7,
Germán Burgos1,
María-Eugenia Sánchez1,
Aminael Sánchez-Rodríguez4,
Yunierkis Pérez-Castillo5,
Fernanda Borges6,
Maria Natália Dias Soeiro Cordeiro7,
César Paz-y-Miño8 &
Irene Rebelo9,10
BMC Medical Genomics volume 10, Article number: 50 (2017)
Preeclampsia is a multifactorial disease with unknown pathogenesis. Although recent studies have explored this disease using several bioinformatics tools, their main objective was not directed at pathogenesis. Consensus prioritization has proven to be highly efficient in the recognition of gene-disease associations. However, no information is available about the ability of consensus strategies to recognize early those genes directly involved in pathogenesis. Therefore, our aim in this study is to apply several theoretical approaches to explore preeclampsia, specifically the genes directly involved in its pathogenesis.
We first evaluated the consensus among 12 prioritization strategies for the early recognition of pathogenic genes related to preeclampsia. A communality analysis of the protein-protein interaction network of the previously selected genes was then performed, followed by enrichment analysis covering metabolic pathways as well as gene ontology. Microarray data were also collected and used to confirm our results and to weight the previously enriched pathways.
The consensus prioritized gene list was rationally filtered to 476 genes using several criteria. The communality analysis showed an enrichment of communities connected with the VEGF signaling pathway. This pathway was also enriched when the microarray data were considered. Our results point to VEGF, FLT1, and KDR as relevant pathogenic genes, as well as to genes connected with NO metabolism.
Our results reveal that the consensus strategy improves the detection and initial enrichment of pathogenic genes, at least in preeclampsia. Moreover, combining the first percent of the prioritized genes with the protein-protein interaction network, followed by communality analysis, reduces the gene space and identifies well-known pathogenesis-related genes. However, genes such as HSP90, PAK2, CD247, and others included in the first 1% of the prioritized list need to be explored further in preeclampsia pathogenesis through experimental approaches.
The study of preeclampsia (PE) using a bioinformatics approach is affected by several aspects that inevitably influence the interpretation and establish an implicit frame for our analysis. PE is a multifactorial disease that probably involves several genes and environmental factors. However, the main theory behind PE is that the disorder results from placental ischemia, with the subsequent release of several factors into the maternal circulation [1, 2]. The ischemic origin is supported mainly by a failure in the transformation of the spiral arteries caused by abnormal trophoblastic invasion [1,2,3]. Therefore, the placenta (at this level) is the central organ for pathogenesis. From this point forward, the possible scenarios become more complex. Nevertheless, endothelial dysfunction seems to be a primary factor leading to the remaining problems and clinical manifestations. The role of the placenta is clearly reflected in the application of "omic" tools, specifically microarray studies. Simple inspection of microarray data [4, 5] (through the GEO and ArrayExpress databases) reveals that the majority were obtained from placenta samples with a case/control design.
Even though array technologies can be valuable in providing broad gene-disease associations, a problem arises from the experimental design (case/control). With this type of design it is hard to differentiate pathogenic from non-pathogenic genes. This means that if we obtain a highly significant up- or down-regulated gene, we cannot be sure that it is involved in pathogenesis. Moreover, we cannot confirm that these up- or down-regulated genes can be used as risk or predictive measures without further experimental analysis in a longitudinal design. Even with all of these considerations, microarray information is used for bioinformatics analysis and gene prioritization, suggesting that some of these genes are probably related to pathogenesis [6,7,8,9,10,11,12].
What is the situation in the scientific literature? The ratio between case/control and prospective analyses is biased. It is difficult to prove this statement without a rigorous analysis of the scientific information. However, searching PubMed for pre-eclampsia (MeSH term) yields 13,173 publications in the last 10 years, whereas adding the terms "longitudinal studies" or "prospective" reduces the result to 1,578 over the same interval. Even though this approach can be considered superficial, it clearly indicates the bias toward case/control studies. Therefore, any prioritization strategy based on text mining or even database exploration will provide us with gene-disease associations; however, we cannot confirm that these genes are primarily related to PE pathogenesis.
There are few studies of PE focused on systems biology or other bioinformatics tools [6,7,8,9,10,11,12,13,14]. Some of them use the microarray information described above, while others use text mining and protein-protein interaction (PPI) networks, alone or in combination. All these methods are affected by the issues discussed above. Still, a more important problem with bioinformatics tools is their diversity. There are several ways in which the information can be combined, and not all of them converge on the same results. For example, the recent work of Miranda van Uitert et al. [6] on microarray data proposed several genes, but when compared with two similar studies we found an overlap of 77% with Vaiman et al. [13] and 44% with Moslehi et al. [14]. Across these three studies a total of 556 genes were selected, but only 47 are common (~8%), which is a very low overlap (this microarray information is discussed further below).
Each particular problem may have a tool better suited to solve it, and in terms of prioritization the consensus strategy has proven to be the most effective way to explore gene-disease associations [15, 16]. However, it is not clear whether consensus is also effective for the identification of pathogenic genes, and establishing this is the first step of the current work. Consequently, we include several prioritization strategies that are integrated using a consensus strategy in order to rank the genes in the gene-disease association. The consensus result is then integrated into a common pathway and compared with previous microarray meta-analysis results in order to clarify gene function. The goal of this second step, which includes network analysis and metabolic pathway analysis, is to additionally evaluate the capacity to identify pathogenic pathways and their relevance.
Selection of pathogenic genes for validation
In order to validate the prioritization strategy on pathogenic genes, we needed to identify specific genes with a high probability of being involved in PE pathogenesis. Through manual literature inspection we considered a gene pathogenic if:
The silencing or induced overexpression of the gene in animal models generates a preeclampsia-like clinical phenotype (this group of genes was named G1);
At least one variant (polymorphism) of the gene has been associated with PE; we only considered articles that applied meta-analysis methods (this group of genes was named G2).
The full analysis of the genes in each group can be found in Additional file 1. We found 35 unique genes combining the G1 and G2 groups (of course, this is not an exhaustive list). The selected genes in each group and their corresponding Entrez Gene IDs are:
G1 (n = 27): ADA (100), ADORA2A (135), ADORA2B (136), AGTR1 (185), APOH (350), CD73 (4907), CRP (1401), ENG (2022), EDN1 (1606), FLT1 (2321), GADD45A (1647), HADHA (3030), HIF1A (3091), IDO1 (3620), IL10 (3586), IL17A (3605), IL6 (2569), NOS1 (4842), NOS2 (4843), NOS3 (4846), PGF (5228), ROS1 (6098), TACR3 (6870), TGFB1 (7040), TNF (7124), TNFSF14 (8740), VEGFA (7422).
G2 (n = 13): F5 (2153), F2 (2147), AGT (183), MTHFR (4524), NOS3 (4846), ACE (1636), SERPINE1 (5054), VEGFA (7422), LEPR (3953), TGFB1 (7040), AGTR1 (185), HLA-G (3135), IL10 (3586).
Prioritization algorithms and consensus strategy
From the prioritization portal [15, 16] we selected methods according to the following criteria: 1) full availability as a web service and 2) requiring only the disease name for gene prioritization. Under these conditions we found 12 methods: Biograph [17], Candid [18], Glad4U [19], PolySearch [20], Cipher [21], Guildify [22], DisgeNet [23], GeneProspector [24], Genie [25], SNPs3D [26], GeneDistiller [27] and MetaRanker [28]. Cipher, Guildify, and DisgeNet were not selected from the prioritization portal but from the literature; however, they fulfill the same two requirements. The characteristics of these methods have been comprehensively reviewed by other authors [15, 29].
Our strategy for combining the scores obtained by each independent method is similar to that used in [30, 31]. Each gene (denoted i) in the ranked list provided by each method (denoted j) was normalized (GeneN_{i,j}, the normalized score of gene i in method j) in order to integrate all methods in the consensus approach. For the final score of each gene, we considered the average normalized score as well as the number of methods that predict the gene (denoted n_i), using the formula:
$$ Gene_i=\sqrt{\left(\frac{n_i-1}{12-1}\right)\left(\frac{1}{j}\sum_j GeneN_{i,j}\right)} \qquad (1) $$
Equation 1 corresponds to the geometric mean between the average score of each gene across methods and the normalized score according to the number of methods that predict the gene-disease association. However, this formula is zero if the gene is predicted by only one method. Therefore, we sorted the genes according to the Gene_i values and, secondarily, according to the arithmetic average \( \left[\left(\frac{n_i-1}{12-1}\right)+\left(\frac{1}{j}\sum_j GeneN_{i,j}\right)\right]/2 \). This sorting produces a ranking that, after normalization, leads to the final score of each gene (ConsenScore_i). If two genes are predicted by only one method and have the same normalized scores, they will also have the same ConsenScore_i.
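The following Python sketch illustrates this consensus scoring on toy data; the gene names, scores, and variable names are placeholders, and this is not the authors' implementation.

import numpy as np

n_methods = 12
# Hypothetical normalized scores per gene (one entry per method that predicts the gene)
scores = {
    "VEGFA": [0.98, 0.91, 0.88, 0.95, 0.80],
    "FLT1":  [0.90, 0.85],
    "GENEX": [0.40],
}

rows = []
for gene, s in scores.items():
    n_i = len(s)                               # number of methods predicting the gene
    coverage = (n_i - 1) / (n_methods - 1)     # normalized method count
    mean_score = float(np.mean(s))             # average normalized score across methods
    geo = np.sqrt(coverage * mean_score)       # Eq. 1 (zero if predicted by one method)
    arith = (coverage + mean_score) / 2.0      # arithmetic average used as tie-breaker
    rows.append((gene, geo, arith))

# Sort by Eq. 1 and then by the arithmetic average; the normalized rank gives ConsenScore_i
rows.sort(key=lambda r: (r[1], r[2]), reverse=True)
for rank, (gene, geo, arith) in enumerate(rows):
    consen_score = 1.0 - rank / (len(rows) - 1) if len(rows) > 1 else 1.0
    print(gene, round(geo, 3), round(consen_score, 3))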
The final list of prioritized genes is very long (more than 18,000 genes). We needed a strategy to define a rational cutoff retaining most of the pathogenic information with minimal noise. To accomplish this we used the pathogenic genes defined above and the index \( I_i=\frac{TP_i}{FP_i+1}\,ConsenScore_i \), where TP_i and FP_i are the numbers of true and false positives up to the rank of gene i, respectively. The maximum of I_i can be understood as the best compromise between the true positive and false positive counts, weighted by the ranking score of each gene.
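A minimal sketch of this cutoff search is shown below (Python); the ranked list and the set of pathogenic genes are toy placeholders, not the real data.

# Genes sorted by decreasing ConsenScore_i, with their scores
ranked_genes = ["VEGFA", "FLT1", "GENEX", "AGT", "GENEY", "NOS3"]
consen = [1.00, 0.95, 0.90, 0.85, 0.80, 0.75]
pathogenic = {"VEGFA", "FLT1", "AGT", "NOS3"}   # predefined G1/G2 genes (toy set)

best_i, best_rank = -1.0, None
tp = fp = 0
for rank, (gene, score) in enumerate(zip(ranked_genes, consen), start=1):
    if gene in pathogenic:
        tp += 1
    else:
        fp += 1
    i_value = (tp / (fp + 1)) * score           # I_i = [TP_i / (FP_i + 1)] * ConsenScore_i
    if i_value > best_i:
        best_i, best_rank = i_value, rank

print(f"rational cutoff at rank {best_rank} with I = {best_i:.3f}")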
Early recognition analysis in prioritization
Several enrichment metrics have been proposed in the chemoinformatics literature to measure the enrichment ability of a virtual screening protocol [32], and they have recently been applied to gene prioritization [33]. In this work, similarly to [33], we used some of the most widely used metrics to estimate enrichment ability and compare the different gene prioritization strategies. The overall enrichment metrics include the area under the accumulation curve (AUAC), the area under the ROC curve (ROC), and the enrichment factor (EF) evaluated at the top 1, 5, 10, and 20% of the ranked list. The early recognition metrics were the robust initial enhancement (RIE) and the Boltzmann-enhanced discrimination of ROC (BEDROC), also evaluated at the top 1, 5, 10, and 20% of the ranked list [32]. Both the classic and the early recognition enrichment metrics were calculated using the perl script Cresset_VS [34].
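In this work these metrics were obtained with the Cresset_VS script [34]; only to make two of them concrete, the Python sketch below computes the enrichment factor at a given top fraction and the ROC AUC for a toy ranked list (the data and function names are ours, not part of the cited script).

def enrichment_factor(ranked_is_active, fraction):
    # EF at the top `fraction` of a list ranked from best to worst
    n = len(ranked_is_active)
    n_top = max(1, int(round(fraction * n)))
    actives_total = sum(ranked_is_active)
    actives_top = sum(ranked_is_active[:n_top])
    return (actives_top / n_top) / (actives_total / n)

def roc_auc(ranked_is_active):
    # ROC AUC obtained by counting active/inactive pairs in rank order
    actives = sum(ranked_is_active)
    inactives = len(ranked_is_active) - actives
    seen_inactive, pairs = 0, 0
    for flag in ranked_is_active:               # best-ranked first
        if flag:
            pairs += inactives - seen_inactive  # inactives ranked below this active
        else:
            seen_inactive += 1
    return pairs / (actives * inactives)

# 1 marks a known pathogenic gene, 0 any other gene, ordered by decreasing score
ranked = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
print(enrichment_factor(ranked, 0.20), roc_auc(ranked))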
Enrichment analysis
We used the DAVID Bioinformatics Resource [35, 36] for gene ontology (GO) and pathway enrichment analysis. The number of GO terms can be very large given the number of genes; therefore, we used Revigo [37] to simplify the GO terms, keeping those with the highest specificity. We additionally used RSpider [38] to obtain an integrated pathway combining the Reactome and KEGG databases. The pathways in these two databases are not the same, so any enrichment will produce different pathways that could nevertheless be connected or even very similar across the databases. RSpider produces not only a statistical analysis of the enrichment but also a network representation integrating the information from both databases. The main goal of RSpider is to connect as many input genes as possible into an uninterrupted sub-network component using a minimal number of missing genes.
Protein-protein interaction network and analysis
We used the STRING database [39] to create the protein-protein interaction network with a confidence cutoff of 0.9 and no added nodes. We also used Cytoscape [40] for the calculation of centrality indexes and for network visualization.
Communality (clique) network analysis by the clique percolation method was applied using CFinder [41]. Communality analysis provides a better description of the network topology, including the location of highly connected sub-graphs (cliques) and/or overlapping modules that usually correspond to relevant biological information. The chosen k-clique value affects the number of communities and the number of genes in each community. We created a rational cutoff by balancing the number of communities and the distribution of genes across them. In general, higher k-clique values imply few communities, while lower values lead to many communities. In our network, both extremes (too small or too high k-clique values) result in an unbalanced distribution of genes across communities. Therefore, we defined the index \( S^k=\frac{\left|\mathrm{mean}\left(N_g^k\right)-\mathrm{median}\left(N_g^k\right)\right|}{N_c^k} \), where \( N_g^k \) and \( N_c^k \) are the number of genes in each community and the number of communities for a given k-clique cutoff value.
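The original analysis used CFinder [41]; only as an illustration, the sketch below computes the same S^k index with the clique-percolation implementation available in the Python package networkx (the edge list is a placeholder, and substituting CFinder with networkx is our assumption).

import numpy as np
import networkx as nx
from networkx.algorithms.community import k_clique_communities

# Placeholder PPI edge list; in practice these would be STRING interactions with confidence >= 0.9
G = nx.Graph()
G.add_edges_from([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"),
                  ("D", "E"), ("E", "F"), ("D", "F"), ("C", "F")])

def s_index(graph, k):
    communities = [list(c) for c in k_clique_communities(graph, k)]
    if not communities:
        return None
    sizes = np.array([len(c) for c in communities])
    # S^k = |mean - median| of community sizes divided by the number of communities
    return abs(sizes.mean() - np.median(sizes)) / len(communities)

for k in range(3, 6):
    print(k, s_index(G, k))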
In each community obtained using CFinder, we performed a pathway enrichment analysis followed by a ranking of all pathways. This ranking or scoring was done as follows (a sketch of the computation is given after the list): if \( ConsenScore_i^k \) is the ConsenScore_i of gene i in community k, then:
Each community k was weighted as \( W_k=\sum ConsenScore_i^k/N_k \), where N_k is the number of communities.
Each pathway m was weighted as \( PathRankScore_m=\sum W_k^m/N_k^m \), where \( W_k^m \) is the weight (W_k) of each community connected with pathway m and \( N_k^m \) is the number of communities connected with pathway m.
A second weight was given to pathway m (PathGeneScore_m) considering all the genes involved in the pathway: \( PathGeneScore_m=\sqrt{\left\langle ConsenScore_i^m\right\rangle \frac{n_m}{N_m}} \), where N_m is the total number of genes in pathway m, n_m is the number of those genes also found in the protein-protein interaction network, and \( \left\langle ConsenScore_i^m\right\rangle \) is the average ConsenScore_i of all genes present in pathway m.
The final score associated with pathway m (PathScore_m) is calculated as the geometric mean of PathGeneScore_m and the normalized PathRankScore_m.
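A sketch of this pathway scoring is given below (Python). All inputs are placeholders; for the community weight we take N_k as the number of genes in community k (so W_k becomes the mean ConsenScore of the community), which is our reading of the definition above, and the final normalization of PathRankScore across pathways is omitted for brevity.

import numpy as np

communities = {                        # community id -> genes of that community in the PPI network
    2: ["VEGFA", "FLT1", "KDR", "AKT1"],
    6: ["IL6", "TNF", "STAT3"],
}
consen = {"VEGFA": 0.99, "FLT1": 0.97, "KDR": 0.90, "AKT1": 0.85,
          "IL6": 0.80, "TNF": 0.78, "STAT3": 0.70, "NOS3": 0.95}
pathways = {                           # pathway -> all genes annotated to it (toy annotation)
    "VEGF signaling": ["VEGFA", "FLT1", "KDR", "NOS3", "MAPKX"],
    "Jak-STAT signaling": ["IL6", "STAT3", "JAK2X"],
}
network_genes = set(consen)            # genes present in the PPI network

# Community weights (assumed here: mean ConsenScore of the genes in the community)
W = {k: np.mean([consen[g] for g in genes]) for k, genes in communities.items()}

for name, genes in pathways.items():
    linked = [k for k, comm in communities.items() if set(genes) & set(comm)]
    path_rank = np.mean([W[k] for k in linked]) if linked else 0.0       # PathRankScore_m
    in_net = [g for g in genes if g in network_genes]
    mean_consen = np.mean([consen[g] for g in in_net]) if in_net else 0.0
    path_gene = np.sqrt(mean_consen * len(in_net) / len(genes))          # PathGeneScore_m
    print(name, round(np.sqrt(path_gene * path_rank), 3))                # PathScore_m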
A total of five studies were considered in the microarray data integration, named A1 [7], A2 [6], A3 [14], A4 [8], and A5 [10]. From each study we extracted the significantly up-regulated/down-regulated genes following the procedure of the corresponding authors. In no case was fold-expression used as the significance cutoff; instead, we used the adjusted p-value reported by the authors, with a criterion of adjusted p-value < 0.05. The strategies of the reported articles regarding microarray integration, gene expression correction, and annotation were different (this is discussed further in the Results section, and a brief description can be found in Additional file 2). The adjusted p-values were used to create a ranking of genes in each study, followed by independent normalization.
We could have followed a meta-analysis cross-normalization approach as in [10, 33]. However, because different strategies are possible for this analysis, leading to different results, we chose to consider each study separately. In each study j, a particular up-regulated or down-regulated gene i has a normalized score according to its ranking (GeneS_{i,j}, the normalized score of gene i in study j). The consensus scoring of each gene in the microarray data was carried out similarly to the consensus prioritization strategy; that is, the final score of each gene was calculated as \( GeneAS_i=\sqrt{\left(\frac{Narray_i-1}{5-1}\right)\left(\frac{1}{j}\sum_j GeneS_{i,j}\right)} \), where Narray_i is the number of studies reporting gene i. Combining all genes in the selected studies, we found 1,944 genes: 916 always reported as up-regulated, 1,013 always reported as down-regulated, and 13 genes with ambiguous expression. The full list of genes and the calculated scores are presented in Additional file 2. This final score (GeneAS_i) has a double meaning: 1) inter-study agreement and 2) the statistical significance of the gene within each study. Therefore, the highest scores imply that the gene was identified in several studies and with high statistical significance.
Consensus prioritization
The detection of pathogenic genes by each method is presented in Table 1. As can be seen, not all methods are capable of identifying the 35 proposed pathogenic genes.
Table 1 Identification (in %) of pathogenic genes in each approach
The consensus strategy identifies the entire G2 set within the first 1% of the final gene list (>18,000 genes) and in all cases remains the method with the highest identification of pathogenic genes. The MetaRanker method [28] comes very close to the consensus strategy. Identification of the pathogenic genes is important, but the early recognition ability is even more relevant.
The average rank of the studied genes is lower with the consensus strategy than with the other methods used independently (Table 2). The average rank of the detected genes is not, strictly speaking, a measure of early recognition; however, intuitively it means that the consensus strategy detects the pathogenic genes (those identified in the G1 and G2 groups) earlier. MetaRanker is once again the closest strategy. Although these two analyses indicate that the consensus strategy prioritizes the pathogenic genes better, we additionally calculated several indexes directly related to the evaluation of early enrichment (Table 3). Because MetaRanker is the method with the closest results, the early enrichment analysis was performed comparing only the consensus and MetaRanker strategies.
Table 2 Average rank of identified pathogenic genes in each method
Table 3 Initial enrichment indexes for the MetaRanker and the Consensus strategy
The early enrichment indexes clearly show that the consensus strategy outperforms MetaRanker in pathogenic gene detection, locating more genes at significantly lower ranks. We compared the ranks of the pathogenic genes between the two methods for G1, G2, and G1,2 using the Wilcoxon signed-rank test. The p-value was lower than 0.01 for all three groups, indicating statistically significant differences between the rankings obtained by the two methods.
The previous calculations are based on the predefined genes in the G1 and G2 groups. In order to explore the consistency of our results when those genes are changed, we performed bootstrap sampling as follows (a sketch is given below):
We removed 5 random genes from the 35 pathogenic genes (around 14%) and evaluated the median rank of the remaining genes in both the consensus and MetaRanker rankings;
We repeated the previous step 1,000 times, each time selecting a new set of 5 random genes.
The density distributions (estimated with the Gaussian kernel of the R function "density") of the 1,000 values for both methods are presented in Fig. 1.
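The resampling itself is straightforward; a minimal Python sketch is shown below, with made-up rank tables standing in for the real consensus and MetaRanker rankings.

import numpy as np

rng = np.random.default_rng(0)
pathogenic = [f"g{i}" for i in range(35)]
# Placeholder ranks of the 35 pathogenic genes in each method (lower = better)
rank_consensus = {g: int(r) for g, r in zip(pathogenic, rng.integers(1, 200, 35))}
rank_metaranker = {g: int(r) for g, r in zip(pathogenic, rng.integers(1, 400, 35))}

medians_c, medians_m = [], []
for _ in range(1000):
    kept = rng.choice(pathogenic, size=30, replace=False)   # drop 5 genes (~14%)
    medians_c.append(np.median([rank_consensus[g] for g in kept]))
    medians_m.append(np.median([rank_metaranker[g] for g in kept]))

print("consensus, median of medians:", np.median(medians_c))
print("MetaRanker, median of medians:", np.median(medians_m))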
Ranking distributions for the consensus and MetaRanker strategies over 1,000 resamplings, each randomly removing 14% of the pathogenic genes (G1,2)
As can be seen, across the 1,000 resamplings the consensus strategy more frequently yields lower ranks for the genes than MetaRanker, in agreement with our previous results in Table 3.
Enrichment analysis of preeclampsia related genes and protein-protein interaction network
Considering the unique G1 and G2 genes (n = 35), Table 1 shows that the consensus strategy already identifies 89% of them within the first 10% of the data; this means that 89% of the 35 genes are among the initial 1,800 genes obtained from prioritization. This is still a very large number; therefore, a strategy for a rational cutoff was designed (see Methods). The index I_i, which considers the true positive and false positive ratio, can be used as a rational cutoff to reduce the number of genes. This procedure is shown in Fig. 2.
Left) ROC curve obtained with the prioritized genes for PE and the proposed pathogenic list. Right) Variation of I_i with respect to gene ranking. The maximal value of I_i is 0.76085 and corresponds to a ranking value of 476
The maximal value of I_i (Fig. 2) is 0.76085 and corresponds to a ranking value of 476; therefore, the reduced list for PE comprises the first 476 genes. The entire gene list, together with the scores and rankings, can be found in Additional file 3. Among these 476 genes are 30 of the 35 predefined pathogenic genes.
The enrichment analysis of biological processes in these genes results in more than 500 terms with an adjusted p-value <0.01 (FDR-corrected) (Additional file 4). In order to simplify this list we used Revigo [37] to calculate the frequencies of the gene ontology terms and considered only those terms with a frequency lower than 0.01% (the full list of terms can be found in Additional file 4). Even with this restriction the number of terms remains high, so only some of the more relevant initial terms are presented in Table 4.
Table 4 Some of the more specific biological process obtained by enrichment analysis in PE genes
Similarly, the enrichment analysis of metabolic pathways is presented in Table 5 using to main databases: KEGG and Reactome.
Table 5 Pathways enrichment analysis using Reactome and KEGG databases
The pathways presented in Table 5 are only a partial list; the complete lists for Reactome and KEGG are given in Additional file 4.
The enriched biological processes and pathways are consistent with each other and with the scientific knowledge about PE; however, it would be hard to establish their relative relevance without further analysis. For this reason we carried out a network analysis.
With the indicated cutoff of 0.9, the final protein-protein interaction network has 417 nodes, corresponding to 87.6% of the initial 476 genes. The S^k index (proposed in the Methods section to identify a rational k-clique number) reaches a minimum either through an increase in the number of communities and/or through greater similarity between the mean and median number of genes across communities. Figure 3 shows that the desired values lie between 8 and 10. The number of communities for k = 8 is 16, compared to 9 and 5 for k = 9 and 10, respectively. Considering that several biological analyses are carried out in each community, 16 communities would be difficult to study; additionally, at k = 8 one of the communities has almost twice as many genes as the remaining communities. For these reasons we selected k = 9 for our analysis (Fig. 4, Left).
Values of S k with respect to each k-clique cutoff value
Left) Community analysis for k-cliques = 9. Black nodes represent genes that are part of several communities; the remaining colors correspond to the 9 communities obtained. Right) Connectivity degree shown as a color gradient (min = 9, white; max = 85, red, corresponding to the PIK3R1 gene)
Each community can be weighted considering the ConsenScore i of each gene in the community (see Table 6). Additionally we also included the number of pathogenic genes present in the community.
Table 6 Communities membership and scores
Communities 2 and 6 can be considered the most relevant. However, it is also useful to prioritize the metabolic pathways through an enrichment analysis in each community (full list presented in Additional file 4), weighted as described in the Methods section.
Microarray data integration
Of the 1,944 genes collected from the microarray data, only 80 are present among the 476 obtained by consensus prioritization, representing only 4%. The worst gene overlap with respect to the other microarray studies is with study A1. A2, A3, A4, and A5 share 40 genes in common, but this is drastically reduced to 2 when A1 is added (Fig. 5, Left). The agreement between the selected microarray studies is not good in terms of gene identification, as can be seen in the Venn diagrams (Fig. 5, Left). This is a direct consequence of the differences in the initial microarray data and processing strategies (presented in Additional file 2). Study A1 is the only one without any meta-analysis strategy. Both A2 and A3 carried out a meta-analysis, while A4 and A5 relied specifically on cross-platform normalization. The differences between these two strategies of microarray data integration have been explored previously [42]. In fact, A2 and A3 share 111 genes, and similarly A4 and A5 share 237 genes; this gene space is reduced to 40 genes when all four studies are combined. Moreover, as can be seen in Additional file 2, A4 and A5 share a number of similarities regarding the initial microarray data.
Left) Venn diagrams between the five microarray studies. Right) Agreement between each microarray study and the consensus gene list
Analyzing the number of genes that each study independently shares with the initial 476 prioritized genes (Fig. 5, Right), we found: A1 (n = 12, 3.8%), A2 (n = 30, 7.7%), A3 (n = 30, 7.9%), A4 (n = 53, 4.2%) and A5 (n = 26, 7.5%). This result indicates that the methodology of Moslehi et al. [14] best represents our prioritized genes (although A2 and A5 are very close). Moreover, considering the average consensus score of those shared genes, we found: A1 (0.396), A2 (0.561), A3 (0.597), A4 (0.111) and A5 (0.540). This average scoring also suggests that the work of Moslehi et al. [14] covers better-ranked genes (again not far from A2 and A5). These values are discussed further below.
There are a total of 41 up-regulated and 39 down-regulated genes commonly found between all integrated genes in microarray data and the 476 already prioritized genes (a total of 80 genes). The up-regulated are: VEGFA, FLT1, STOX1, SERPINE1, LEP, INHA, INHBA, ENG, HMOX1, VWF, TGFB1, TFPI, ADAM12, CRH, PAPPA2, VEGFC, CP, MMP14, FN1, SERPINA3, SIGLEC6, ACE2, PREP, FABP4, EGFR, FSTL3, IL6ST, VDR, IGFBP5, MMP15, ITGA5, TRIM24, CGA, MET, DUSP1, MIF, TAPBP, NR1H2, MMP11, HPN, GLRX and the down-regulated are: ACVRL1, ADRB3, AGTR1, ANGPT1, CD4, CD14, COL1A1, COL1A2, F5, F13A1, FCER1G, FGF2, GHR, CFH, HGF, HSD11B2, CFI, IGF1, IGFBP7, IL10RA, IDO1, JAK1, KLRD1, MMP1, NEDD4, ENPP1, PLAUR, MAPK1, CCL2, SOD1, SPP1, TGFBR3, THBS1, TLR4, VCAM1, APLN, HGS, ROCK2, PLAC1. From these 80 genes 34 (42.5%) were located with a ranking less than 180 (around the first 1% of the list) in the consensus strategy prioritization.
Comparing the consensus strategy scores with the scores obtained from the microarray studies (Fig. 6) yields some interesting results. Of these 80 genes, 72 are also present among the 417 forming the interaction network, and 19 are also part of some community. We can evaluate the contribution of these 19 genes to each community using the average GeneAS_i of the genes belonging to a particular community, in a similar way as before (Table 6). The corresponding weights for each community are: 1 (0.062), 2 (0.140), 3 (0.072), 4 (0.033), 5 (0.014), 6 (0.132), 7 (0.054), 8 (0.031) and 9 (0.048). These weights also confirm that communities 2 and 6 may be the most relevant, as presented above.
Relationship between the score obtained from the microarray data and the consensus strategy prioritization score. The red line indicates a consensus prioritization score of 0.7
Integrated metabolic network
Using RSpider [38], only 272 of the 476 genes were mapped to the reference global network, yielding three significant models (Table 8).
The p-value indicates the probability that a random gene/protein list has a maximal connected component of the same or larger size; it is computed by Monte Carlo simulation as described in [38]. Besides this statistical analysis, we should also note that among the initial 476 genes there are 80 that also match the microarray data, while in the smallest network 23 of 98 genes are also present in the microarray information. This enrichment is statistically significant (p-value = 0.036) compared to random gene extraction. The 23 genes are: ANGPT1, COL1A1, COL1A2, F5, F13A1, FCER1G, FLT1, FN1, IGF1, IGFBP5, ITGA5, JAK1, MET, MMP1, SERPINE1, PLAUR, SPP1, TFPI, THBS1, VEGFA, VEGFC, VWF and PAPPA2. The network associated with Model 1 is presented in Fig. 7. The network for Model 3 of Table 8 is presented in Additional file 5.
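A Monte Carlo check of this kind of overlap enrichment could look like the sketch below (Python); the exact procedure used by the authors is not specified beyond "random gene extraction", so this is only an illustration using the reported counts.

import numpy as np

rng = np.random.default_rng(1)
total_genes = 476          # prioritized genes
microarray_hits = 80       # of these, genes also found in the microarray data
subnet_size = 98           # genes in the smallest RSpider network
observed = 23              # microarray hits inside that sub-network

universe = np.arange(total_genes)
hit_set = set(range(microarray_hits))          # label the first 80 indices as microarray hits
trials, count = 10000, 0
for _ in range(trials):
    draw = rng.choice(universe, size=subnet_size, replace=False)
    if sum(1 for g in draw if g in hit_set) >= observed:
        count += 1
print("empirical p-value:", count / trials)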
Integrated metabolic network with 98 genes colored according to our microarray data. The colors green, red, and blue indicate down-regulated, up-regulated, and no microarray information, respectively
The expanded integrated metabolic network (Model 3) allows the entry of 114 additional genes in order to provide connections between the initial genes. However, it also incorporates 32 compounds that act as connectors. These compounds, obtained from the integrated network, are presented in Table 8; some of them are very generic, like "fatty acids", while others are very specific, such as "serotonin" or "L-homocysteine". These compounds can be grouped mainly into lipid, steroid, amino acid, and purine metabolism and are explored further in the discussion.
Consensus prioritization and enrichment analysis
Our results confirm that the consensus strategy improves the detection and prioritization of pathogenic genes. The application of early recognition measures is important and should be considered together with identification capabilities. The ability to rank the relevant genes at the top of a long prioritized list directly reduces the cost of experimental validation. Previous authors have shown that a consensus prioritization strategy improves the detection of genes related to a specific pathology [15, 16, 33]. However, we show here for the first time that the consensus strategy also improves the early enrichment of genes related to pathogenesis (at least in PE).
Any gene-disease association study is intrinsically focused on pathogenesis discovery. During this process some relationships may be established that are not necessarily due to pathogenesis but to secondary modifications (the experimental design is directly related to this type of result). If several prioritization strategies are combined, the possibility of removing noisy relationships (in terms of pathogenesis) increases, as does the agreement on relevant genes.
The enrichment analyses of biological processes and metabolic pathways lead to largely expected information. Some of the biological processes, such as those related to blood pressure or vasoconstriction, have a direct association with PE clinical development. Biological processes associated with inflammation, angiogenesis, cytokines, the immune system, and hormone regulation could also be associated with PE clinical manifestation or even pathogenesis [6, 7, 43, 44], and they agree well with the metabolic pathway enrichment results (Table 5). The pathway analysis also shows good agreement with previous work: the cytokine pathway, VEGF and PDGF signaling, the immune system, and even some cancer-related pathways were previously reported by other authors [6, 7, 9, 14, 44, 45]. Signaling pathways in general are highly relevant, as are several routes connected with cancer (see Additional file 4), which also agrees with previous studies [14, 46, 47].
Protein-protein interaction network, communality analysis and microarrays integration
The enrichment analysis can be helpful; however, it is hard to rank the pathways according to their implications in pathogenesis without further analysis. This is the main reason for combining it with the analysis of the protein-protein interaction network. The entire network contains 417 nodes, but only 111 are part of some community. The 417-node network already comprises 29 of the 35 predefined pathogenic genes. The sub-network containing only genes that belong to some community has 12 of these 29 predefined pathogenic genes; moreover, only 3 (HADHA, IDO1 and HLA-G) of the remaining 17 genes are not directly connected with the sub-network. On the other hand, the average degree of the pathogenic genes is 23.6, which is statistically significantly higher than that of the non-pathogenic genes (14.2) at p-value <0.05. This result indicates that node degree could be associated with pathogenesis in this network. The black nodes represent genes that are present in more than one community and are therefore usually those with higher connectivity degree, as can also be seen in Fig. 4, Right (and Table 6).
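The text does not state which test was used for this degree comparison; as an illustration only, the sketch below applies a Mann-Whitney U test to simulated degree values (Python with scipy; the degrees are not the real network data).

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
# Placeholder node degrees: 29 pathogenic genes vs. the remaining network genes
deg_pathogenic = rng.poisson(23.6, size=29)
deg_other = rng.poisson(14.2, size=388)

stat, p = mannwhitneyu(deg_pathogenic, deg_other, alternative="greater")
print(f"U = {stat:.0f}, one-sided p = {p:.3g}")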
The top 20 genes with the highest connectivity degree are: PIK3R1, SRC, VEGFA, KNG1, AKT1, IL6, TP53, TGFB1, STAT3, IGF1, AGT, EDN1, JAK2, INS, EGFR, SHC1, MAPK8, MMP9, STAT5A and MAPK1. The majority of them are located between communities (black nodes), and only 3 (EGFR, SHC1 and MAPK8) were not identified as members of any community. The community analysis (Table 6) indicates that communities 2 and 6 can be considered the most relevant, showing: 1) the highest scores, 2) the lowest average ranking, and 3) the largest numbers of pathogenic genes. In terms of connectivity degree, community 2 has the highest value, whereas community 6 has an intermediate one. Looking at the genes in community 2, we can clearly identify elements of VEGF signaling and of NOS metabolism through AKT1, and in general a core of possible mechanisms well established in PE that are discussed later. Moreover, the prioritization of the metabolic pathways shows that the VEGF signaling pathway is not only the most relevant pathway (Table 7) but is also exclusively enriched in community 2.
Table 7 Pathways enrichment analysis in communities and their associated weights
We can in fact notice that the top pathways primarily involve communities 2 and 6, indicating that these communities as well as their genes are highly relevant in PE. Additionally, community 5 is exclusively related to the renin-angiotensin system and, considering that it is also enriched in neuroactive ligand-receptor interaction and vascular smooth muscle contraction, we can suspect that this community has a strong connection with the hypertensive disorder. Interestingly, community 8 has the largest number of associated pathways. However, most of them are signaling pathways, such as TGF-beta signaling. This enrichment in signaling processes could indicate that it is a central group of genes acting as connectors between several metabolic processes, and it would therefore be relevant to understanding PE heterogeneity.
Given the importance of communities 2 and 6, the major pathways connected to both communities, ordered by relevance, are: VEGF signaling pathway, mTOR signaling pathway, adipocytokine signaling pathway, intestinal immune network for IgA production, leukocyte transendothelial migration, progesterone-mediated oocyte maturation, cytokine-cytokine receptor interaction, Jak-STAT signaling pathway, complement and coagulation cascades, TGF-beta signaling pathway, focal adhesion, and regulation of actin cytoskeleton.
In order to explore our results using additional experimental information, we included the microarray analysis. The worst gene overlap with respect to the prioritized list is with study A1 (Fig. 5), while the other four studies show more consistent results. The main reason for this difference is that A1 is the only study that is not a meta-analysis; we included it because it is the largest independent study.
Our results indicate that the study of Moslehi et al. [14] identifies more common genes, which are also better ranked in our consensus strategy. Both meta-analysis studies (A2 and A3) show better agreement with the consensus than A4 and A5. A5 performs better than A4, and similarly to A2 and A3, probably because of the use of ComBat [48] and an increased number of arrays. A2 is also the study that carried out the largest microarray integration related to PE. The use of ComBat for cross-platform normalization has been favored in terms of agreement with clinical and biological meaning [42]. The difference in the percentage of genes shared with the consensus strategy (Fig. 5 Right) is very small when comparing A3 with A2 and A5. The A3 study also shares some similarities with A4 and A5 regarding the initial microarray data. However, A3 exclusively considers meta-analysis of microarrays of early-onset preeclampsia, which is a very important difference with respect to the other studies. We have shown that consensus prioritization actually improves early recognition of pathogenic genes, and we also know that genes involved in early-onset preeclampsia are probably closer to pathogenesis than those involved in late-onset preeclampsia. This could explain why A3 has the highest average score with our prioritized consensus list. It is therefore a logical result considering the previous analysis, and also an indirect validation of our consensus strategy. A5 and A4 consider microarrays similar to those of A3 but also include others that are not exclusively related to early-onset preeclampsia.
Regarding the differences between microarray studies, we should remember that all genes extracted from microarray data were significantly up- or down-regulated in the corresponding study. Moreover, we carried out a complete integration of the gene space across all microarray data considered, so for our purposes no gene was excluded for being part, or not, of a particular study. Therefore, this disparity between microarray studies could only affect the GeneAS_i scoring. The score should be interpreted as a compromise between agreement across methods and statistical significance. Even though it is reasonable to assume that a gene with simultaneously high agreement and high statistical significance could be very important (e.g. LEP, FLT1, INHA) (Fig. 6), the condition of high agreement with lower statistical significance is equally relevant (because it actually leads to the highest scores). In other words, higher statistical significance does not necessarily mean that a gene is more relevant to the disease than another gene with a smaller but still significant change.
We previously presented evidence indicating that consensus prioritization is capable of identifying genes with high pathogenic probability in the first portion of the data. This is clearly shown in Fig. 6 for VEGFA, AGTR1, F5 and TGFB1, which are well related to pathogenesis [49,50,51,52,53,54,55]; however, the score obtained from microarray data is relatively low in these cases (less than 0.5). Considering a high cutoff value (i.e. >0.7) we can identify LEP, FLT1, INHA, ENG, PAPPA2, and CRH. There is sufficient evidence to associate these genes with PE pathogenesis or clinical manifestation [14, 52, 53, 56,57,58,59,60]. Our calculations also indicate that communities 6 and 2 show the highest enrichment in genes coming from microarray data, confirming our previous results based on the network and consensus prioritization (Table 6). This consistency supports the notion that the prioritization strategy is actually pointing us in the correct direction and also justifies the idea that the associated pathways could be highly relevant.
Metabolic involvement
In all previous analyses, the VEGF signaling pathway was selected as the most relevant in PE. This pathway is presented (Fig. 6) with the genes VEGFA, VEGFB, VEGFC, FLT1, KDR, FLT4, PGF, NRP2 and NRP1. These genes are directly connected with arginine (NOS1, NOS2 and NOS3) and nitric oxide metabolism (NOSIP and HSP90AA1). Considering the previous results from the community and pathway enrichment analyses, we can conclude that these processes are likely the most significant for PE pathogenesis. Interestingly, the involvement of VEGF, FLT1 and several elements of arginine metabolism, including NO production, was proposed in [61] as the primary mechanism in the placenta leading to PE.
In the protein-protein interaction network analysis we can notice that genes like VEGFA, NOS3, SRC and AKT1 are highly connected (community 2 in Table 6 and Fig. 4), but in the integrated metabolic network (Model 1, Fig. 7) their connectivity is not as high (nor is it in the extended integrated metabolic network shown in Additional file 5). The reason for these differences is a direct consequence of the pathway representation in KEGG and Reactome. For example, VEGF mediates the ezrin/calpain/PI3K/Akt pathway-dependent stimulation of NOS3 phosphorylation leading to Ca2+-independent NO generation [62,63,64]. This connection is reflected in the PPI network as an edge between VEGFA and NOS3, or even between VEGFA and PIK3R1 (distant nodes in Fig. 6), and neither of these interactions appears in Fig. 7. This is why we should additionally use Model 3 for the integrated pathway (presented in Additional file 5) and consider the pathway in Fig. 7 as the simplest representation of the biological meaning of the genes involved in PE. From the integrated metabolic network (Fig. 6) we can clearly identify some well-known mechanisms relating VEGF and PE, together with other relevant effects that will be discussed further.
The increment in FLT1 production (observed in the microarray data) could lead to an increment in soluble Flt-1 sequestering extracellular VEGFA [65, 66]. Therefore, the increment in VEGFA expression in the placenta could be a compensatory response to restore normal angiogenesis [67]. This mechanism of interaction between soluble Flt-1 and VEGFA, as well as between soluble endoglin and TGFB1 (also present in community 2), has long been related to PE pathogenesis [66]. The increment in VEGFA could also be associated with an increment in HSP90, which also acts with SRC in NOS3 expression and NO production. As explained above, this can also be accomplished through PI3K and the involvement of AKT1. The expression of HSP90 may be controversial because it appears to be related to disease progression and also to placental location [68]. However, several authors have found an increment in the placental expression of HSP90 in PE [68,69,70] at both mRNA and protein levels. Moreover, this increment can be a protective reaction stimulated by HIF1A [71] and consequently connected to disease progression, as described in [68]. Indeed, there are differences in placental HIF1A expression between early-onset and late-onset PE [72], showing that the two stages have different hypoxia compensatory mechanisms. Interestingly, the HIF1A gene is part of our prioritized list (ranked 46th) and of the PPI network. HIF1A is not part of any community but is connected to several of them, especially community 2 (HIF1A is not connected with communities 1 and 7), and it is also missing from the integrated metabolic models. There are no studies of HSP90 (or the closely related HSP70) gene variations or promoter polymorphisms in PE that would allow us to know whether this protective role could be compromised, leading to early or late PE manifestation.
In the extended model (Model 3, Additional file 5) the VEGF pathway is connected with several members of the HLA family through CD247 and PAK2, and also to a variety of other pathways through ROCK2 and FGD3. Both connections can be related to apoptosis, trophoblastic involvement and endothelial cell organization [73, 74]. Even though PAK2 has been poorly studied in PE, we know that it is directly involved in gestational trophoblastic disease [45] and that endothelial PAK and/or CDC42 interact directly with KDR and are consequently essential for endothelial cell organization [74, 75].
The role of angiogenesis in PE pathogenesis is clearly revealed by our theoretical analysis as well as by the scientific literature. However, we should discuss other aspects related to pathogenesis, especially the renin-angiotensin pathway and the role of catechol-O-methyltransferase (COMT).
We can notice in our prioritization list that AGT and AGTR1 are the first two genes in our ranking (ACE and AGTR2 were also found at positions 7 and 19, respectively), but interestingly only AGTR1 was found down-regulated in our microarray data. The down-regulation of AGTR1 was only identified (considering our microarray data) in [6, 8]; however, other authors [76, 77] found an increment in AGTR1 expression. In any case, there is evidence that the renin-angiotensin system is modified in hypoxic conditions [77, 78], and it is well connected with HIF1A, discussed above. Regarding our list of pathogenic genes, we should note that the evidence for AGT derives from 1) the AT1-AA auto-antibody that interacts with AGTR1, or 2) polymorphisms in AGT that have been associated with an increased risk of preeclampsia [54, 79, 80]. The origin of AT1-AA in PE is largely unknown [50, 51, 81], but some authors have shown that it is related to B-cells and connected with IL10 during pregnancy, as well as with other cytokines (i.e. TNF) [82, 83]. Other authors indicate that an increment in CD4(+) T-cells and a decrement in regulatory T-cells stimulate TNF, IL6, endothelin (EDN1), IL-17 and B-cell production of AT1-AA [83,84,85]. Some of these genes are clearly involved in our networks and communities, especially EDN1 and IL6. We cannot clearly state that the renin-angiotensin pathway is not part of PE pathogenesis, but our network-based results reduce the importance of this pathway, suggesting that its presence at the top of the prioritized gene list is actually a consequence of the hypertensive effect rather than of PE pathogenesis.
The additional consideration of the compounds involved in the integrated metabolic network (Model 3) (Table 8) also leads us to interesting points. A deregulation of the ammonia and urea cycles as well as of phospholipid and bile acid metabolism has been reported previously in metabolomics analyses [86, 87]. We know that steroid hormones are related to the vascular endothelium; for instance, estradiol and progesterone/estradiol ratios are altered in the placenta of PE women, probably in relation to NO metabolism [88]. However, one of the most relevant results in Table 9 and in the expanded integrated metabolic network (Model 3) is the presence of catechol-O-methyltransferase (COMT) and 2-methoxyoestradiol. In our prioritized gene list (see Additional file 3) the COMT gene is ranked at position 47, and we know that pregnant mice with a deficiency in catechol-O-methyltransferase (and consequently no 2-methoxyoestradiol) develop a PE phenotype [89]. This animal model was not considered in our initial pathogenic data analysis, but it is clearly expressed in the integrated pathway and prioritization strategy. In Model 3 (Additional file 5), the COMT gene is only distantly connected with VEGF; however, it was recently shown that 2-methoxyestradiol has an anti-angiogenic effect connected to KDR and HIF1A, probably through a different mechanism not involving sFlt-1 [90].
Table 8 Compound list of metabolic species present in the expanded integrated metabolic network model
Table 9 Results of integrated metabolic pathways
Our results confirm that the consensus prioritization strategy leads us to genes with pathogenic involvement, at least in PE. Moreover, the introduction of network and enrichment analysis can narrow the metabolic and gene space, leading us toward reasonable conclusions in agreement with current scientific knowledge of the disease. However, the proposed strategies need to be further improved in several respects, for instance: a) the inclusion of prioritization algorithms based on learning strategies; b) the inclusion of other network processing methods to reduce gene loss; and c) the differentiation between early- and late-onset preeclampsia. Additionally, as previously stated, several genes relevant in our analysis have little or no information about their involvement in PE. Therefore, further experimental analysis will be needed to validate the participation of these genes in PE pathogenesis or clinical manifestation.
Of all the prioritization methods used in our work, MetaRanker gives the best results. However, our results confirm that a consensus strategy combining several prioritization tools improves the detection and initial enrichment of pathogenic genes, at least in preeclampsia.
Combining approximately the first percent of the prioritized genes with the protein-protein interaction network, followed by communality analysis, makes it possible to reduce the gene space and to group well-known genes related to pathogenesis. In this analysis, communities connected with the VEGF signaling pathway are highly enriched. This pathway is also enriched when considering the microarray data. Indeed, the pathway weighting strategy together with the network analysis agrees with the results obtained from the microarray data.
The integrated metabolic pathway clearly indicates the main routes involved in preeclampsia pathogenesis. Our results support previous publications indicating that the hypoxia and angiotensin pathways are secondary manifestations, possibly connected with disease progression or with the differentiation between early- and late-onset preeclampsia development. Our results point to VEGF, FLT1 and KDR as relevant pathogenic genes, as well as those connected with NO metabolism. However, other genes such as HSP90, PAK2, CD247 and others included in the first 1% of the prioritized list need to be further explored in preeclampsia pathogenesis through experimental approaches.
AUAC:
Area under the accumulation curve
BEDROC:
Boltzmann-enhanced discrimination of ROC
EF:
Enrichment factor
RIE:
Robust initial enhancement
ROC:
Receiver operating characteristic curve
Chaiworapongsa T, Chaemsaithong P, Yeo L, Romero R. Pre-eclampsia part 1: current understanding of its pathophysiology. Nat Rev Nephrol. 2014;10:466–80. Available from: http://www.ncbi.nlm.nih.gov/pubmed/25003615
Fisher SJ. Why is placentation abnormal in preeclampsia? Am J Obstet Gynecol. 2015;213:S115–22. Available from: http://www.ncbi.nlm.nih.gov/pubmed/26428489
Zhou Y, Damsky CH, Fisher SJ. Preeclampsia is associated with failure of human cytotrophoblasts to mimic a vascular adhesion phenotype. One cause of defective endovascular invasion in this syndrome? J Clin Invest. 1997;99:2152–64. Available from: http://www.ncbi.nlm.nih.gov/pubmed/9151787
Barrett T, Troup DB, Wilhite SE, Ledoux P, Evangelista C, Kim IF, et al. NCBI GEO: archive for functional genomics data sets--10 years on. Nucleic Acids Res. 2011;39:D1005–10. Available from: http://www.ncbi.nlm.nih.gov/pubmed/21097893
Parkinson H, Sarkans U, Kolesnikov N, Abeygunawardena N, Burdett T, Dylag M, et al. ArrayExpress update--an archive of microarray and high-throughput sequencing-based functional genomics experiments. Nucleic Acids Res. 2011;39:D1002–4. Available from: http://www.ncbi.nlm.nih.gov/pubmed/21071405
van Uitert M, Moerland PD, Enquobahrie DA, Laivuori H, van der Post JAM, Ris-Stalpers C, et al. Meta-analysis of placental Transcriptome data identifies a novel molecular pathway related to preeclampsia. PLoS One. 2015;10:e0132468. Available from: http://www.ncbi.nlm.nih.gov/pubmed/26171964
Yong HEJ, Melton PE, Johnson MP, Freed KA, Kalionis B, Murthi P, et al. Genome-wide transcriptome directed pathway analysis of maternal pre-eclampsia susceptibility genes. PLoS One. 2015;10:e0128230. Available from: http://www.ncbi.nlm.nih.gov/pubmed/26010865
Leavey K, Bainbridge SA, Cox BJ. Large scale aggregate microarray analysis reveals three distinct molecular subclasses of human preeclampsia. PLoS One. 2015;10:e0116508. Available from: http://www.ncbi.nlm.nih.gov/pubmed/25679511
Rabaglino MB, Post Uiterweer ED, Jeyabalan A, Hogge WA, Conrad KP. Bioinformatics approach reveals evidence for impaired endometrial maturation before and during early pregnancy in women who developed preeclampsia. Hypertension. 2015;65:421–9. Available from: http://www.ncbi.nlm.nih.gov/pubmed/25421975
Tejera E, Bernardes J, Rebelo I. Co-expression network analysis and genetic algorithms for gene prioritization in preeclampsia. BMC Med Genet. 2013;6:51. Available from: http://www.ncbi.nlm.nih.gov/pubmed/24219996
Tejera E, Bernardes J, Rebelo I. Preeclampsia: a bioinformatics approach through protein-protein interaction networks analysis. BMC Syst Biol. 2012;6:97. Available from: http://www.ncbi.nlm.nih.gov/pubmed/22873350
Song Y, Liu J, Huang S, Zhang L. Analysis of differentially expressed genes in placental tissues of preeclampsia patients using microarray combined with the connectivity map database. Placenta. 2013;34:1190–5. Available from: http://www.ncbi.nlm.nih.gov/pubmed/24125805
Vaiman D, Calicchio R, Miralles F. Landscape of transcriptional deregulations in the preeclamptic placenta. PLoS One. 2013;8:e65498. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23785430
Moslehi R, Mills JL, Signore C, Kumar A, Ambroggio X, Dzutsev A. Integrative transcriptome analysis reveals dysregulation of canonical cancer molecular pathways in placenta leading to preeclampsia. Sci Rep. 2013;3:2407. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23989136
Börnigen D, Tranchevent L-C, Bonachela-Capdevila F, Devriendt K, De Moor B, De Causmaecker P, et al. An unbiased evaluation of gene prioritization tools. Bioinformatics. 2012;28:3081–8. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23047555
Tranchevent L-C, Capdevila FB, Nitsch D, De Moor B, De Causmaecker P, Moreau Y. A guide to web tools to prioritize candidate genes. Brief Bioinform. 2011;12:22–32. Available from: http://www.ncbi.nlm.nih.gov/pubmed/21278374
Liekens AML, De Knijf J, Daelemans W, Goethals B, De Rijk P, Del-Favero J. BioGraph: unsupervised biomedical knowledge discovery via automated hypothesis generation. Genome Biol. 2011;12:R57. BioMed Central, Available from: http://www.ncbi.nlm.nih.gov/pubmed/21696594
Hutz JE, Kraja AT, McLeod HL, Province MA. CANDID: a flexible method for prioritizing candidate genes for complex human traits. Genet Epidemiol. 2008;32:779–90. Available from: http://www.ncbi.nlm.nih.gov/pubmed/18613097
Jourquin J, Duncan D, Shi Z, Zhang B. GLAD4U: deriving and prioritizing gene lists from PubMed literature. BMC Genomics. 2012;13(Suppl 8):S20. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23282288
Cheng D, Knox C, Young N, Stothard P, Damaraju S, Wishart DS. PolySearch: a web-based text mining system for extracting relationships between human diseases, genes, mutations, drugs and metabolites. Nucleic Acids Res. 2008;36:W399–405. Oxford University Press, Available from: http://www.ncbi.nlm.nih.gov/pubmed/18487273
Wu X, Jiang R, Zhang MQ, Li S. Network-based global inference of human disease genes. Mol Syst Biol. 2008;4:189. European Molecular Biology Organization, Available from: http://www.ncbi.nlm.nih.gov/pubmed/18463613
Guney E, Garcia-Garcia J, Oliva B. GUILDify: a web server for phenotypic characterization of genes through biological data integration and network-based prioritization algorithms. Bioinformatics. 2014;30:1789–90. Available from: http://www.ncbi.nlm.nih.gov/pubmed/24532728
Piñero J, Queralt-Rosinach N, Bravo À, Deu-Pons J, Bauer-Mehren A, Baron M, et al. DisGeNET: a discovery platform for the dynamical exploration of human diseases and their genes. Database (Oxford). 2015;2015:bav028. Oxford University Press, Available from: http://www.ncbi.nlm.nih.gov/pubmed/25877637
Yu W, Wulf A, Liu T, Khoury MJ, Gwinn M, Rebbeck T, et al. Gene prospector: an evidence gateway for evaluating potential susceptibility genes and interacting risk factors for human diseases. BMC Bioinformatics. 2008;9:528. BioMed Central, Available from: http://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-9-528
Fontaine J-F, Priller F, Barbosa-Silva A, Andrade-Navarro MA. Génie: literature-based gene prioritization at multi genomic scale. Nucleic Acids Res. 2011;39:W455–61. Oxford University Press, Available from: http://www.ncbi.nlm.nih.gov/pubmed/21609954
Yue P, Melamud E, Moult J. SNPs3D: candidate gene and SNP selection for association studies. BMC Bioinformatics. 2006;7:166. Available from: http://www.ncbi.nlm.nih.gov/pubmed/16551372
Seelow D, Schwarz JM, Schuelke M. GeneDistiller--distilling candidate genes from linkage intervals. PLoS One. 2008;3:e3874. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19057649
Pers TH, Dworzyński P, Thomas CE, Lage K, Brunak S. MetaRanker 2.0: a web server for prioritization of genetic variation data. Nucleic Acids Res. 2013;41:W104–8. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23703204
Gonzalez GH, Tahsin T, Goodale BC, Greene AC, Greene CS. Recent advances and emerging applications in text and data Mining for Biomedical Discovery. Brief Bioinform. 2016;17:33–42. Available from: http://www.ncbi.nlm.nih.gov/pubmed/26420781
Helguera AM, Perez-Castillo Y, Cordeiro MN DS, Tejera E, Paz-Y-Miño C, Sánchez-Rodríguez A, et al. Ligand-based virtual screening using tailored ensembles: a prioritization tool for dual A2AAdenosine receptor antagonists / monoamine Oxidase B inhibitors. Curr Pharm Des. 2016;22:3082–96. Available from: http://www.ncbi.nlm.nih.gov/pubmed/26932160
Perez-Castillo Y, Helguera AM, Cordeiro MNDS, Tejera E, Paz-Y-Miño C, Sánchez-Rodríguez A, et al. Fusing docking scoring functions improves the virtual screening performance for discovering Parkinson's disease dual target Ligands. Curr Neuropharmacol. 2017 [cited 2017 Mar 29]; Available from: http://www.ncbi.nlm.nih.gov/pubmed/28067172.
Truchon J-F, Bayly CI. Evaluating virtual screening methods: good and bad metrics for the "early recognition" problem. J Chem Inf Model. 2007;47:488–508. Available from: http://www.ncbi.nlm.nih.gov/pubmed/17288412
Cruz-Monteagudo M, Borges F, Paz-y-Miño C, Cordeiro MNDS, Rebelo I, Perez-Castillo Y, et al. Efficient and biologically relevant consensus strategy for Parkinson's disease gene prioritization. BMC Med Genet. 2016;9:12. BioMed Central, Available from: http://www.biomedcentral.com/1755-8794/9/12
Mackey MD, Melville JL. Better than random? The chemotype enrichment problem. J Chem Inf Model. 2009;49:1154–62. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19397275
Huang DW, Sherman BT, Lempicki RA. Systematic and integrative analysis of large gene lists using DAVID bioinformatics resources. Nat Protoc. 2009;4:44–57. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19131956
Huang DW, Sherman BT, Lempicki RA. Bioinformatics enrichment tools: paths toward the comprehensive functional analysis of large gene lists. Nucleic Acids Res. 2009;37:1–13. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19033363
Supek F, Bošnjak M, Škunca N, Šmuc T. REVIGO summarizes and visualizes long lists of gene ontology terms. PLoS One. 2011;6:e21800. Available from: http://www.ncbi.nlm.nih.gov/pubmed/21789182
Antonov AV, Schmidt EE, Dietmann S, Krestyaninova M, Hermjakob H. R spider: a network-based analysis of gene lists by combining signaling and metabolic pathways from Reactome and KEGG databases. Nucleic Acids Res. 2010;38:W78–83. Available from: http://www.ncbi.nlm.nih.gov/pubmed/20519200
Szklarczyk D, Franceschini A, Wyder S, Forslund K, Heller D, Huerta-Cepas J, et al. STRING v10: protein-protein interaction networks, integrated over the tree of life. Nucleic Acids Res. 2015;43:D447–52. Available from: http://www.ncbi.nlm.nih.gov/pubmed/25352553
Shannon P, Markiel A, Ozier O, Baliga NS, Wang JT, Ramage D, et al. Cytoscape: a software environment for integrated models of biomolecular interaction networks. Genome Res. 2003;13:2498–504. Available from: http://www.ncbi.nlm.nih.gov/pubmed/14597658
Palla G, Derényi I, Farkas I, Vicsek T. Uncovering the overlapping community structure of complex networks in nature and society. Nature. 2005;435:814–8. Available from: http://www.ncbi.nlm.nih.gov/pubmed/15944704
Walsh CJ, Hu P, Batt J, Dos Santos CC. Microarray meta-analysis and cross-platform normalization: integrative genomics for robust biomarker discovery. Microarrays (Basel, Switzerland). 2015;4:389–406. Multidisciplinary Digital Publishing Institute (MDPI), Available from: http://www.ncbi.nlm.nih.gov/pubmed/27600230
Cox B. Bioinformatic approach to the genetics of preeclampsia. Obstet Gynecol. 2014;124:633. Available from: http://www.ncbi.nlm.nih.gov/pubmed/25162267
Jia R, Li J, Rui C, Ji H, Ding H, Lu Y, et al. Comparative proteomic profile of the human umbilical cord blood Exosomes between normal and preeclampsia pregnancies with high-resolution mass spectrometry. Cell Physiol Biochem. 2015;36:2299–306. Available from: http://www.ncbi.nlm.nih.gov/pubmed/26279434
Tejera E, Bernardes J, Rebelo I. Preeclampsia: a bioinformatics approach through protein-protein interaction networks analysis. BMC Syst Biol. 2012;6:97.
Khangura RK, Khangura CK, Desai A, Goyert G, Sangha R. Metastatic colorectal cancer resembling severe preeclampsia in pregnancy. Case Rep Obstet Gynecol. 2015;2015:487824. Available from: http://www.ncbi.nlm.nih.gov/pubmed/26770850
Romero R, Grivel J-C, Tarca AL, Chaemsaithong P, Xu Z, Fitzgerald W, et al. Evidence of perturbations of the cytokine network in preterm labor. Am J Obstet Gynecol. 2015;213:836.e1. Available from: http://www.ncbi.nlm.nih.gov/pubmed/26232508
Johnson WE, Li C, Rabinovic A. Adjusting batch effects in microarray expression data using empirical Bayes methods. Biostatistics. 2007;8:118–27. Available from: http://www.ncbi.nlm.nih.gov/pubmed/16632515
Iriyama T, Wang W, Parchim NF, Song A, Blackwell SC, Sibai BM, et al. Hypoxia-independent upregulation of placental hypoxia inducible factor-1α gene expression contributes to the pathogenesis of preeclampsia. Hypertension. 2015;65:1307–15. Available from: http://www.ncbi.nlm.nih.gov/pubmed/25847948
Xia Y, Kellems RE. Angiotensin receptor agonistic autoantibodies and hypertension: preeclampsia and beyond. Circ Res. 2013;113:78–87. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23788505
Parrish MR, Murphy SR, Rutland S, Wallace K, Wenzel K, Wallukat G, et al. The effect of immune factors, tumor necrosis factor-alpha, and agonistic autoantibodies to the angiotensin II type I receptor on soluble fms-like tyrosine-1 and soluble endoglin production in response to hypertension during pregnancy. Am J Hypertens. 2010;23:911–6. Available from: http://www.ncbi.nlm.nih.gov/pubmed/20431529
Maynard SE, Min J-Y, Merchan J, Lim K-H, Li J, Mondal S, et al. Excess placental soluble fms-like tyrosine kinase 1 (sFlt1) may contribute to endothelial dysfunction, hypertension, and proteinuria in preeclampsia. J Clin Invest. 2003;111:649–58. Available from: http://www.ncbi.nlm.nih.gov/pubmed/12618519
Venkatesha S, Toporsian M, Lam C, Hanai J, Mammoto T, Kim YM, et al. Soluble endoglin contributes to the pathogenesis of preeclampsia. Nat Med. 2006;12:642–9. Available from: http://www.ncbi.nlm.nih.gov/pubmed/16751767
Staines-Urias E, Paez MC, Doyle P, Dudbridge F, Serrano NC, Ioannidis JPA, et al. Genetic association studies in pre-eclampsia: systematic meta-analyses and field synopsis. Int J Epidemiol. 2012;41:1764–75. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23132613
Li X, Shen L, Tan H. Polymorphisms and plasma level of transforming growth factor-Beta 1 and risk for preeclampsia: a systematic review. PLoS One. 2014;9:e97230. Available from: http://www.ncbi.nlm.nih.gov/pubmed/24823830
Macintire K, Tuohey L, Ye L, Palmer K, Gantier M, Tong S, et al. PAPPA2 is increased in severe early onset pre-eclampsia and upregulated with hypoxia. Reprod Fertil Dev. 2014;26:351–7. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23484525
Wagner PK, Otomo A, Christians JK. Regulation of pregnancy-associated plasma protein A2 (PAPPA2) in a human placental trophoblast cell line (BeWo). Reprod Biol Endocrinol. 2011;9:48. Available from: http://www.ncbi.nlm.nih.gov/pubmed/21496272
Fong FM, Sahemey MK, Hamedi G, Eyitayo R, Yates D, Kuan V, et al. Maternal genotype and severe preeclampsia: a HuGE review. Am J Epidemiol. 2014;180:335–45. Available from: http://www.ncbi.nlm.nih.gov/pubmed/25028703
Nezi M, Mastorakos G, Mouslech Z. Corticotropin releasing hormone and the immune/inflammatory response [internet]. Endotext. 2000. Available from: http://www.ncbi.nlm.nih.gov/pubmed/25905246.
Song J, Li Y, An RF. Identification of early-onset preeclampsia-related genes and MicroRNAs by bioinformatics approaches. Reprod Sci. 2015;22:954–63. Available from: http://www.ncbi.nlm.nih.gov/pubmed/25717061
Noris M, Perico N, Remuzzi G. Mechanisms of disease: pre-eclampsia. Nat Clin Pract Nephrol. 2005;1:98–114. Nature Publishing Group, Available from: http://www.nature.com/doifinder/10.1038/ncpneph0035
Dimmeler S, Fleming I, Fisslthaler B, Hermann C, Busse R, Zeiher AM. Activation of nitric oxide synthase in endothelial cells by Akt-dependent phosphorylation. Nature. 1999;399:601–5. Available from: http://www.ncbi.nlm.nih.gov/pubmed/10376603.
Cindrova-Davies T, Sanders DA, Burton GJ, Charnock-Jones DS. Soluble FLT1 sensitizes endothelial cells to inflammatory cytokines by antagonizing VEGF receptor-mediated signalling. Cardiovasc Res. 2011;89:671–9. Available from: http://www.ncbi.nlm.nih.gov/pubmed/21139021.
Nagai A, Sado T, Naruse K, Noguchi T, Haruta S, Yoshida S, et al. Antiangiogenic-induced hypertension: the molecular basis of signaling network. Gynecol Obstet Investig. 2012;73:89–98. Available from: http://www.ncbi.nlm.nih.gov/pubmed/22222493.
Chappell JC, Taylor SM, Ferrara N, Bautch VL. Local guidance of emerging vessel sprouts requires soluble Flt-1. Dev Cell. 2009;17:377–86. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19758562
Powe CE, Levine RJ, Karumanchi SA. Preeclampsia, a disease of the maternal endothelium: the role of antiangiogenic factors and implications for later cardiovascular disease. Circulation. 2011;123:2856–69. Available from: http://www.ncbi.nlm.nih.gov/pubmed/21690502
Sundrani DP, Reddy US, Joshi AA, Mehendale SS, Chavan-Gautam PM, Hardikar AA, et al. Differential placental methylation and expression of VEGF, FLT-1 and KDR genes in human term and preterm preeclampsia. Clin. Epigenetics. BioMed Central. 2013;5:6. Available from: http://clinicalepigeneticsjournal.biomedcentral.com/articles/10.1186/1868-7083-5-6
Hromadnikova I, Dvorakova L, Kotlabova K, Kestlerova A, Hympanova L, Novotna V, et al. Assessment of placental and maternal stress responses in patients with pregnancy related complications via monitoring of heat shock protein mRNA levels. Mol Biol Rep. 2015;42:625–37. Available from: http://link.springer.com/10.1007/s11033-014-3808-z
Shu C, Liu Z, Cui L, Wei C, Wang S, Tang JJ, et al. Protein profiling of preeclampsia placental tissues. Buratti E, editor. PLoS One. 2014;9:e112890. Public Library of Science, Available from: http://dx.plos.org/10.1371/journal.pone.0112890.
Padmini E, Venkatraman U, Srinivasan L. Mechanism of JNK signal regulation by placental HSP70 and HSP90 in endothelial cell during preeclampsia. Toxicol Mech Methods. 2012;22:367–74. Available from: http://www.tandfonline.com/doi/full/10.3109/15376516.2012.673091.
Padmini E, Uthra V, Lavanya S. Effect of HSP70 and 90 in modulation of JNK, ERK expression in Preeclamptic placental endothelial cell. Cell Biochem Biophys. 2012;64:187–95. Available from: http://link.springer.com/10.1007/s12013-012-9371-0.
Khodzhaeva ZS, Kogan YA, Shmakov RG, Klimenchenko NI, Akatyeva AS, Vavina OV, et al. Clinical and pathogenetic features of early- and late-onset pre-eclampsia. J Matern Neonatal Med. 2015;2015:1–7. Available from: http://www.tandfonline.com/doi/full/10.3109/14767058.2015.1111332.
Siu MKY, Yeung MCW, Zhang H, Kong DSH, Ho JWK, Ngan HYS, et al. p21-activated kinase-1 promotes aggressive phenotype, cell proliferation, and invasion in gestational trophoblastic disease. Am J Pathol. 2010;176:3015–22. Available from: http://www.ncbi.nlm.nih.gov/pubmed/20413688.
Barry DM, Xu K, Meadows SM, Zheng Y, Norden PR, Davis GE, et al. Cdc42 is required for cytoskeletal support of endothelial cell adhesion during blood vessel formation in mice. Development. 2015;142:3058–70. Available from: http://www.ncbi.nlm.nih.gov/pubmed/26253403.
Dubrac A, Genet G, Ola R, Zhang F, Pibouin-Fragner L, Han J, et al. Targeting NCK-mediated endothelial cell front-rear polarity inhibits neovascularization. Circulation. 2016;133:409–21. Available from: http://circ.ahajournals.org/lookup/doi/10.1161/CIRCULATIONAHA.115.017537.
Mistry HD, Kurlak LO, Broughton Pipkin F. The placental renin-angiotensin system and oxidative stress in pre-eclampsia. Placenta. 2013;34:182–6. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23246097.
Kurlak LO, Williams PJ, Bulmer JN, Broughton Pipkin F, Mistry HD. Placental expression of adenosine A2A receptor and hypoxia inducible factor-1 alpha in early pregnancy, term and pre-eclamptic pregnancies: interactions with placental renin-angiotensin system. Placenta. 2015;36:611–3. Available from: http://linkinghub.elsevier.com/retrieve/pii/S0143400415008103.
Kurlak LO, Mistry HD, Cindrova-Davies T, Burton GJ, Broughton Pipkin F. Human placental renin-angiotensin system in normotensive and pre-eclamptic pregnancies at high altitude and after acute hypoxia-reoxygenation insult. J Physiol. 2016;594:1327–40. Available from: http://www.ncbi.nlm.nih.gov/pubmed/26574162.
Ni S, Zhang Y, Deng Y, Gong Y, Huang J, Bai Y, et al. AGT M235T polymorphism contributes to risk of preeclampsia: evidence from a meta-analysis. J Renin-Angiotensin-Aldosterone Syst. 2012;13:379–86. Available from: http://www.ncbi.nlm.nih.gov/pubmed/22513276.
Zhao L, Dewan AT, Bracken MB. Association of maternal AGTR1 polymorphisms and preeclampsia: a systematic review and meta-analysis. J Matern Fetal Neonatal Med. 2012;25:2676–80. Available from: http://www.ncbi.nlm.nih.gov/pubmed/22758920.
Dechend R, Gratze P, Wallukat G, Shagdarsuren E, Plehm R, Bräsen J-H, et al. Agonistic autoantibodies to the AT1 receptor in a transgenic rat model of preeclampsia. Hypertension. 2005;45:742–6. Available from: http://www.ncbi.nlm.nih.gov/pubmed/15699466.
Fettke F, Schumacher A, Costa S-D, Zenclussen AC. B cells: the old new players in reproductive immunology. Front Immunol. 2014;5:285. Available from: http://www.ncbi.nlm.nih.gov/pubmed/25002862.
Spradley FT, Palei AC, Granger JP. Immune mechanisms linking obesity and preeclampsia. Biomol Ther. 2015;5:3142–76. Available from: http://www.ncbi.nlm.nih.gov/pubmed/26569331.
Harmon AC, Cornelius DC, Amaral LM, Faulkner JL, Cunningham MW, Wallace K, et al. The role of inflammation in the pathology of preeclampsia. Clin Sci (Lond). 2016;130:409–19. Portland Press Limited, Available from: http://www.ncbi.nlm.nih.gov/pubmed/26846579.
Dhillion P, Wallace K, Herse F, Scott J, Wallukat G, Heath J, et al. IL-17-mediated oxidative stress is an important stimulator of AT1-AA and hypertension during pregnancy. Am J Phys Regul Integr Comp Phys. 2012;303:R353–8. Available from: http://www.ncbi.nlm.nih.gov/pubmed/22718806.
Austdal M, Thomsen LCV, Tangerås LH, Skei B, Mathew S, Bjørge L, et al. Metabolic profiles of placenta in preeclampsia using HR-MAS MRS metabolomics. Placenta. 2015;36:1455–62. Available from: http://www.ncbi.nlm.nih.gov/pubmed/26582504.
Bahado-Singh RO, Syngelaki A, Akolekar R, Mandal R, Bjondahl TC, Han B, et al. Validation of metabolomic models for prediction of early-onset preeclampsia. Am J Obstet Gynecol. 2015;213:530.e1–530.e10. Available from: http://www.ncbi.nlm.nih.gov/pubmed/26116099.
Zheng J-J, Wang H-O, Huang M, Zheng F-Y. Assessment of ADMA, estradiol, and progesterone in severe preeclampsia. Clin Exp Hypertens. 2016;38:347–51. Available from: http://www.ncbi.nlm.nih.gov/pubmed/27152507.
Kanasaki K, Palmsten K, Sugimoto H, Ahmad S, Hamano Y, Xie L, et al. Deficiency in catechol-O-methyltransferase and 2-methoxyoestradiol is associated with pre-eclampsia. Nature. 2008;453:1117–21. Available from: http://www.ncbi.nlm.nih.gov/pubmed/18469803.
Lee DK, Nevo O. 2-Methoxyestradiol regulates VEGFR-2 and sFlt-1 expression in human placenta. Placenta. 2015;36:125–30. Available from: http://www.ncbi.nlm.nih.gov/pubmed/25499009.
This project was partially supported by Foundation for Science and Technology (FCT) and FEDER/COMPETE (Grants UID/QUI/00081/2013, POCI-01-0145-FEDER-006980, and NORTE-01-0145-FEDER-000028). The authors also thank the COST action CA15135 (Multi-Target Paradigm for Innovative Ligand Identification in the Drug Discovery Process, MuTaLig) for support. MC-M (Grant SFRH/BPD/90673/2012) was also supported by FCT and FEDER/COMPETE funds.
All data generated or analyzed during this study are included in this published article and its supplementary information files.
Facultad de Medicina, Universidad de Las Américas, Av. de los Granados E12-41y Colimes esq, EC170125, Quito, Ecuador
Eduardo Tejera, Germán Burgos & María-Eugenia Sánchez
Department of Molecular and Cellular Pharmacology, Miller School of Medicine and Center for Computational Science, University of Miami, FL 33136, Miami, USA
Maykel Cruz-Monteagudo
Department of General Education, West Coast University—Miami Campus, Doral, FL 33178, USA
Departamento de Ciencias Naturales, Universidad Técnica Particular de Loja, Calle París S/N, EC1101608, Loja, Ecuador
Aminael Sánchez-Rodríguez
Escuela de Ciencias Físicas y Matemáticas, Universidad de Las Américas, Quito, Ecuador
Yunierkis Pérez-Castillo
CIQUP/Departamento de Quimica e Bioquimica, Faculdade de Ciências, Universidade do Porto, 4169-007, Porto, Portugal
Maykel Cruz-Monteagudo & Fernanda Borges
REQUIMTE, Department of Chemistry and Biochemistry, Faculty of Sciences, University of Porto, 4169-007, Porto, Portugal
Maykel Cruz-Monteagudo & Maria Natália Dias Soeiro Cordeiro
Centro de Investigaciones genética y genómica, Facultad de Ciencias de la Salud, Universidad Tecnológica Equinoccial, Quito, Ecuador
César Paz-y-Miño
Faculty of Pharmacy, University of Porto, Porto, Portugal
Irene Rebelo
UCIBIO@REQUIMTE, Caparica, Portugal
Eduardo Tejera
Germán Burgos
María-Eugenia Sánchez
Fernanda Borges
Maria Natália Dias Soeiro Cordeiro
ET, MC, AS and YPC were responsible for methodology and algorithm development. Specifically, ET and MC worked on the consensus strategy and the metabolic network, while YPC and AS worked on the protein-protein interaction network. GB and ME were involved in collecting literature data on confirmed gene-disease associations as well as in microarray data integration. FB, NC and CP were involved in gene-disease validation analysis and in the discussion of results. IR was involved in the metabolic analysis and in the integration of the genetic results with their possible interpretation in preeclampsia. All authors were involved in manuscript corrections. All authors have read and approved the manuscript for publication.
Correspondence to Eduardo Tejera.
Identification of pathogenic genes. The file comprises the literature and several observations considered for the selection of our pathogenic gene list. (DOCX 86 kb)
Microarrays consensus. The file comprises all information concerning the microarray data as well as the integration. (XLSX 141 kb)
Prioritized genes. The file comprises our final prioritized genes as well as the consensus score. (XLSX 28 kb)
Enrichment analysis. The file comprises all the enrichment analysis: gene ontology and metabolic pathways. (XLSX 292 kb)
Integrated metabolic network. The file comprises the Integrated Metabolic Network corresponding with Model 3 of Table 8 as well as the list of all compounds contained in the metabolic network. (DOCX 1116 kb)
Tejera, E., Cruz-Monteagudo, M., Burgos, G. et al. Consensus strategy in genes prioritization and combined bioinformatics analysis for preeclampsia pathogenesis. BMC Med Genomics 10, 50 (2017). https://doi.org/10.1186/s12920-017-0286-x
Consensus analysis
Gene prioritization
Communality analysis
Early recognition
Selection of cortical dynamics for motor behaviour by the basal ganglia
Francesco Mannella (ORCID: orcid.org/0000-0002-7308-0844)1 &
Gianluca Baldassarre1
Biological Cybernetics, volume 109, pages 575–595 (2015)
The basal ganglia and cortex are strongly implicated in the control of motor preparation and execution. Re-entrant loops between these two brain areas are thought to determine the selection of motor repertoires for instrumental action. The nature of neural encoding and processing in the motor cortex as well as the way in which selection by the basal ganglia acts on them is currently debated. The classic view of the motor cortex implementing a direct mapping of information from perception to muscular responses is challenged by proposals viewing it as a set of dynamical systems controlling muscles. Consequently, the common idea that a competition between relatively segregated cortico-striato-nigro-thalamo-cortical channels selects patterns of activity in the motor cortex is no longer sufficient to explain how action selection works. Here, we contribute to develop the dynamical view of the basal ganglia–cortical system by proposing a computational model in which a thalamo-cortical dynamical neural reservoir is modulated by disinhibitory selection of the basal ganglia guided by top-down information, so that it responds with different dynamics to the same bottom-up input. The model shows how different motor trajectories can thus be produced by controlling the same set of joint actuators. Furthermore, the model shows how the basal ganglia might modulate cortical dynamics by preserving coarse-grained spatiotemporal information throughout cortico-cortical pathways.
Preparation and execution of intentional movements requires the activity of the motor cortex. This cortical region forms re-entrant parallel loops with both the dorsolateral basal ganglia and the cerebellum (Middleton and Strick 2000; Caligiore et al. 2013). In particular, the interaction between the motor cortex and the basal ganglia seems to be organized in relatively segregated cortico-striato-nigro-thalamo-cortical (CSNTC) loops (Alexander et al. 1986; Haber 2003; Romanelli et al. 2005). Various computational approaches have been attempted to explain these loops as implementing motor sequence processing (Beiser and Houk 1998; Berns and Sejnowski 1998), or dimensionality reduction (Bar-Gad et al. 2003). One of the most accredited hypotheses to date is that they implement action selection (Mink 1996; Redgrave et al. 1999; Gurney et al. 2001).
There are two main issues in trying to explain how the motor basal ganglia–cortical loops work. A first issue concerns the nature of the neural encoding used by the motor cortex. This cortical region reaches both the brainstem motor centres and, more directly, the spinal motor neurons projecting to the muscles (Orlovsky et al. 1999; Ijspeert et al. 2007). Thus, the same cortical areas control muscles and subcortical motor centres encoding sophisticated motor patterns (Ijspeert 2008; Ciancio et al. 2013). Over the last two decades, various hypotheses about the representation of movements within the motor cortex have been proposed. A large number of studies have interpreted data, mainly from single-cell electrophysiology, as evidence that the motor cortex implements a topological map of the body in which the activity of single cells can be directly related to the resulting forces acting on the muscles (Evarts 1968; Georgopoulos et al. 1982; Sergio et al. 2005). In this vein, several studies have also related the behaviour of distinct motor populations to the control of parameters such as rotation, speed, or direction of movements (among others: Buys et al. 1996; Georgopoulos et al. 1986; Kakei et al. 1999; Wang et al. 2010). On the other hand, several findings indicate that individual neurons in the motor cortex project directly onto wide sets of muscles (Cheney and Fetz 1985), and that the activity of single cortical motor neurons is correlated with complex movements (grasping, reaching, climbing, chewing, etc.; Luppino and Rizzolatti 2000; Graziano and Aflalo 2007). These studies have opened up a new computational interpretation of the motor cortex as forming a set of dynamical systems with time variability and oscillations not directly encoding movement patterns (Churchland et al. 2010; Afshar et al. 2011; Churchland et al. 2012; Mattia et al. 2013).
A second issue, related to the nature of cortical encoding, regards the mechanisms through which the basal ganglia modulate cortical activity in order to select motor plans. A current view is that selection between different channels within CSNTC loops determines which cortical pattern or assembly of neurons will be disinhibited at the level of the cortico-thalamic motor loops (Redgrave et al. 1999; Gurney et al. 2001). In this view, each pattern encoded in an assembly of cortical neurons expresses a distinct motor programme. A channel can release from inhibition a distinct cortical pattern (among others Wickens et al. 1994; Graybiel 1998; Ponzi and Wickens 2010). This general idea of selection as a differential disinhibition of separated cortical modules has also been extended to explain the interaction between cortex and basal ganglia in cognitive tasks (for instance in the "Prefrontal cortex basal ganglia working memory" PBWM model by Frank et al. 2001; O'Reilly and Frank 2006). All these proposals, while focussing on the selection mechanisms, do not explain how the selected cortical assemblies control the execution of motor programs or cognitive processes.
Here we present a hypothesis reconciling the dynamical nature of cortical encoding with the idea that basal ganglia selection gates thalamo-cortical loops. We propose that selection does not (or not only) choose between different cortical assemblies, but rather between different activity dynamics within the same populations. More in detail, our proposal distinguishes two different processes. The first process consists in the selection of a distinct set of dynamics within a cortical module based on the accumulation of coarse-grained spatiotemporal information at the level of the basal ganglia. The second process regards the interaction between these cortical dynamics and those of other cortical and subcortical areas to gain top-down and bottom-up information. We will show a neural network model implementing this proposal. The model is formed by a dynamical reservoir reproducing the dynamics of a cortical module interacting with the selection mechanism of the basal ganglia, which is implemented similarly to what is done in Gurney et al. (2001). The model explains how a neural population in the motor cortex can be recruited to generate different movements with the same motor actuators. Furthermore, it shows how the proposed mechanisms can handle cyclic (rhythmic) and end-point (discrete) movements (e.g. a "scratching" movement or a "reaching" movement). In the following, Sect. 2 illustrates the model: in particular, Sect. 2.1 illustrates how we used reservoir computing to model cortical dynamics, Sect. 2.2 gives a rationale of how selection processes in the basal ganglia are modelled here, and Sect. 2.3 describes our computational hypothesis on how the basal ganglia select different dynamics within the cortex. Section 3 describes the neural architecture of a core module built to explain the computational hypothesis and a system-level architecture formed by two such core modules to explain the emerging properties of their interaction. Section 4 describes the implementation details of the core module as well as further details of the implementation of the system-level architecture. Section 5 describes the behaviour of the core module used as the controller of a three-degree-of-freedom (DoF) 2D simulated kinematic arm and a 20-DoF simulated dynamic hand, in tasks requiring the selection and expression of different cyclic or end-point motor behaviours. This section also describes the behaviour of the system-level architecture controlling the same 2D three-DoF simulated kinematic arm in the same tasks. In this case lesions to different parts of the system were exploited for the analysis of its emerging properties.
Selection of cortical dynamics
Cortical reservoirs
Recently, various works have highlighted that reservoir computing can be a candidate computational approach to describe the nature of cortical encoding (Wang 2008; Rigotti et al. 2010; Dominey 2013; Hoerzer et al. 2014). In particular, reservoir networks fulfil two important requirements for modelling the cortex. First, they are complex distributed dynamical systems with the capacity to deal with the time course of sensorimotor and cognitive processes. Second, they have a uniform microstructure, with internal connections randomly generated with parametrized procedures. The fundamental principles of reservoir computing were introduced in parallel under the notions of liquid state machines (LSM; Maass et al. 2002) and echo state networks (ESN; Jaeger 2002). The idea was anticipated in a work by Dominey (1995) in which the author presented a computational model of cortical sensorimotor sequence learning to control the oculomotor system. LSM and ESN approaches mainly differ in the level of abstraction of the neural units they use. LSM models are usually composed of units that reproduce real neurons at the level of their spiking activity. ESN models are built on the basis of more abstract discrete or leaky-integrator sigmoidal units, leading to dynamical systems which are easier to analyze. We implemented the cortical module of the model presented here as an ESN (see Sect. 4.1). For an extensive review on reservoir computing, see Lukoševičius and Jaeger (2009).
a General schema of the functioning of a dynamical reservoir. The units in the reservoir produce nonlinear dynamics which are temporal functions of the input signals. Weights to the read-out unit are modified to obtain a desired temporal function of the network activity. b An example of the internal dynamics of an echo state network: on the top a simple sinusoidal function as the input signal; on the bottom the resulting activities of a sample of units. It can be seen that the activity fades to zero after transient activity when the input signal is set to zero
Dynamical reservoirs are generally formed by a fully recurrent neural network with fixed, typically sparse, random weights, and one further layer of external units connected to the internal units to read out the dynamics of the network. The weights of the connections linking the reservoir units to the read-out units are suitably learned so that the temporal activity of the read-out units is a function of the internal activity of the reservoir. The weights of the internal connections of the reservoir are chosen so that the network activity has two features. First, the activity of its units fades to zero when there is no input and to a fixed-point attractor when there is a constant input (see Fig. 1b). This feature guarantees that the history of inputs is maintained in the activity of the network within a time window, because the input does not have indefinitely cumulative effects that would result in a chaotic behaviour of the network. As a consequence, if the interval between two inputs exceeds this temporal window, they will not interfere with each other in the modulation of the network activity. Second, the states of activation of the network units have a high variability during fading. This feature guarantees the richness of the temporal response of the reservoir to the input. As a result, the network can in principle reproduce any nonlinear temporal function with its read-out units. In other words, the more the states of the network are dissimilar from one another (low correlation between states in time), the more complex the temporal functions that the network can learn and reproduce can be.
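A minimal echo state network with the properties just described can be written in a few lines. The sketch below (leaky-integrator units, sparse random recurrent weights rescaled to a chosen spectral radius) is a generic ESN; all parameter values are illustrative assumptions and are not those of the model presented in this paper.

import numpy as np

class EchoStateReservoir:
    """Leaky-integrator echo state network (read-out training not included)."""

    def __init__(self, n_in, n_res=200, spectral_radius=0.9,
                 leak=0.3, density=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.leak = leak
        self.w_in = rng.uniform(-1, 1, (n_res, n_in))
        w = rng.uniform(-1, 1, (n_res, n_res))
        w[rng.random((n_res, n_res)) > density] = 0.0   # sparse connectivity
        # rescale the recurrent matrix so its largest eigenvalue modulus equals
        # spectral_radius; keeping it below 1 is the usual heuristic for the
        # fading ("echo state") property described above
        w *= spectral_radius / max(abs(np.linalg.eigvals(w)))
        self.w = w
        self.x = np.zeros(n_res)

    def step(self, u):
        pre = self.w @ self.x + self.w_in @ u
        self.x = (1 - self.leak) * self.x + self.leak * np.tanh(pre)
        return self.x

# drive the reservoir with a sinusoidal input, as in Fig. 1b
res = EchoStateReservoir(n_in=1)
states = np.array([res.step(np.array([np.sin(0.1 * t)])) for t in range(500)])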
Thus in reservoir networks the internal encoding of input signals and the decoding of the internal activity to reproduce the output responses are two independent processes. Indeed, a reservoir is endowed with its own dynamics and encoding emerges spontaneously without any supervised teaching. This kind of network does not learn the features of the input signals but only converts them into a high-dimensional vector of nonlinearly changing neuronal activations, resembling the kernel methods used in support-vector networks (Cortes and Vapnik 1995), with the difference that the activations of the reservoir can be viewed as temporal kernels. Learning of a new decoding only updates the weights of the connections projecting from the units of the reservoir to the external read-out units (see Fig. 1a). After learning, a read-out unit transforms the temporal activity of the reservoir corresponding to an input sequence into a single nonlinear signal. Real cortical activity seems to share these features. Decoding of motor responses by reading the activity of the motor cortex seems to be possible (Hatsopoulos et al. 2004; Golub et al. 2014). On the other hand, analyses of the same cortical activity reveal the presence of temporal dynamics that are not directly linked to motor responses (Churchland et al. 2012).
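Training a read-out then reduces to a linear regression from the collected reservoir states to the desired output trajectory. A common recipe is ridge regression over the state history, as sketched below; the code reuses the states collected with the EchoStateReservoir sketch above, and the phase-shifted target signal is an arbitrary illustrative choice.

import numpy as np

def train_readout(states, targets, ridge=1e-6):
    """Ridge-regression read-out weights so that targets ~= states @ w_out."""
    s = np.asarray(states)                        # shape (T, n_res)
    y = np.asarray(targets).reshape(len(s), -1)   # shape (T, n_out)
    a = s.T @ s + ridge * np.eye(s.shape[1])
    return np.linalg.solve(a, s.T @ y)

# teach the read-out to output a phase-shifted copy of the driving sinusoid
t = np.arange(500)
target = np.sin(0.1 * t + 0.5)
w_out = train_readout(states, target)             # `states` from the sketch above
prediction = states @ w_out                       # decoded temporal function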
As an exception to this architecture, feedback from the external read-out to the internal units can also be present, e.g. to produce rhythmic behaviours without external input. Learning with feedback is not easy in reservoir networks, but various solutions have been found to implement it (Jaeger and Haas 2004; Steil 2004; Sussillo and Abbott 2009). In this case, the internal dynamics of the reservoir is able to acquire information about its output through the learning process. However, this information is mixed with that coming from the input and transformed into the temporal dynamics of the network, so it can only be isolated with complex statistical methods.
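With output feedback, a standard though fragile training recipe is teacher forcing: the desired output is fed back into the reservoir while the states are collected, the read-out is fitted as above, and the loop is then closed. The sketch below is a generic illustration of this idea, not the specific solutions of Jaeger and Haas (2004), Steil (2004) or Sussillo and Abbott (2009); all sizes and gains are assumptions.

import numpy as np

rng = np.random.default_rng(1)
n_res, leak, T = 200, 0.3, 1000
w = rng.uniform(-1, 1, (n_res, n_res))
w[rng.random((n_res, n_res)) > 0.1] = 0.0
w *= 0.9 / max(abs(np.linalg.eigvals(w)))          # spectral radius 0.9
w_fb = rng.uniform(-1, 1, (n_res, 1))              # feedback from read-out to reservoir

target = np.sin(0.05 * np.arange(T)).reshape(-1, 1)

# teacher forcing: feed the desired output back while collecting states
x, states = np.zeros(n_res), []
for t in range(T):
    x = (1 - leak) * x + leak * np.tanh(w @ x + w_fb @ target[t])
    states.append(x.copy())
states = np.array(states)
w_out = np.linalg.solve(states.T @ states + 1e-6 * np.eye(n_res), states.T @ target)

# closed loop: the network now generates the rhythm autonomously
x, outputs = states[-1], []
for t in range(200):
    y = x @ w_out                                  # read-out
    x = (1 - leak) * x + leak * np.tanh(w @ x + w_fb @ y)
    outputs.append(float(y[0]))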
The basal ganglia
This section describes a way in which the basal ganglia might implement selection through disinhibitory competition. The role of the basal ganglia in action selection has been the subject of intense investigation, for example by Mink (1996), Redgrave et al. (1999) and Gurney et al. (2001) (see also Humphries and Gurney 2002; Gurney et al. 2004; Humphries et al. 2006; Bogacz and Gurney 2007). Gurney et al. (2001) proposed one of the most accredited computational hypotheses about the mechanisms behind basal ganglia selection. This hypothesis, together with the reservoir computing idea, is one of the key ingredients of our explanation of the interaction between the basal ganglia and cortex in the control of motor action. We now briefly describe the anatomical organization of the basal ganglia, and then our implementation of the selection hypothesis of Gurney et al. (2001) mentioned above.
Schema of the intrinsic organization of the basal ganglia and their interaction with thalamic and cortical layers. Arrows reaching the borders of the boxes indicate that each unit of a sending layer reaches the corresponding unit of the target layer. In particular, each STN unit reaches all units of GPe and GPi. Acronyms: Inp input signal; Da dopamine efflux; StrD1 D1R-expressing striatal populations; StrD2 D2R-expressing striatal populations; STN subthalamic nucleus; GPi internal globus pallidus; GPe external globus pallidus; Tha thalamus; Ctx cortex
Intrinsic organization of the basal ganglia
Figure 2 shows the intrinsic organization of the basal ganglia and their interaction with the thalamo-cortical loops. The two main input stations of the basal ganglia are the striatum (Str) and the subthalamic nucleus (STN). Both these nuclei receive most of their afferent projections from the cortex and send efferent projections to the GABAergic output nuclei of the basal ganglia, the internal globus pallidus (GPi) and the substantia nigra pars reticulata (SNpr). Str efferent projections to these regions, originating from the medium spiny neurons, form the direct pathway. These projections are GABAergic and reach subregions of the GPi/SNpr complex through parallel channels. STN efferent projections form the hyper-direct pathway. They are glutamatergic and spread diffusely over the GPi/SNpr output layers and the external globus pallidus (GPe). Projections from the Str to the GPe, and from there to the GPi/SNpr complex, form the indirect pathway. They are GABAergic and organized in substantially segregated parallel channels, similar to those of the direct pathway. The Str spiny neurons whose projections form the direct and indirect pathways are distinguishable mainly in two respects. First, they tend to express two different families of dopamine receptors in different proportions: neurons in the direct pathway tend to express more D1-like low-affinity dopamine receptors, while those in the indirect pathway tend to express more D2-like high-affinity dopamine receptors. Second, the direct pathway has a feed-forward organization, whereas the indirect pathway is a multi-synaptic pathway involving a negative feedback circuit. Indeed, the GPe is reached by STN projections similar to those reaching the GPi/SNpr complex, with the difference that the GPe also sends inhibitory projections back to the STN (see Fig. 2). The organization in parallel segregated channels within the basal ganglia extends to the pathway going from the GPi/SNpr complex to the thalamus and then to the cortex, which projects back to the Str and the STN. Along this pathway local populations maintain a relative segregation, so that parallel CSNTC loops can be identified (Alexander et al. 1986; Parent and Hazrati 1995; Middleton and Strick 2000; Romanelli et al. 2005). Importantly, while there is wide evidence that striatal regions also receive information from cortical territories other than those within the same loop, there is little if any evidence of such "diagonal" (out-of-loop) afferent projections to the STN (Romanelli et al. 2005; Mathai and Smith 2011).
Selection within the basal ganglia
Gurney et al. (2001) show how the interaction between the direct and the hyper-direct projections leads to the emergence of centre-off fields of pallidal activation. In particular, a GPi–SNpr neural population reached by highly activated Str afferents is overall inhibited, while its neighbouring populations are excited by the STN glutamatergic projections. As a result, activations of different Str regions compete for the inhibition of the corresponding regions in the output layers through STN lateral excitation. Small differences in the activity of two competing Str regions produce larger differences in the inhibition of the tonic activity of the corresponding SNpr and GPi layers. This leads to the selective disinhibition of distinct thalamo-cortical loops. Moreover, cortical feedback projections to the Str and STN make the internal competition between channels a cumulative dynamical process, similar to those described in neural-field modelling (Si 1977; Erlhagen and Schoner 2002), with the difference that competition within these CSNTC channels is based on disinhibition rather than excitation (Bogacz and Gurney 2007).
Selection locking and unlocking
While the direct pathway and its interaction with the hyper-direct pathway implement the cumulative disinhibition described above, the indirect pathway has been proposed to control the activity passing through the direct/hyper-direct pathways (Gurney et al. 2001). In particular, in this view a lack of tonic dopamine enhances the activity of the striatal spiny neurons projecting to the indirect pathway. This condition reduces the efficiency and persistence of basal ganglia selection by reducing the signal-to-noise ratio, so that the system can be released from a previous selection (see Sect. 5.1 for a detailed description of the process).
Integrating cortex and basal ganglia: the key computational hypothesis
In this section, we discuss how the basal ganglia disinhibitory mechanism can act on a single cortical reservoir to select a specific dynamics within it. Cortical networks can be compared to dynamical reservoirs due to both their uniform microstructure and the temporal dynamics of their neural activity (Sect. 2.1). Our hypothesis is that such cortical reservoirs can be internally modulated by a selection mechanism similar to the one described in Sect. 2.2. Selection by the basal ganglia defines which kind of dynamical system a given cortical module instantiates: by changing the response mode of a part of the module, it makes the module compute a specific function of its input signals.
This hypothesis requires two sets of assumptions. At the functional level, we assume that direct cortico-cortical projections and selection by the basal ganglia guide cortical activity in two distinct ways. On the one side, direct cortico-cortical projections regulate cortical dynamics by transmitting fine-grained information that defines the step-by-step time course of the response of the target cortical modules. On the other side, selection by the basal ganglia modulates cortical activity at coarser timescales. This feature emerges from two aspects of the selection mechanism. First, the basal ganglia integrate over time the differences between different sources of information, filtering out fluctuations that occur at fine timescales. Furthermore, once selection locks in (see Sect. 2.2) it becomes less sensitive to further changes in the input signals reaching the striatum. As a result, during selection a part of the thalamo-cortical loop is persistently released from inhibition, and the activity of the corresponding cortical sub-population is enhanced.
Organization of the interactions between a cortical module and the basal ganglia. Each channel within the basal ganglia projects to a sub-population of the thalamo-cortical loop. The cortical part of the sub-population projects back to the striatal input of the channel. Projections from other cortices to the striatum bias the differential activation of the channels. Direct projections to the cortical layer fuel its internal dynamics. The details of the intrinsic organization of the basal ganglia module are omitted from the figure (see Fig. 2)
At the structural level, we make three assumptions about the architecture of the basal ganglia-cortical system (see Fig. 3). First, different CSNTC channels reach a single cortical module. Second, these channels remain segregated within the module, reaching different subgroups of neurons. Third, all neurons within the cortical module maintain their uniformly sparse internal interconnectivity.
Afferent projections to this system reach two regions, the cortex and the input gates of the basal ganglia. According to the distinction made above, the two sets of projections have two distinct functional roles. While direct projections to the cortex feed the fine-scale dynamics of the reservoir, the projections to the basal ganglia feed the differential accumulation of evidence within the channels so as to bias selection on the basis of large-timescale information.
Simple static control signals (for instance, steady excitatory signals coming from other cortices) would suffice for a reservoir network to be modulated in a way similar to what the basal ganglia do here (Sussillo and Abbott 2009). Why then should the cortex need a mechanism such as the competitive disinhibition implemented by the basal ganglia? The model presented here helps to highlight three possible answers. First, selection in the basal ganglia can be easily switched on and off through neuromodulation. In particular, maintaining striatal dopamine at a high level keeps any selected channel steadily disinhibited, whereas lowering the dopaminergic efflux in the striatum releases the system, thus allowing it to switch to another state (see Sect. 2.2). Parkinsonian patients, in whom the efflux of dopamine to the striatum is impaired, show abnormalities in the voluntary initiation, speed and several other features of motor control (Muslimovic et al. 2007; Abbruzzese et al. 2009; Espay et al. 2011), revealing a main role of striatal dopamine in motor control. The simulation in Sect. 5.1 illustrates how selection in a CSNTC module is switched on/off by changing the dopaminergic efflux to the striatum.
Second, maintaining static signals throughout cortico-cortical pathways is difficult. Indeed, following the computational assumptions we made about the nature of cortical dynamics (see Sect. 2.1), any information passing through a cortico-cortical pathway is temporally filtered, resulting in a complex nonlinear transformation. In the case of motor control, information about perception comes quite directly to the primary motor cortex from the somatosensory cortex. Instead, information about the overall movement to perform, originating from the environment and from internal states, reaches the motor cortex indirectly through the dorsal neural pathway, involving the parietal and premotor cortex, and through the ventral pathway, involving the temporal, prefrontal and premotor cortex (Baldassarre et al. 2013). As a result, any top-down signal about the overall movement to perform depends on the dynamics of other cortical regions and would not be stable enough to support a steady selection of the internal dynamics. The same information, filtered by a mechanism like that of the basal ganglia, allows the production of steady signals that are robust to fine-timescale perturbations. Furthermore, disinhibiting thalamo-cortical loops interferes less with cortical activity than direct excitation does. In particular, the simulations described in Sect. 5 show that while extra excitation of a cortical area tends to saturate its activation, and thus to disrupt the information traversing it, its disinhibition leaves such information intact.
Third, learning task-relevant information at the level of the cortico-striatal synapses is simpler and faster than learning it at the level of cortico-cortical connections. In particular, compared to the cortex, the basal ganglia can more easily perform the dimensionality reduction needed to isolate the coarse-grained categories relevant to decide which movement to perform (the simulations in Sect. 5 will show this). How does this reconcile with the evidence of cortico-cortical plasticity (Buonomano and Merzenich 1998; Barth 2002; Fu and Zuo 2011), which might allow the categories needed to select movements/tasks to be learned? Our idea is that learning at the striatal level occurs relatively fast, and so it can progressively guide the slower learning between cortical modules (Ashby et al. 2007; Shine and Shine 2014; Turner and Desmurget 2010). Following this idea, striatal inputs, once categorized, can steadily bias the selection of the dynamics of the target cortical module. This selection results in a sharper distinction between cortical dynamics, which is easier to detect by learning processes operating at the level of cortico-cortical connections.
Overview of the models
This section describes a neural architecture implementing the hypothesis described in Sect. 2, and a system-level model to study the interactions between multiple instances of such an architecture. The description presented here is sufficient to understand the results, while all computational details of the implementation of these models are presented in Sect. 4. The first model (LOOP_MODEL) is composed of a CSNTC loop between a basal ganglia component and a cortical component (see Sect. 4). From now on, we will call this unit a CSNTC module. The architecture of a CSNTC module is shown in Fig. 3. The basal ganglia component is an implementation of the model of Gurney et al. (2001), consisting of three channels in loop with three different sub-populations of the cortical component (as in Fig. 2). Each of the three sub-populations is also in loop with a unit representing a thalamic population. Dopamine modulates the input to the units of the striatum in the basal ganglia component. Learning involves only the connections to the cortical read-out units. We used LOOP_MODEL to show that this hypothesized neural organization is able to select, based on the striatal dopaminergic efflux, different dynamics given the same sensory contextual information and different information about the task. In particular, LOOP_MODEL is meant to describe the interaction between the primary motor cortex and the dorsolateral basal ganglia in the control of three different motor behaviours. Sections 5.1 and 5.2 show how this architecture can be used to control both cyclic and end-point movements.
A system-level architecture describing the interaction between primary and higher-level basal ganglia–cortical loops. The model is formed by two CSNTC modules, the one on the centre-left representing a high-level motor area and the one in the centre representing a primary motor area module. Sensory input comes from a cortical module representing the somatosensory cortex (on the right). On the left, three examples of trains of higher-level input arrays are shown, abstracting information about the task coming from prefrontal and associative cortical areas. Each example contains three orthogonal binary input arrays defining three different tasks. Input arrays are grouped to form three categories encoding three different tasks in time. Such a categorization is hardwired in the connections to the high-level motor striatum. The connection in red reaching the primary motor striatum from the high-level motor cortex is the only cortico-striatal connection that is kept free to change, based on the learning rule described in Sect. 4.5 (colour figure online)
The second model (SYSTEM_MODEL, Fig. 4) is a system-level architecture explaining the interaction between multiple CSNTC modules. SYSTEM_MODEL is formed by two CSNTC modules and a further cortical module. In SYSTEM_MODEL one of the CSNTC modules (Fig. 4, centre) represents the primary motor loop, and the read-out units of its cortical component directly control movements. The other CSNTC module (Fig. 4, left) represents a higher-level motor loop whose cortical output projects both to the striatum and to the cortex of the previous module. The cortical module at the right of Fig. 4 represents the somatosensory cortex. Its output reaches the striatum and the cortex of the primary motor module similarly to the high-level motor module (the somatosensory cortical module is not in loop with the basal ganglia, as is the case for primary sensory cortices). SYSTEM_MODEL aims to give a computational explanation of how the interaction between CSNTC modules allows for a better cortico-cortical communication of coarse-grained information, e.g. to control the different movements to perform. It also serves as a test of the role of cortico-striatal learning in defining how cortical information biases basal ganglia selection. A computational analysis of the possible ways to implement cortico-striatal learning is beyond the scope of this study. Instead, we introduce in Sect. 4.5 a simple unsupervised learning mechanism supporting category learning within the striatum (in future work, this mechanism might be strengthened with an additional modulation by dopamine to implement reward-based learning). This unsupervised learning mechanism was implemented in the cortico-striatal connections reaching the primary motor striatum from the high-level motor cortex. This simple learning mechanism is sufficient to illustrate our hypothesis on the role of the basal ganglia in filtering information at a coarse-grained spatiotemporal resolution. Section 5.3 describes a simulation of the effect of lesioning these connections on the learning and expression of three motor tasks.
Throughout the paper we tested the model with three different tasks (hence the basal ganglia component has three channels) to simplify the visualization of the results. However, the model is able to scale to a higher number of tasks, as shown in Figure 1 in Online Resource 4.
Computational details
This section illustrates the computational details of the various components of the model and the learning algorithms used to train the cortical read-out and the cortico-striatal connection weights.
The cortical component
We implemented a cortical component as a reservoir network described by the following dynamical system:
$$\begin{aligned} \tau \dot{{\mathbf {u}}} = -{\mathbf {u}} + {\mathbf {W_{ux}}}{\mathbf {x}} + {\mathbf {W_{u}}}{\mathbf {z}} \end{aligned}$$
where \({\mathbf {u}}\in \mathbb {R}^N\) is the vector of activation potentials of the module units, \({\mathbf {x}}\in \mathbb {R}^M\) is the vector of external inputs to the component, \({\mathbf {W_{ux}}}\in \mathbb {R}^{N\times M}\) is the matrix of weights of the connections from the external inputs to the units of the reservoir, and \({\mathbf {W_{u}}}\in \mathbb {R}^{N\times N}\) is the matrix of internal connection weights. \({\mathbf {z}}\in \mathbb {R}^N\) is the output function of the vector \({\mathbf {u}}\) given by:
$$\begin{aligned} {\mathbf {z}} = \left[ \tanh \left( \alpha \left( {\mathbf {u}}-th\right) \right) \right] ^{+} \end{aligned}$$
where \(\alpha \) is the slope and th is the threshold of the function. This output function differs from the simple tanh function used in classical echo state models (Jaeger 2002) as it takes only its positive part. We preferred this transfer function so that the activation of the network units, viewed as the activity of whole neural populations, has a higher biological plausibility (see Heiberg et al. 2013; Nordlie et al. 2010, for an analysis of rate models). Equations 1 and 2 describe the activity of all units in the models.
In all simulations the lateral connection weights \({\mathbf {W_{u}}}\) were generated randomly and normalized following the constraints of leaky echo state reservoirs (Jaeger et al. 2007), with a further transformation to improve the richness of the dynamics (see Appendix 1). In the case of online learning (see Sect. 4.4) read-out units are defined as in Eqs. 1 and 2, with the difference that the lateral connections between them are not present. In the case of offline learning (see Sect. 4.4) read-out units are just linear functions of the inputs (as usually done in reservoir networks):
$$\begin{aligned} {\mathbf {o}} = {\mathbf {W}}_{oz}{\mathbf {z}} \end{aligned}$$
where \({\mathbf {o}} \in \mathbb {R}^O\) and \({\mathbf {W}}_{oz} \in \mathbb {R}^{O\times N}\).
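As an illustration of Eqs. 1-3, a minimal NumPy sketch might look as follows (the original implementation was in C++ with the Armadillo library, and the sizes, time constant, slope, threshold and input used here are placeholder values rather than those of the actual simulations):

import numpy as np

rng = np.random.default_rng(1)
N, M, O = 200, 10, 3                                  # reservoir, input and read-out sizes (placeholders)
tau, dt = 10.0, 1.0                                   # time constant and Euler step (placeholders)
alpha, th = 2.0, 0.1                                  # slope and threshold of Eq. 2 (placeholders)

W_ux = 0.1 * rng.standard_normal((N, M))              # input weights (Eq. 1)
W_u = rng.standard_normal((N, N)) / np.sqrt(N)        # internal weights (Eq. 1)
W_u *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_u)))   # echo-state-style rescaling of the spectral radius
W_oz = np.zeros((O, N))                               # read-out weights (learned, Sect. 4.4)

u = np.zeros(N)
states = []
for t in range(500):
    x = np.sin(2 * np.pi * t / 100.0) * np.ones(M)    # placeholder sinusoidal input
    z = np.maximum(np.tanh(alpha * (u - th)), 0.0)    # Eq. 2: positive part of a tanh
    u = u + (dt / tau) * (-u + W_ux @ x + W_u @ z)    # Eq. 1: leaky integration (Euler step)
    o = W_oz @ z                                      # Eq. 3: linear read-out
    states.append(z)                                  # stored for batch learning (Sect. 4.4)

Here the internal weights are simply rescaled to a spectral radius below one; the actual models apply the further transformation mentioned above (Appendix 1).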
Feedback from read-out units has been extensively studied, among others, by Jaeger and Haas (2004), Steil (2004), Sussillo and Abbott (2009), and Hoerzer et al. (2014). We chose not to explore this feature so as to maintain the simplicity of the models, since the focus of this paper is on the system-level interaction between the basal ganglia and cortex.
The basal ganglia component
The basal ganglia component was an implementation of the model of Gurney et al. (2001) with three channels (see Sect. 3). Its units were modelled through Eqs. 1 and 2. The microarchitecture of the module can be derived from Fig. 2. All layers were formed by three units. Each connection was a feedforward link between one unit and the topologically corresponding unit in the following layer, thus reproducing in an abstract fashion the structure of the partially segregated channels of the basal ganglia (one-to-one connections). The only exception to this was the STN, as each of its units was connected with all GPi and GPe units (all-to-all connections).
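As a sketch of this connectivity scheme (unit connection strengths are assumed here purely for illustration, not taken from the tuned weights of the Gurney et al. (2001) implementation), the one-to-one links between three-unit layers can be written as identity matrices and the diffuse STN projections as all-ones matrices:

import numpy as np

n_ch = 3                                # three channels, as in the model
one_to_one = np.eye(n_ch)               # e.g. StrD1 -> GPi, StrD2 -> GPe, GPe -> STN
all_to_all = np.ones((n_ch, n_ch))      # STN -> GPi and STN -> GPe (diffuse projections)

str_d1 = np.array([0.8, 0.2, 0.1])      # placeholder striatal (D1) activities
stn = np.array([0.4, 0.4, 0.4])         # placeholder STN activities

# Net input to GPi: focused GABAergic inhibition from the direct pathway
# plus diffuse glutamatergic excitation from the STN.
gpi_input = -one_to_one @ str_d1 + all_to_all @ stn

With this arrangement the most active striatal channel lowers the activity of its own GPi unit while the shared STN excitation keeps the other GPi units high, which is the centre-off pattern described in Sect. 2.2.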
The modulation of dopaminergic efflux on the activity of striatal D1-expressing units was implemented as a multiplicative excitatory effect:
$$\begin{aligned} \tau \dot{{\mathbf {s}}}_{D1} = - {\mathbf {s}}_{D1} + (bl_{D1} + da_{D1})({\mathbf {W}}_{sc}{\mathbf {c}} + {\mathbf {W}}_{sx}{\mathbf {x}}) \end{aligned}$$
where \({\mathbf {s}}_{D1}\) is the vector of D1R-expressing striatal units, \(bl_{D1}\) defines the responsiveness to the input not due to dopamine, \(da_{D1}\) defines the responsiveness to the input depending on dopamine, \({\mathbf {c}}\) is the vector of inputs from the cortical units, \({\mathbf {W}}_{sc}\) is the matrix of weights of the connections between \({\mathbf {c}}\) and \({\mathbf {s}}\), \({\mathbf {x}}\) is the vector of activities reaching each channel from out-of-loop cortices, \({\mathbf {W}}_{sx}\) is the matrix of weights of the connections between \({\mathbf {x}}\) and \({\mathbf {s}}\).
The modulation of dopaminergic efflux on the activity of striatal D2-expressing units was implemented as a multiplicative inhibitory effect:
$$\begin{aligned} \tau \dot{{\mathbf {s}}}_{D2} = - {\mathbf {s}}_{D2} + \frac{1}{bl_{D2} + da_{D2}}({\mathbf {W}}_{sc}{\mathbf {c}} + {\mathbf {W}}_{sx}{\mathbf {x}}) \end{aligned}$$
where \({\mathbf {s}}_{D2}\) is the vector of D2R-expressing striatal units, \(bl_{D2}\) defines the scale of responsiveness to the input not due to dopamine and \(da_{D2}\) defines the scale of responsiveness to the input depending on dopamine (see also Fiore et al. 2014, for a similar implementation).
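A NumPy sketch of Eqs. 4 and 5, Euler-integrated as above, might be the following (the baseline and dopamine values are placeholders, not the parameters used in the simulations):

import numpy as np

tau, dt = 10.0, 1.0                     # time constant and Euler step (placeholders)
bl_d1, bl_d2 = 0.1, 0.1                 # baseline responsiveness (placeholders)
da = 0.8                                # current dopaminergic efflux (placeholder)

def striatal_step(s_d1, s_d2, c, x, W_sc, W_sx):
    drive = W_sc @ c + W_sx @ x                                # in-loop cortical + out-of-loop input
    s_d1 = s_d1 + (dt / tau) * (-s_d1 + (bl_d1 + da) * drive)  # Eq. 4: multiplicative effect on D1 units
    s_d2 = s_d2 + (dt / tau) * (-s_d2 + drive / (bl_d2 + da))  # Eq. 5: divisive effect on D2 units
    return s_d1, s_d2

With this form, raising da increases the drive of the D1 units and decreases that of the D2 units, which is the dopamine dependence exploited for the locking and unlocking of selection in Sect. 5.1.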
The CSNTC module
The CSNTC module was implemented as a composition of a cortical module (Sect. 4.1) and a basal ganglia module (Sect. 4.2), as depicted in Fig. 2. The units of the cortical module project to the Str and the STN layers of the basal ganglia module. Direct input reaches the cortical module as well as the basal ganglia module.
Learning the read-out weights
For the update of the connection weights to the read-out units, we used either batch regression or online learning methods. Regression was used to find the weights when computational speed was needed. Online learning was used to show that the target tasks could also be acquired in a biologically plausible way.
The batch method
For batch regression, we used Tikhonov regularization (Vogel 2002), as usually done in echo state network optimization (Lukoševičius and Jaeger 2009). In particular, we considered:
The training dataset \({\mathbf {Y}}= \left[ {\mathbf {Y}}_1 \dots {\mathbf {Y}}_i \dots {\mathbf {Y}}_Q \right] ^T\) where \({\mathbf {Y}}_i=\left[ {\mathbf {y}}_1 \dots {\mathbf {y}}_t \dots {\mathbf {y}}_S \right] \) is the array of data for a single desired trajectory and \({\mathbf {y}}_t=\left[ y_1 \dots y_O \right] ^T\) is the point at time t of the desired trajectory \({\mathbf {Y}}_i\).
The input dataset \({\mathbf {X}} = \left[ {\mathbf {X}}_{1} \dots {\mathbf {X}}_{i} \dots {\mathbf {X}}_{Q} \right] ^T\) where \({\mathbf {X}}_i =\left[ {\mathbf {z}}_1 \dots {\mathbf {z}}_t \dots {\mathbf {z}}_S\right] \) is the array of input data related to a single desired trajectory and \({\mathbf {z}}_t \in \mathbb {R}^N\) is the vector of input at time t.
On this basis, the learning rule is as follows:
$$\begin{aligned} {\mathbf {W}}_{oz} = \left( {\mathbf {X}}^T{\mathbf {X}} + {\lambda }^2{\mathbf {I}}\right) ^{-1}{\mathbf {X}}^T{\mathbf {Y}} \end{aligned}$$
where \({\mathbf {W}}_{oz}\) is the array of read-out weights (see Eq. 3), \({\mathbf {I}}\) is the identity matrix and \({\lambda }\) is the regularization parameter.
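In NumPy, the batch solution of Eq. 6 can be sketched as follows (the value of \(\lambda \) is a placeholder; X and Y are the reservoir-state and target datasets defined above, arranged with one row per time step):

import numpy as np

def ridge_readout(X, Y, lam=1e-3):
    # Eq. 6: W = (X^T X + lambda^2 I)^-1 X^T Y
    # X: (time steps, N) reservoir activations; Y: (time steps, O) desired outputs.
    N = X.shape[1]
    # The result has shape (N, O), i.e. the transpose of W_oz as used in Eq. 3.
    return np.linalg.solve(X.T @ X + lam ** 2 * np.eye(N), X.T @ Y)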
The online method
We used the "backpropagation–decorrelation" (BPDC) algorithm described by Steil (2004) (see also Steil 2007) as the online learning method. We chose it because it has a low computational complexity (O(n)). BPDC has been studied in reservoirs where the read-out units belong to the reservoir and project feedback connections to the other neurons of the network. In BPDC, a decorrelation factor and an error backpropagation factor contribute to the modification of the weights reaching the read-out units. Since we limit our model to feedforward read-out units we can use a simplified version of the BPDC rule:
$$\begin{aligned} \Delta {\mathbf {W}}_{{oz}\ t+1} = \frac{\eta }{\Delta t} {\mathbf {g}}_{t+1} {{\mathbf {d}}^T_t} \end{aligned}$$
where \(\eta \) is the learning rate, \({\mathbf {d}}_t\) is the decorrelation factor:
$$\begin{aligned} {\mathbf {d}}_t = \frac{{\mathbf {z}}_t }{ {\mathbf {z}}^T_t {\mathbf {z}}_t +{\mathbf {x}}^T_t {\mathbf {x}}_t + \beta } \end{aligned}$$
where \(\beta \) is a regularization factor, and the backpropagation factor \({\mathbf {g}}_{t+1}\) simplified to the finite difference of the errors is:
$$\begin{aligned} {\mathbf {g}}_{t+1} = \left( 1 - \Delta t\right) {\mathbf {e}}_{t} - {\mathbf {e}}_{t+1} \end{aligned}$$
where \({\mathbf {e}}_{t} = {\mathbf {o}}_t - {\mathbf {y}}_t\) is the error between the current activations of the read-out units \({\mathbf {o}}_t\) and the vector \({\mathbf {y}}_t\) of the desired activations. In the original rule, the finite difference of the errors \({\mathbf {g}}_{t+1}\) is weighted by a backpropagation term involving the derivatives of the read-out activations. This term depends on the autoconnections of the read-out units (Steil 2004), and reduces to zero in the absence of such autoconnections, as in our model.
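One step of this simplified rule could be sketched in NumPy as follows (the learning rate, regularization factor and time step are placeholder values):

import numpy as np

def bpdc_step(W_oz, z, x, e_prev, e_curr, eta=0.01, beta=0.001, dt=1.0):
    # Simplified BPDC update for feedforward read-out units (Eqs. 7-9).
    d = z / (z @ z + x @ x + beta)               # Eq. 8: decorrelation factor
    g = (1.0 - dt) * e_prev - e_curr             # Eq. 9: finite difference of the errors
    return W_oz + (eta / dt) * np.outer(g, d)    # Eq. 7: weight update, shape (O, N)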
Learning the cortico-striatal weights
In the simulations implementing SYSTEM_MODEL, learning of the cortico-striatal connections was also simulated (see Sects. 3 and 5). In these cases, we used the unsupervised Oja learning rule (Oja 1982) for the update:
$$\begin{aligned} \Delta {\mathbf {W_{sx}}}_{t+1} = {\eta }_{sx}\left( {\mathbf {s}}_t {\mathbf {c}}^T_t - \left( \left( {\mathbf {s}}_t\odot {\mathbf {s}}_t \right) {\mathbf {1}}^T\right) \odot {\mathbf {W_{sx}}}_t \right) \end{aligned}$$
where \(\eta _{sx}\) is the learning rate, \(\mathbf {W}_{\mathbf {sx}}\) is the matrix of weights from a cortical layer outside the CSNTC module to the striatal layer within the CSNTC module, \({\mathbf {s}}\) is the vector of activities of the target striatal units (as in Eq. 2) filtered by a k-winner-takes-all (kWTA) function (here \(k = 1\)), \({\mathbf {c}}\) is the vector of activities of the presynaptic cortical units filtered by a k-winner-takes-all (kWTA) function (here \(k = 30\)), \({\mathbf {1}}\in \mathbb {R}^{N_c}\) is a vector of ones with the same length as \({\mathbf {c}}\), and \(\odot \) is the element-wise multiplication operator. During the phase of cortico-striatal learning, Gaussian noise \(\mathcal {N}\left( \mu ,\sigma \right) \) was also added to the activation of the striatal units to produce a random perturbation of the selection.
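An illustrative NumPy version of Eq. 10 might be the following (the kWTA values k = 1 and k = 30 are those given above, while the learning rate and the kWTA implementation itself are simplified placeholders):

import numpy as np

def kwta(v, k):
    # Keep the k largest entries of v, set the others to zero.
    out = np.zeros_like(v)
    idx = np.argsort(v)[-k:]
    out[idx] = v[idx]
    return out

def oja_step(W_sx, s, c, eta_sx=0.001):
    # Eq. 10: Oja update of the cortico-striatal weights.
    s_f = kwta(s, 1)                        # striatal activities, k = 1
    c_f = kwta(c, 30)                       # presynaptic cortical activities, k = 30
    hebb = np.outer(s_f, c_f)               # Hebbian term s c^T
    decay = (s_f * s_f)[:, None] * W_sx     # normalizing term ((s ⊙ s) 1^T) ⊙ W_sx
    return W_sx + eta_sx * (hebb - decay)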
All simulations were implemented in C++ with the use of the Armadillo open-source C++ library for linear algebra (see Sanderson 2010). Simulations were run on a Linux Debian Wheezy operating system hosted on an Intel i7 PC. The Matplotlib Python library (Hunter 2007) was used to produce plots and animations in all simulations with the three-DoF two-dimensional arm. The 20-DoF hand in the second set of simulations was implemented with the open-source CENSLIB library for 3D scientific simulations (Mannella 2013), based on the Bullet physics engine (Coumans 2013). Data in the third set of simulations were analysed using the R statistics and graphics program (R Development Core Team 2008).
This section illustrates three sets of simulations of the models described in Sect. 3. The first set of simulations, using LOOP_MODEL (Sect. 3), showed that the hypothesized neural organization is able to select, based on the striatal dopaminergic efflux, different dynamics and hence different rhythmic movements, given the same sensory contextual information to the cortex and different information about the task to the basal ganglia. The second set of simulations involving LOOP_MODEL showed that the same model could also learn and produce fixed-point movements. Finally, a third set of simulations involving SYSTEM_MODEL (Sect. 3) showed the differential roles of the basal ganglia and the cortex in motor control.
Simulating motor control with a single CSNTC module
The idea described in Sect. 2.3 was first tested by implementing LOOP_MODEL to control the motor behaviour of a simulated arm. In particular, the aim of this simulation was to show that LOOP_MODEL can select different dynamics given the same sensory contextual information and different information about the task.
The simulation also showed how dopamine can play a key role in the on/off switching of the basal ganglia selection that leads to the learning of the target task. For simplicity we chose a two-dimensional simulated environment and a three-DoF articulated kinematic arm. Each of the three arm joints was controlled by a distinct read-out unit of the model. The task consisted of reproducing three different periodic behaviours that could be visually interpreted as writing a square, a sideways "8" shape and a moon-like shape (see Fig. 5). On the controller side, this corresponded to learning and reproducing three different sequences of read-out activities based on the selection of one of the three different basal ganglia channels (Fig. 3).
The simulation was subdivided into a learning phase and a test phase. Each phase was composed of several sessions. During a session each of the three behaviours was recalled once in random order, giving rise to three "trials." During each trial a binary signal was sent to one of the striatal channels to bias selection. This binary signal represented information received by the basal ganglia component of the module from cortical or thalamic regions outside the module. Bottom-up contextual information, formed by a sinusoidal wave, was sent directly to the cortical component of the module. This sinusoidal wave was the same throughout all trials in the simulation. It represented information coming to the cortical module directly from other cortical regions. Within each trial, the dopaminergic efflux was switched on after a short interval from the trial onset and was switched off before its end. We also defined a time window, which we called "task window," internal to the dopaminergic efflux interval: learning took place within these task windows. This ensured that the cortical activity only depended on direct cortical input and basal ganglia disinhibition, and not on perturbations due to the trial onset. Importantly, there was no reset between sessions, trials, or anywhere else throughout the whole simulation, thus testing the capacity of the system dynamics to autonomously handle such transitions. In the initial training phase, the read-out weights were updated via an online learning or batch learning process, in distinct simulations (see Sect. 4.4 for details). The duration of the training phase depended on the kind of learning that was implemented. Online learning (see Sect. 7) consisted of 1000 sessions in which the read-out weights were updated in order to fit the desired trajectory. Batch learning required one session to store the array of cortical activations. Here, for simplicity, we only describe the results obtained using the batch learning process (regression). The test phase was composed of three sessions. The first two sessions served to guarantee that the behaviour was stable after learning. The error (normalized root-mean-square error, NRMSE) was measured over the three task windows of the last test session, one for each of the three trials involving the three movements.
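For reference, a possible NumPy implementation of the error measure is sketched below; the normalization by the range of the target signal is our assumption made for illustration, as the precise normalizer is not restated here:

import numpy as np

def nrmse(o, y):
    # o, y: arrays of shape (time steps, read-out units) over one task window.
    # Normalizing by the range of the target is an assumption of this sketch.
    rmse = np.sqrt(np.mean((o - y) ** 2))
    return rmse / (np.max(y) - np.min(y))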
Schematic description of the two-dimensional kinematic arm used in the simulations. The three shapes on the top are the target trajectories to be learned. A square, a sideways figure eight and a moon-like shape can be recognized from the top-centre to the top-right of the figure
Simulations of the single CSNTC module architecture (LOOP_MODEL). Course of basal ganglia activity in a CSNTC module with three channels in the transition between the first and the second test trial. The top row shows the input signals reaching the three channels from other cortices (outside the CSNTC module). The input signal to the green channel is initially higher than the others. In the middle of the course of activity the input signal to the red channel becomes the highest. 0 Da activity is low. The network is in a low-energy state. Changing the input signals does not affect basal ganglia activity. 1 As soon as Da activity becomes high, activity in StrD1 grows while the corresponding StrD2 activity settles at low values. 2 This change produces inhibition of the highly activated channel in the GPi layer. 3, 4 The network reaches a new equilibrium where activity in the highly activated channel is in an up state throughout layers StrD1, STN, Tha, and Ctx. This equilibrium persists even when the input signal goes off, and only a lowering of Da activity interrupts it. 5 Activations in StrD1 revert to a down state, while those of StrD2 become lower, with temporary peaks. 6 Differences between channels fade back to low values in the GPi. 7 StrD1, STN, Tha, and Ctx revert to down-state activity. Acronyms Inp input signal; Da dopamine efflux; StrD1 D1R-expressing striatal populations; StrD2 D2R-expressing striatal populations; STN subthalamic nucleus; GPi internal globus pallidus; GPe external globus pallidus; Tha thalamus; Ctx cortex (colour figure online)
Simulations of the single CSNTC module architecture (LOOP_MODEL). Cortical activity during three trials of a test session. a Raster plot of the activity of the units in the cortical component. The first half of the rows, on the top, shows the activity of the units connected in loop with the three thalamic channels. The graph clearly shows the switching from a down state to an up state of each subgroup of cortical units when the related thalamic loop is disinhibited. The last 20 % of the rows, on the bottom, show the activation of the set of units reached by the cortico-cortical input (see e), whereas the remaining units are not reached by any input. b Activation of the three read-out units during the testing time window. The bold black lines highlight the target output that had to be learned. Their duration denotes the learning time window. c Striatal dopaminergic efflux. Dopamine is set at a high level during each trial and at a low level between trials. d Cortical input to the three channels of the striatum. Gaussian noise is added to each signal. e Sinusoidal input reaching a set of units of the cortical module
Figure 6 shows basal ganglia activity in the test phase, focusing on the transition between the end of a trial and the beginning of the following one. This transition can be described in relation to the dopaminergic concentration in the striatum:
High dopamine: If the concentration of dopamine at the striatal synapses moves from a low level to a high level, the activity of all D2R-expressing Str populations stabilizes at low values, while the activity of the selected D1R-expressing Str population starts growing (Fig. 6, point 1). This change produces a selective inhibition of the highly activated channel in the GPi layer, while a similar selective inhibition is removed in the GPe layer (Fig. 6, point 2). The overall increase in GPe activity produces a temporary deactivation of the STN layer. As a consequence, the overall activity of the GPi is lowered, allowing the disinhibition of a thalamo-cortical loop.
Lock-in: As long as disinhibition of the thalamo-cortical loop persists, the increased cortical activation excites the Str and the STN (see Fig. 6, point 3). Str, STN and cortical neurons belonging to the selected channel switch to an up state of activation, in a feedback loop reaction, and selection becomes locked-in (see Fig. 6, point 4).
Low dopamine: If the concentration of dopamine at the striatal synapses moves from a high level to a low level, the D2R-expressing population is free to react to inputs and to inhibit the GPe (see Fig. 6, point 5). This activity breaks the equilibrium within the GPe-STN loop (see Fig. 6, point 6).
Unlock: The level of activity of SNpr-GPi neurons can no longer be reliably maintained below threshold, the thalamus becomes inhibited, and cortical activity turns back to a down state, thus unlocking the network (see Fig. 6, point 7).
Within the model, all these dynamical events require background activity in the cortical layer in order to happen. Without it, there is no thalamic activity, and thus the recurrent activity within the loops is null.
Simulations of the single CSNTC module architecture (LOOP_MODEL) showing the capacity of the model to generalize over scaling and translation. Each column of graphs shows the behaviour of the controlled 2D arm when one of the three basal ganglia channels is selected. Bold light grey curves denote the target trajectories. Bold dark grey curves denote the trajectories expected during the generalization tests. The thinner curves show the trajectories actually performed in the three target tests and in the generalization tests. The top row of graphs a shows the case in which the same trajectory has been learned at three different spatial positions. The bottom row b shows the case in which the same trajectory has been learned at three different scales
Figure 7 shows cortical activity in the test phase of a typical simulation. During each trial, the following events take place:
Direct inputs to the cortex trigger the reservoir activity. Cortical activity stays at low levels if the selection process is not locked-in due to a lack of dopamine in the striatal layer (see Fig. 7a, point 1).
When dopamine efflux increases, inputs to the basal ganglia from other cortices bias the competition so that one of the channels is disinhibited (see Fig. 7a, points 2, 3).
Activity of the cortical neurons in loop with the disinhibited thalamic region is amplified (see Fig. 7a, point 3).
The presence of a highly activated neural population within the reservoir when a channel is locked-in has consequences on the whole cortical activity. As a result, the cortical dynamics during the three task windows are different from each other, even though the sinusoidal signal activating the cortex is the same. Thus, when selection is steadily locked-in, the behaviour of the network is a well-determined temporal function of its inputs. Consequently, the weights to a read-out unit can be modified so that its activation follows a desired behaviour. Figure 7b shows the activity of the three read-out units in a test done after such learning. It can be seen that the same read-out unit is capable of decoding the three dynamics of the cortical network into three distinct temporal patterns of activity. A video of the test phase of this simulation is given in Online Resource 1. Figure 1 in Online Resource 4 shows the behaviour of the model in the case in which it learns and reproduces four different motor trajectories instead of three, showing that the model can learn a larger number of patterns.
Simulations of the single CSNTC module architecture (LOOP_MODEL) showing the capacity of the model to learn and perform discrete movements. The three graphs show the trajectories of the arm while reaching each of the three target postures (white and red). The top-left of each graph shows a plot of the time course of the angles of the three arm joints (colour figure online)
We also performed some tests to show how LOOP_MODEL is able to generalize learned motor trajectories over different features of the movement, for example scale or translation. In these tests, during the initial phase each target trajectory was learned at three different positions (see Fig. 8a, light grey curves) or at three different scales (see Fig. 8b, light grey curves). A further input signal was added to the sinusoidal signal going to the reservoir component of the model. This additional input was a constant signal whose amplitude varied based on the amount of translation or scaling of the trajectory. In the following test phase, a generalization test was added to the tests of the three learned trajectories. In this generalization test, the amplitude of the constant input signal did not correspond to any of the three amplitudes experienced during the learning phase but was rather a value between two of them. The results of these tests show that the model performs a motor trajectory that is translated or scaled in proportion to the level of the constant signal, along a continuum that generalizes over the three learning samples (see Fig. 8a, b, dark grey curves). In this simulation, we used simpler shapes than before as target trajectories. This was done to clearly decouple the scaling factor from the translation factor. Figure 2 in Online Resource 4 shows the same simulation with the previously described, more complex trajectories.
Simulating end-point motor control
A further simulation was implemented to show how LOOP_MODEL can learn not only rhythmic (limit-cycle attractor) movements but also discrete (fixed-point attractor) movements. All settings were the same as in the first simulation, the only difference being the use of three fixed-point target trajectories instead of the ones described in Fig. 5. Figure 9 shows the results of a typical test session. The three plots represent the movements of the arm to reach three different final targets. On the top-left of each plot a graph shows the activations of the read-out units controlling the three joints of the arm. It can be seen that, after learning, the model is able to produce the desired posture.
To test how the model scaled to more realistic scenarios of motor control, we further tested it as a controller of a 20-DoF dynamic hand in a 3D physics simulator. The task was similar to the ones just described and consisted of moving the hand so as to reach one of three different postures based on the task information. The model could easily learn all three desired postures and reproduce them based on the related input. Learning of the read-out units was implemented via the online learning rule described in Sect. 4.4. A video of the simulation is given in Online Resource 2.
Simulating the interaction between high-level and primary motor modules
The third set of simulations implemented SYSTEM_MODEL described in Sect. 2.3. The aim of this set of simulations was to illustrate how the selection implemented by the basal ganglia maintains the task information throughout cortico-cortical pathways. Furthermore, it illustrates how a simple unsupervised cortico-striatal learning process is sufficient to allow the reduction of the dimensionality of the cortical input to the striatum and to extract information about the task (see Sect. 2.3). This architecture was tested, like the previous one, as a controller of a three-DoF kinematic arm acting in a two-dimensional simulated environment. Similarly to the previous case, each of the three joints of the arm was controlled by a distinct read-out unit of the model. The task was also the same. Two types of information reached the controller. These two sources were intended to reproduce the difference between low-level sensory information and high-level task information. A first input carried the information about the trial task, that is, about the trajectory to perform. This information represented the modulation by the prefrontal cortices, here abstracted as a binary vector signal (depicted on the left of Fig. 4). In particular, nine channels subdivided into three groups conveyed the signal about which of the three motor actions had to be carried out. Each channel in a group reached the same unit of the striatum of the high-level motor CSNTC module. Each channel also reached the cortical part of the same CSNTC module in a distributed way, with randomly chosen weights. The reason for this abstraction was that we were interested, as in the previous set of simulations, in reproducing the effect of task-related high-level coarse-grained information on the selection of the cortical dynamics. A second input represented the information arriving to the somatosensory cortex from sensors. We abstracted this information as a sinusoidal signal (see the bottom-right of Fig. 4). This sinusoidal signal reached the somatosensory module in a distributed way, with randomly chosen connection weights. The reason for this abstraction was that, as above, we were interested in reproducing the effect of sensory-related low-level fine-grained information on the maintenance of the cortical dynamics.
Learning happened at two levels. One learning process involved the connections going from the high-level motor cortical module to the striatum of the primary motor CSNTC module. A second learning process involved the connections going from the primary motor cortical module to its read-out units (top-centre of the figure). Learning of the connections between the external input (from the prefrontal cortex) and the high-level motor striatum was instead abstracted by implementing hardwired connections, as in this case unsupervised learning is not sufficient. Indeed, information about the desired categories to be acquired is not contained in the prefrontal information, and further motivational information would be needed. Thus, in this case a reward-based learning process would have been required, the study of which was out of the scope of this work.
Simulations of the system-level architecture composed of two CSNTC modules (SYSTEM_MODEL). Performance of the model in the execution of the three tasks in each of the three kinds of simulations. The grid on the left shows the trajectories in the SAME test conditions, while the one on the right shows the trajectories in the DIFF test conditions. Within each grid the left column shows the performance in the BASELINE simulations. In both cases the trajectories of the arm follow the target with a very small error. Within each grid the centre column shows the performance in the PARTIAL_LESION simulations. In both cases, the error increases. All trajectories are centred on the target shapes. Within each grid the right column shows the performance in the FULL_LESION simulations. In both cases the shape of the trajectories is completely lost in the reproduction. In the SAME condition, the only information maintained is the position of the target shape in space
The simulation was divided in three phases:
First training phase Cortico-striatal weights from the high-level motor cortical module to the primary motor striatum were updated using the unsupervised learning rule described in Sect. 4.5. During this phase, random noise was added to the striatal input of the primary motor module so that selection could happen randomly at the beginning. This phase lasted 30 sessions.
Second training phase The read-out weights from the primary motor cortical module to the subcortical actuators were updated in a supervised manner, as described in Sect. 4.4. This phase lasted three sessions in the batch learning version, and 1000 sessions in the online version.
Test phase The behaviour was tested in two different conditions. In one condition (SAME), the temporal pattern of binary vector signals reaching the higher-level motor module was the same as in the training phases. In the other condition (DIFF), the last bit within each group was switched on to encode the desired trajectory, instead of the first bit of the same group as done in the previous phases (see Fig. 4, on the left). Thus the binary vector signals in the two conditions were orthogonal, and their belonging to the same groups of information could be optimally detected only through reward-based clustering (here abstracted with hardwired connections) at the level of the striatum of the high-level CSNTC module.
Three kinds of simulations were performed to show the functions played by the different components of the model.
BASELINE All connections in the architecture were intact. This represented the control condition of the experiment.
PARTIAL_LESION The cortico-striatal connection between the high-level motor module and the primary motor module was lesioned before the learning processes. This condition tested the hypothesis that the task information coming from the high-level motor module must be passed to the striatum of the primary motor module in order to optimize the selection of the right cortical dynamics.
FULL_LESION Both the cortico-striatal connections between the prefrontal/associative input and the high-level motor module and the cortico-striatal connection between the high-level motor module and the primary motor module were lesioned before the learning processes. This condition tested the hypothesis that, in the model, the task information is almost completely lost at the level of primary motor control when it is not filtered by the basal ganglia.
Simulations of the system-level architecture composed of two CSNTC modules (SYSTEM_MODEL). NRMSE means of the BASELINE, PARTIAL_LESION and FULL_LESION simulation groups. Top Means and standard errors of the BASELINE and PARTIAL_LESION simulations are compared in the SAME and DIFF test conditions. Bottom Means and standard errors of the BASELINE and FULL_LESION simulations are compared similarly in the SAME and DIFF test conditions. Each set of simulations was composed of 100 simulations with different random number generator seeds. Note the different y-axis scale of the two graphs
Each kind of simulation was repeated 100 times, each time setting a different random number generator seed so that each simulation could be considered a test on a different individual.
In the case of the BASELINE simulations, when the binary vector signal about the task was the same as in the training phase (SAME condition), the model correctly reproduced the requested behaviour (Fig. 10, left column). Furthermore, in the DIFF test condition, the measured error (NRMSE) was only slightly higher than in the SAME condition (see Fig. 11).
In the case of the PARTIAL_LESION simulation, the resulting behaviour during the test phase was partially impaired (Fig. 10, centre columns of graphs in the SAME and DIFF blocks). The overall information about the position in space of the target trajectory shape was maintained, as well as a partial amount of information about the shape itself. In this case, the difference between the measured errors in the SAME and DIFF conditions was higher (Fig. 11). Indeed, a two-way ANOVA revealed a significant effect of the interaction between the presence of motor lesions (LESION) and the two conditions (TEST) (TEST \(F(1,393)=7.53\), \(p=0.006\); LESION \(F(1,393)=796.301\), \(p<0.001\); \(TEST\times LESION\) \(F(1,393)=18.7\), \(p<0.001\)).
In the case of the FULL_LESION simulations, the resulting behaviour during the test phase was drastically altered (Fig. 10, right column). The overall information about the position in space of the target trajectory shape was still maintained in the SAME condition, but almost no information about the shape itself was retained. In the DIFF condition, no information was maintained at all, not even that regarding the position in space of the target trajectory shape. In this case, the difference between the measured errors in the SAME and DIFF conditions became dramatic (Fig. 11). A two-way ANOVA revealed a significant effect of the interaction between the presence of motor lesions (LESION) and the two conditions (TEST) (TEST \(F(1,393)=2.71\), \(p<0.001\); LESION \(F(1,393)=7.03\), \(p< 0.001\); \(TEST\times LESION\) \(F(1,393)=10.872\), \(p<0.001\)).
A video showing the SAME test conditions for the three simulation types is given in Online Resource 3.
The simulations presented here show that the disinhibitory process implemented by the basal ganglia can be a reliable mechanism for the selection of the internal dynamics within a single cortical module. In particular, the simulation in Sect. 5.1 shows that basal ganglia disinhibition is sufficient to select differential cortical dynamics. These dynamics can be read out to control the activity of motor actuators, exploiting the ability of dynamical reservoirs to learn sequences and generalize the relations between the input and output spaces, as shown by the ability to generalize motor trajectories over translation and scale.
The simulation in Sect. 5.1 also shows that the control of this selection can be suitably modulated by striatal dopamine. In the simulation, the dopaminergic efflux regulates the efficacy of the initiation, termination and lock-in of basal ganglia selection. As a result, during the test phase the cortical dynamics are switched to a "neutral" state when the striatal dopamine level is low. When the striatal dopamine level is set higher, the class of cortical dynamics is determined by selection in the basal ganglia so that the correct read-out signals are produced. Such a role of dopamine in motor control is consistent with what is observed in Parkinsonian patients. Indeed, these patients show deficits in learning new motor abilities (Muslimovic et al. 2007; Abbruzzese et al. 2009; Espay et al. 2011), as well as in skilled movements such as handwriting and graphical tasks (Rosenblum et al. 2013; Tucha et al. 2006), due to a reduced efficacy of basal ganglia action caused by low levels of dopamine. Notably, micrographia, a peculiar handwriting deficit shown by Parkinsonian patients (McLennan et al. 1972; Kim et al. 2005; Jankovic 2008; Ma et al. 2013), has been qualitatively reproduced by the model described here when the external afferents to the striatal layers are partly or totally lesioned (third set of simulations, Fig. 10).
The simulations also show that basal ganglia disinhibition can play several roles. First, basal ganglia disinhibition potentiates how cortical modules sparsify the input in time and space. Sparsification in time and space is a fundamental property of reservoirs: thanks to this property, reservoir networks allow a linear solution of problems that are not linearly separable in the original input space. In the model, the striatum filters the coarse-grained information of the input. Based on this coarse information, basal ganglia disinhibition persistently enhances the dynamics of different cortical sub-populations (Fig. 7). This focussed enhancement amplifies the sparsification processes of the cortical module. Based on this strengthened sparsification, the cortical module can learn multiple, radically different mappings between input and output signals while limiting possible interference effects. The third set of simulations illustrates this enhancing effect of basal ganglia disinhibition. The reproduction error measured in the SAME test condition of the PARTIAL_LESION simulation is significantly higher than the error in the SAME test condition of the BASELINE simulation. This is clear evidence that the primary motor cortical module, without the disinhibition by the primary motor basal ganglia, fails to sufficiently sparsify the input so as to suitably map it to the target movement trajectory. Instead, the motor cortex with the enhancement of the basal ganglia can learn all the target trajectories without interference, in correspondence with the coarse information related to the different movements.
Second, basal ganglia disinhibition preserves coarse-grained information throughout cortico-cortical pathways. This property derives from the same enhancement effect discussed above. When a cortical subpopulation is enhanced, a strong mark is impressed on the dynamics of the cortical module. This mark can be easily exploited by both cortico-cortical and cortico-striatal learning processes when information traverses multiple CSNTC modules. The third set of simulations shows the importance of this property. The reproduction error measured in the DIFF test condition of the FULL_LESION simulation is dramatically higher than the error in the other simulations, resulting in a completely disrupted behaviour. This dramatic effect is due to the impairment of the striatal activity of the high-level motor module, which prevents coarse-grained information about the task from reaching the primary motor cortex.
Comparison with other models
The model presented here can be compared with the main computational hypotheses proposed in the literature to describe the functional interaction between cortex and basal ganglia by appealing, as done here, to dynamical concepts. The models described by Wickens et al. (1994), Houk and Wise (1995), Beiser and Houk (1998) and Frank et al. (2001) share two common ideas. First, the activity of a cortical assembly or column is bistable, switching between a lower and a higher state. Second, the basal ganglia select which column to switch on through disinhibition based on a striatal internal competition. Wickens et al. (1994), starting from the Hebbian hypothesis of cell assemblies, proposed that the control of motor programs is implemented by the cortex through the ignition of cortical assemblies. In their model, when cells belonging to an assembly are activated over a threshold, a reaction chain leads to the ignition of all the other cells in the same assembly. This process ends with a stable activation of the whole assembly that is then sufficient to trigger a motor program. The role ascribed to the selection process of the basal ganglia is to differentially amplify the activation of cortical assemblies. In the authors' hypothesis, as cortical assemblies reaching the striatum partially overlap, learning at the level of the cortico-striatal connections links the activation of different assemblies to one another, allowing the triggering of sequences of motor programs. Houk and Wise (1995) described a localistic firing-rate model of the interaction between thalamo-cortical loops and the basal ganglia in which the striatum acts as a context detector, linking motor behaviours to the right contextual patterns. Context is given by both the activity of the cortical column that is in a loop with the striatal unit and by the activity of other cortical columns. Striatal functioning is based on a winner-take-all mechanism. The main feature of the model is that a temporary activation of a striatal unit produces a switch to a permanent higher activation of the target cortical column, which thus instantiates a memory of the context detected by the striatum. Beiser and Houk (1998) further investigated the computational hypothesis proposed by Houk and Wise (1995). They showed that a composition of such cortico-basal ganglia loops, in which the cortico-striatal connectivity is randomly generated, produces responses that are uniquely coupled to the different sequences in which the cues are presented. Frank et al. (2001) and O'Reilly and Frank (2006) described a computational model of working memory based on the prefrontal cortex and basal ganglia (the PBWM model) that is also built on the two principles described above. In the PBWM model, cortical modules are implemented as attractor networks whose dynamics are modified through an algorithm based on both Hebbian and error-driven learning. Basal ganglia selectively gate inputs to the cortical modules through disinhibition. When a channel is selected, the target cortical module receives external inputs that possibly drive the network to a new attractor state, whereas when the channel is inhibited the previous attractor state is maintained. In contrast to the previous models, the cortical networks of the PBWM model can store and learn several attractor states. This feature, combined with the temporally selective gating of the basal ganglia, allows the model to solve complex working memory tasks.
This model, like some of the others described above, is meant to describe working memory in the prefrontal cortex rather than motor control in the motor cortical areas, although, as also noted by the authors, working memory and motor control rely on similar principles. This contiguity also emerges at the anatomical level, where the microstructure of the interaction between the prefrontal cortex and the ventro-medial basal ganglia is the same as that between the motor cortices and the dorsal basal ganglia. Notwithstanding their power, none of the four models above gives a full account of both the dynamic nature and integration of high-level motor control (selection, initiation, modulation and termination of movements) and the actual implementation of the motor programs.
A different model was proposed by Dominey (1995), which introduced the first idea of a cortical dynamical reservoir (see Sect. 2.1) in the implementation of a prefrontal cortex module that controls the basal ganglia disinhibition of the superior colliculus for saccade generation. In this model the temporal dynamics of the cortical module allow the system to learn, through reinforcement learning, to control oculomotor behaviour. Unlike in the model presented here, however, selection in the basal ganglia is not implemented and the re-entrant interaction between the cortex and the basal ganglia is not reproduced.
The model presented here is consistent with the works described above because it implements both context detection by the basal ganglia, here in a biologically plausible way, and maintenance of cortical states through the release of thalamo-cortical loops. However, our model also introduces important novelties with respect to those models. First, it merges the property of maintaining a memory of the current state (Wickens et al. 1994; Houk and Wise 1995; Beiser and Houk 1998; Frank et al. 2001) with the property of producing an internal dynamics in response to the input (Dominey 1995), so that the evidence of complex neural activity can be reconciled with the role of the frontal cortex in working memory, in line with the duality of electrophysiological data in which both maintenance of activity patterns (Georgopoulos et al. 1982; Scott 2003) and complex temporal dynamics (Hatsopoulos et al. 2007; Afshar et al. 2011; Churchland et al. 2012) are found in the same cortical circuits. Second, it implements the interaction between dopamine-based basal ganglia selection and cortical activity in a biologically plausible way. Thus our model explains in detail how motor programs are selected and performed.
We described a model that proposes a hypothesis on the mechanisms of interaction between the cortex and the basal ganglia. The model was built by integrating reservoir computing, as a model of cortex, and cumulative competition leading to disinhibition, as a model of the basal ganglia. The model shows that selection by the basal ganglia can control cortical activity by drastically changing its dynamics. In particular, selection substantially improves the sparsification processes within the cortex. It also explains how cortical activity can transiently maintain information while producing complex temporal patterns of activation. This is made possible by the basal ganglia imposing specific dynamics on the selected cortical subpopulations.
Notwithstanding these strengths, the model does not explain at least two important issues, representing two possible starting points for future work. First, all simulations presented here use supervised learning to update cortical read-out connections, so the model does not explain how such connections might be acquired with learning processes typical of cortex, in particular Hebbian learning (Arai et al. 2011; Koch et al. 2013) and possibly trial-and-error learning (Hoerzer et al. 2014). We restricted our implementation to supervised learning because of its widespread use in reservoir computing, and the consequent availability of algorithms to solve technical problems, given that our focus was on the system-level interaction between the basal ganglia and cortex and not on learning processes. Unsupervised and error-driven learning in reservoir computing have started to be studied only recently (see Legenstein et al. 2009; Hoerzer et al. 2014 for a promising solution to reward-driven learning).
The second issue that the model does not address involves the effects of closed-loop interactions with the environment on learning. Realistic motor control operates within a closed-loop system involving the brain, the body and the environment, where the motor acts exerted by the animal generate continuous feedback from the environment. This feedback is readily integrated by the brain to control and modulate the motor acts themselves. For example, the activity of the primary motor cortex is continuously modulated by somatosensory information (here abstracted with a sinusoidal input), or by motor efferent copies, during movement performance. Modelling the effects of this modality of interaction with the environment is very important as it produces relevant effects on the nature of motor control and performance. Some solutions have been developed with the aim of managing online feedback in reservoir networks while updating the read-out weights (Steil 2004; Sussillo and Abbott 2009). Simulating the model with a more realistic input will be a goal of future research.
An interesting issue that can be investigated with the model is the relation between learning equilibrium points and learning complex motor trajectories. Are these two different possible modalities of motor learning or are they interrelated? Do they share the same neural substrates? Section 5 highlighted that the model, exploiting the properties of dynamical reservoirs, can learn to reach an equilibrium-point posture (Feldman 1986; Bizzi et al. 1992; Caligiore et al. 2014) alongside learning rhythmic complex trajectories. Furthermore, the learning of constant read-out activities should be easier to achieve with a reward-driven learning algorithm. Thus, we speculate that the initial learning of a complex trajectory might be guided by a reward-based acquisition of a few intermediate key points relevant for the whole motor trajectory. These via points might then scaffold the acquisition of the final accurate trajectory, in particular on the basis of cortico-cortical learning processes and possibly the contribution of the cerebellum (Wolpert et al. 1998; Shadmehr and Krakauer 2008; Caligiore et al. 2013).
These studies also focus on the idea that learning to select sequences of channels allows the acquisition of complex motor actions.
Abbruzzese G, Trompetto C, Marinelli L (2009) The rationale for motor learning in Parkinson's disease. Eur J Phys Rehabil Med 45(2):209–214. http://www.ncbi.nlm.nih.gov/pubmed/19377414
Afshar A, Santhanam G, Yu BM, Ryu SI, Sahani M, Shenoy KV (2011) Single-trial neural correlates of arm movement preparation. Neuron 71(3):555–564. doi:10.1016/j.neuron.2011.05.047
Alexander GE, DeLong MR, Strick PL (1986) Parallel organization of functionally segregated circuits linking basal ganglia and cortex. Annu Rev Neurosci 9:357–381. doi:10.1146/annurev.ne.09.030186.002041
Amari S (1977) Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern 27(2):77–87. doi:10.1007/BF00337259
Arai N, Müller-Dahlhaus F, Murakami T, Bliem B, Lu MK, Ugawa Y, Ziemann U (2011) State-dependent and timing-dependent bidirectional associative plasticity in the human sma-m1 network. J Neurosci 31(43):15,376–15,383. doi:10.1523/JNEUROSCI.2271-11.2011
Ashby GF, Ennis JM, Spiering BJ (2007) A neurobiological theory of automaticity in perceptual categorization. Psychol Rev 114(3):632–656. doi:10.1037/0033-295X.114.3.632
Baldassarre G, Caligiore D, Mannella F (2013) The hierarchical organisation of cortical and basal-ganglia systems: a computationally-informed review and integrated hypothesis. In: Baldassarre G, Mirolli M (eds) Computational and robotic models of the hierarchical organisation of behaviour. Springer, Berlin, pp 237–270
Bar-Gad I, Morris G, Bergman H (2003) Information processing, dimensionality reduction and reinforcement learning in the basal ganglia. Prog Neurobiol 71(6):439–473. doi:10.1016/j.pneurobio.2003.12.001
Barth AL (2002) Differential plasticity in neocortical networks. Physiol Behav 77(4–5):545–550. doi:10.1016/S0031-9384(02)00932-0
Beiser DG, Houk JC (1998) Model of cortical-basal ganglionic processing: encoding the serial order of sensory events. J Neurophysiol 79:3168–88. http://www.ncbi.nlm.nih.gov/pubmed/9636117
Berns GS, Sejnowski TJ (1998) A computational model of how the basal ganglia produce sequences. J Cogn Neurosci 10(1):108–121. http://www.ncbi.nlm.nih.gov/pubmed/9526086
Bizzi E, Hogan N, Mussa-Ivaldi FA, Giszter S (1992) Does the nervous system use equilibrium-point control to guide single and multiple joint movements? Behav Brain Sci 15:603–613. http://www.ncbi.nlm.nih.gov/pubmed/23302290
Bogacz R, Gurney K (2007) The basal ganglia and cortex implement optimal decision making between alternative actions. Neural Comput 19(2):442–477. doi:10.1162/neco.2007.19.2.442
Buonomano DV, Merzenich MM (1998) Cortical plasticity: from synapses to maps. Ann Rev Neurosci 21:149–186. doi:10.1146/annurev.neuro.21.1.149
Buys EJ, Lemon RN, Mantel GW, Muir RB (1986) Selective facilitation of different hand muscles by single corticospinal neurones in the conscious monkey. J Physiol 381:529–549. http://www.ncbi.nlm.nih.gov/pubmed/3625544
Caligiore D, Pezzulo G, Miall RC, Baldassarre G (2013) The contribution of brain sub-cortical loops in the expression and acquisition of action understanding abilities. Neurosci Biobehav Rev 37:2504–2515. doi:10.1016/j.neubiorev.2013.07.016
Caligiore D, Parisi D, Baldassarre G (2014) Integrating reinforcement learning, equilibrium points and minimum variance to understand the development of reaching: a computational model. Psychol Rev 121(3):389–421. doi:10.1037/a0037016
Cheney PD, Fetz EE (1985) Comparable patterns of muscle facilitation evoked by individual corticomotoneuronal (cm) cells and by single intracortical microstimuli in primates: evidence for functional groups of cm cells. J Neurophysiol 53(3):786–804. http://www.ncbi.nlm.nih.gov/pubmed/2984354
Churchland MM, Cunningham JP, Kaufman MT, Ryu SI, Shenoy KV (2010) Cortical preparatory activity: representation of movement or first cog in a dynamical machine? Neuron 68(3):387–400. doi:10.1016/j.neuron.2010.09.015
Churchland MM, Cunningham JP, Kaufman MT, Foster JD, Nuyujukian P, Ryu SI, Shenoy KV (2012) Neural population dynamics during reaching. Nature 487(7405):51–56. doi:10.1038/nature11129
Ciancio AL, Zollo L, Guglielmelli E, Caligiore D, Baldassarre G (2013) The role of learning and kinematic features in dexterous manipulation: a comparative study with two robotic hands. Int J Adv Robot Syst. doi:10.5772/56479
Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20(3):273–297. doi:10.1023/A:1022627411411
Coumans E (2013) Bullet quickstart. 2nd edn. https://github.com/svn2github/bullet/blob/master/trunk/docs/BulletQuickstart.pdf
Dominey PF (1995) Complex sensory-motor sequence learning based on recurrent state representation and reinforcement learning. Biol Cybern 73(3):265–274. http://www.ncbi.nlm.nih.gov/pubmed/7548314
Dominey PF (2013) Recurrent temporal networks and language acquisition—from corticostriatal neurophysiology to reservoir computing. Front Psychol 4:500. doi:10.3389/fpsyg.2013.00500
Erlhagen W, Schoner G (2002) Dynamic field theory of movement preparation. Psychol Rev 109:545–571. doi:10.1037/0033-295X.109.3.545
Espay AJ, Giuffrida JP, Chen R, Payne M, Mazzella F, Dunn E, Vaughan JE, Duker AP, Sahay A, Kim SJ, Revilla FJ, Heldman DA (2011) Differential response of speed, amplitude, and rhythm to dopaminergic medications in Parkinson's disease. Mov Disord 26(14):2504–2508. doi:10.1002/mds.23893
Evarts EV (1968) Relation of pyramidal tract activity to force exerted during voluntary movement. J Neurophysiol 31(1):14–27. http://www.ncbi.nlm.nih.gov/pubmed/4966614
Feldman AG (1986) Once more on the equilibrium-point hypothesis (\(\lambda \) model) for motor control. J Mot Behav 18:17–54. http://www.ncbi.nlm.nih.gov/pubmed/15136283
Fiore VG, Mannella F, Mirolli M, Latagliata EC, Valzania A, Cabib S, Dolan RJ, Puglisi-Allegra S, Baldassarre G (2014) Corticolimbic catecholamines in stress: a computational model of the appraisal of controllability. Brain Struct Funct 1–15. doi:10.1007/s00429-014-0727-7
Frank MJ, Loughry B, O'Reilly RC (2001) Interactions between frontal cortex and basal ganglia in working memory: a computational model. Cogn Affect Behav Neurosci 1(2):137–160. http://www.ncbi.nlm.nih.gov/pubmed/12467110
Fu M, Zuo Y (2011) Experience-dependent structural plasticity in the cortex. Trends Neurosci 34(4):177–187. doi:10.1016/j.tins.2011.02.001
Georgopoulos AP, Kalaska JF, Caminiti R, Massey JT (1982) On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex. J Neurosci 2(11):1527–1537. http://www.ncbi.nlm.nih.gov/pubmed/7143039
Georgopoulos AP, Schwartz AB, Kettner RE (1986) Neuronal population coding of movement direction. Science 233(4771):1416–1419. doi:10.1126/science.3749885
Golub MD, Yu BM, Schwartz AB, Chase SM (2014) Motor cortical control of movement speed with implications for brain–machine interface control. J Neurophysiol 112(2):411–429. doi:10.1152/jn.00391.2013
Graybiel AM (1998) The basal ganglia and chunking of action repertoires. Neurobiol Learn Mem 70(1–2):119–136. doi:10.1006/nlme.1998.3843
Graziano MSA, Aflalo TN (2007) Mapping behavioral repertoire onto the cortex. Neuron 56(2):239–251. doi:10.1016/j.neuron.2007.09.013
Gurney K, Prescott T, Redgrave P (2001) A computational model of action selection in the basal ganglia. i. a new functional anatomy. Biol Cybern 84:401–410. doi:10.1007/PL00007984
Gurney K, Prescott TJ, Wickens JR, Redgrave P (2004) Computational models of the basal ganglia: from robots to membranes. Trends Neurosci 27(8):453–459. doi:10.1016/j.tins.2004.06.003
Haber SN (2003) The primate basal ganglia: parallel and integrative networks. J Chem Neuroanat 26(4):317–330. doi:10.1016/j.jchemneu.2003.10.003
Hatsopoulos N, Joshi J, O'Leary JG (2004) Decoding continuous and discrete motor behaviors using motor and premotor cortical ensembles. J Neurophysiol 92(2):1165–1174. doi:10.1152/jn.01245.2003
Hatsopoulos NG, Xu Q, Amit Y (2007) Encoding of movement fragments in the motor cortex. J Neurosci 27(19):5105–5114. doi:10.1523/JNEUROSCI.3570-06.2007
Heiberg T, Kriener B, Tetzlaff T, Casti A, Einevoll GT, Plesser HE (2013) Firing-rate models capture essential response dynamics of LGN relay cells. J Comput Neurosci 35(3):359–375. doi:10.1007/s10827-013-0456-6
Hoerzer GM, Legenstein R, Maass W (2014) Emergence of complex computational structures from chaotic neural networks through reward-modulated hebbian learning. Cereb Cortex 24(3):677–690. doi:10.1093/cercor/bhs348
Houk JC, Wise SP (1995) Distributed modular architectures linking basal ganglia, cerebellum, and cerebral cortex: their role in planning and controlling action. Cereb Cortex 5(2):95–110. doi:10.1093/cercor/5.2.95
Humphries MD, Gurney KN (2002) The role of intra-thalamic and thalamocortical circuits in action selection. Network 13:131–156. doi:10.1080/net.13.1.131.156
Humphries MD, Stewart RD, Gurney KN (2006) A physiologically plausible model of action selection and oscillatory activity in the basal ganglia. J Neurosci 26(12):921–942. doi:10.1523/JNEUROSCI.3486-06.2006
Hunter JD (2007) Matplotlib: a 2D graphics environment. Comput Sci Eng 9(3):90–95. doi:10.1109/MCSE.2007.55
Ijspeert A, Crespi A, Ryczko D, Cabelguen J (2007) From swimming to walking with a salamander robot driven by a spinal cord model. Science 315:1416–1420
Ijspeert AJ (2008) Central pattern generators for locomotion control in animals and robots: a review. Neural Netw 21(4):642–653. doi:10.1016/j.neunet.2008.03.014
Jaeger H (2002) Adaptive nonlinear system identification with echo state networks. In: Advances in neural information processing systems. MIT Press, Cambridge, MA, pp 593–600. http://papers.nips.cc/paper/2318-adaptive-nonlinear-system-identification-with-echo-state-networks.pdf
Jaeger H, Haas H (2004) Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication. Science 304(5667):78–80. doi:10.1126/science.1091277
Jaeger H, Lukosevicius M, Popovici D, Siewert U (2007) Optimization and applications of echo state networks with leaky-integrator neurons. Neural Netw 20(3):335–352. http://www.ncbi.nlm.nih.gov/pubmed/17517495
Jankovic J (2008) Parkinson's disease: clinical features and diagnosis. J Neurol Neurosurg Psychiatry 79(4):368–376. doi:10.1136/jnnp.2007.131045
Kakei S, Hoffman DS, Strick PL (1999) Muscle and movement representations in the primary motor cortex. Science 285(5436):2136–2139. doi:10.1126/science.285.5436.2136
Kim EJ, Lee BH, Park KC, Lee WY, Na DL (2005) Micrographia on free writing versus copying tasks in idiopathic Parkinson's disease. Parkinsonism Relat Disord 11(1):57–63. doi:10.1016/j.parkreldis.2004.08.005
Koch G, Ponzo V, Lorenzo FD, Caltagirone C, Veniero D (2013) Hebbian and anti-hebbian spike-timing-dependent plasticity of human cortico-cortical connections. J Neurosci 33(23):9725–9733. doi:10.1523/JNEUROSCI.4988-12.2013
Legenstein RA, Chase SM, Schwartz AB, Maass W (2009) Functional network reorganization in motor cortex can be explained by reward-modulated hebbian learning. In: Advances in neural information processing systems 22: 23rd annual conference on neural information processing systems 2009. Proceedings of a meeting held 7–10 December 2009, Vancouver, British Columbia, Canada., pp 1105–1113. http://books.nips.cc/papers/files/nips22/NIPS2009_0211.pdf
Lukoševičius M, Jaeger H (2009) Reservoir computing approaches to recurrent neural network training. Comput Sci Rev 3(3):127–149. doi:10.1016/j.cosrev.2009.03.005
Luppino G, Rizzolatti G (2000) The organization of the frontal motor cortex. News Physiol Sci 15(19):219–224
Ma HI, Hwang WJ, Chang SH, Wang TY (2013) Progressive micrographia shown in horizontal, but not vertical, writing in Parkinson's disease. Behav Neurol 27(2):169–174. doi:10.3233/BEN-120285
Maass W, Natschläger T, Markram H (2002) Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Comput 14(11):2531–2560. http://www.neurocolt.com/tech_reps/2001/113.pdf
Mannella F (2013) CENSLIB—computational embodied neuroscience simulation library. Online documentation. http://censlib.sourceforge.net/
Mathai A, Smith Y (2011) The corticostriatal and corticosubthalamic pathways: two entries, one target. so what? Front Syst Neurosci 5:64. doi:10.3389/fnsys.2011.00064
Mattia M, Pani P, Mirabella G, Costa S, Giudice PD, Ferraina S (2013) Heterogeneous attractor cell assemblies for motor planning in premotor cortex. J Neurosci 33(27):11,155–11,168. doi:10.1523/JNEUROSCI.4664-12.2013
McLennan JE, Nakano K, Tyler HR, Schwab RS (1972) Micrographia in Parkinson's disease. J Neurol Sci 15(2):141–152. doi:10.1016/0022-510X(72)90002-0
Middleton FA, Strick PL (2000) Basal ganglia and cerebellar loops: motor and cognitive circuits. Brain Res Rev 31(2–3):236–250. doi:10.1016/S0165-0173(99)00040-5
Mink JW (1996) The basal ganglia: focused selection and inhibition of competing motor programs. Prog Neurobiol 50(4):381–425. doi:10.1016/S0301-0082(96)00042-1
Muslimovic D, Post B, Speelman JD, Schmand B (2007) Motor procedural learning in Parkinson's disease. Brain 130(Pt 11):2887–2897. doi:10.1093/brain/awm211
Nordlie E, Tetzlaff T, Einevoll GT (2010) Rate dynamics of leaky integrate-and-fire neurons with strong synapses. Front Comput Neurosci 4:149. doi:10.3389/fncom.2010.00149
Oja E (1982) A simplified neuron model as a principal component analyzer. J Math Biol 15(3):267–273. doi:10.1007/BF00275687
O'Reilly RC, Frank MJ (2006) Making working memory work: a computational model of learning in the prefrontal cortex and basal ganglia. Neural Comput 18(2):283–328. doi:10.1162/089976606775093909
Orlovsky GN, Deliagina TG, Grillner S (1999) Neuronal control of locomotion: from mollusc to man. Oxford University Press, New York
Parent A, Hazrati LN (1995) Functional anatomy of the basal ganglia. i. the cortico-basal ganglia-thalamo-cortical loop. Brain Res Rev 20(1):91–127. doi:10.1016/0165-0173(94)00007-C
Ponzi A, Wickens J (2010) Sequentially switching cell assemblies in random inhibitory networks of spiking neurons in the striatum. J Neurosci 30(17):5894–5911. doi:10.1523/JNEUROSCI.5540-09.2010
R Development Core Team (2008) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. http://www.R-project.org, ISBN 3-900051-07-0
Redgrave P, Prescott TJ, Gurney K (1999) The basal ganglia: a vertebrate solution to the selection problem? Neuroscience 89(4):1009–1023. doi:10.1016/S0306-4522(98)00319-4
Rigotti M, Rubin DBD, Wang XJ, Fusi S (2010) Internal representation of task rules by recurrent dynamics: the importance of the diversity of neural responses. Front Comput Neurosci 4:24. doi:10.3389/fncom.2010.00024
Romanelli P, Esposito V, Schaal DW, Heit G (2005) Somatotopy in the basal ganglia: experimental and clinical evidence for segregated sensorimotor channels. Brain Res Rev 48(1):112–128. doi:10.1016/j.brainresrev.2004.09.008
Rosenblum S, Samuel M, Zlotnik S, Erikh I, Schlesinger I (2013) Handwriting as an objective tool for Parkinson's disease diagnosis. J Neurol 260(9):2357–2361. doi:10.1007/s00415-013-6996-x
Sanderson C (2010) Armadillo: An open source C++ linear algebra library for fast prototyping and computationally intensive experiments. Tech. rep., NICTA. http://arma.sourceforge.net/armadillo_nicta_2010.pdf
Scott SH (2003) The role of primary motor cortex in goal-directed movements: insights from neurophysiological studies on non-human primates. Curr Opin Neurobiol 13(6):671–677. doi:10.1016/j.conb.2003.10.012
Sergio LE, Hamel-Pâquet C, Kalaska JF (2005) Motor cortex neural correlates of output kinematics and kinetics during isometric-force and arm-reaching tasks. J Neurophysiol 94(4):2353–2378. doi:10.1152/jn.00989.2004
Shadmehr R, Krakauer JW (2008) A computational neuroanatomy for motor control. Exp Brain Res 185:359–381. doi:10.1007/s00221-008-1280-5
Shine JM, Shine R (2014) Delegation to automaticity: the driving force for cognitive evolution? Front Neurosci 8:90. doi:10.3389/fnins.2014.00090
Steil JJ (2004) Backpropagation-decorrelation: online recurrent learning with O (N) complexity. In: Neural networks, 2004. Proceedings. 2004 IEEE international joint conference on, IEEE, vol 2, pp 843–848. doi:10.1109/IJCNN.2004.1380039
Steil JJ (2007) Online reservoir adaptation by intrinsic plasticity for backpropagation-decorrelation and echo state learning. Neural Netw 20(3):353–364. doi:10.1016/j.neunet.2007.04.011
Stender T (2007) A generalization of imaginary parts of eigenvalues for matrices: Chain rotation numbers. Linear Algebra Appl 426(1):53–70. doi:10.1016/j.laa.2007.03.035
Sussillo D, Abbott LF (2009) Generating coherent patterns of activity from chaotic neural networks. Neuron 63(4):544–557. doi:10.1016/j.neuron.2009.07.018
Tucha O, Mecklinger L, Thome J, Reiter A, Alders GL, Sartor H, Naumann M, Lange KW (2006) Kinematic analysis of dopaminergic effects on skilled handwriting movements in Parkinson's disease. J Neural Transm 113(5):609–623. doi:10.1007/s00702-005-0346-9
Turner RS, Desmurget M (2010) Basal ganglia contributions to motor control: a vigorous tutor. Curr Opin Neurobiol 20(6):704–716. doi:10.1016/j.conb.2010.08.022
Vogel CR (2002) Computational methods for inverse problems, chap 1. SIAM, Philadelphia , pp 1–12. doi:10.1137/1.9780898717570.Ch1
Wang W, Chan SS, Heldman DA, Moran DW (2010) Motor cortical representation of hand translation and rotation during reaching. J Neurosci 30(3):958–962. doi:10.1523/JNEUROSCI.3742-09.2010
Wang XJ (2008) Decision making in recurrent neuronal circuits. Neuron 60(2):215–234. doi:10.1016/j.neuron.2008.09.034
Wickens J, Hyland B, Anson G (1994) Cortical cell assemblies: a possible mechanism for motor programs. J Mot Behav 26(2):66–82. doi:10.1080/00222895.1994.9941663
Wolpert DM, Miall RC, Kawato M (1998) Internal models in the cerebellum. Trends Cogn Sci 2:338–347. doi:10.1016/S1364-6613(98)01221-2
This research has received funds from the European Commission under the 7th Framework Programme (FP7/2007-2013), ICT Challenge 2 "Cognitive Systems and Robotics", project "IM-CLeVeR - Intrinsically Motivated Cumulative Learning Versatile Robots", Grant Agreement No. ICT-IP-231722. We thank Daniele Caligiore, Vieri Giuliano Santucci, Valerio Sperati, Giovanni Pezzulo and Bruno Castro da Silva for feedback and discussion on this work.
Laboratory of Computational Embodied Neuroscience, Institute of Cognitive Sciences and Technologies, National Research Council (CNR-ISTC-LOCEN), Via San Martino della Battaglia 44, 00185, Rome, Italy
Francesco Mannella & Gianluca Baldassarre
Correspondence to Francesco Mannella.
Supplementary material 1 (mpg 8196 KB)
Supplementary material 2 (mpg 28118 KB)
Supplementary material 4 (pdf 282 KB)
Appendix 1: Optimizing the weights of leaky-integrator reservoirs
Generating suitable connection weights of a reservoir network to optimize the learning of the read-out weights for a given task is not always easy. In particular, there is no systematic way to modulate the temporal variability of the network response. The richness of the temporal response of a reservoir network to an input is the founding element of its computational expressiveness. Indeed, networks with richer dynamics can reproduce nonlinear temporal trajectories in their read-out units with high fidelity. In other words, the more the states that the network visits in time are dissimilar from one another (low correlation between states), the more complex the output function that the network can learn to reproduce can be. Here we describe a heuristic algorithm that we have developed, and used in the models presented here, to improve the temporal variability of echo state reservoirs.
In a dynamical system of the type \(\dot{{\mathbf {x}}} = {\mathbf {W}}{\mathbf {x}}\), the properties of the matrix \({\mathbf {W}}\) determine its temporal dynamics. In particular, the real and imaginary components of the eigenvalues of \({\mathbf {W}}\) determine, respectively, the amount of infinitesimal contraction/expansion and of infinitesimal rotation in the phase space of the system (Stender 2007). Thus, a matrix with eigenvalues having high imaginary and low real components produces dynamics with high infinitesimal rotations and a low amount of contraction/expansion. Infinitesimal rotations in the dynamics correspond to low correlation between successive states, that is, richer dynamics.
It is possible to build a matrix with these properties starting from a random square matrix \({\mathbf {M}}\in \mathfrak {R}^{n\times n}\). This matrix \({\mathbf {M}}\) is equivalent to the sum of two matrices \({\mathbf {M}}_\mathrm{sim}+{\mathbf {M}}_\mathrm{skew}\), where \({\mathbf {M}}_\mathrm{sim}=\left( {\mathbf {M+M^T}}\right) /2\) is a symmetric matrix with all nonzero eigenvalues being purely real and \({\mathbf {M}}_\mathrm{skew}=\left( {\mathbf {M-M^T}}\right) /2\) is a skew-symmetric matrix with all nonzero eigenvalues being purely imaginary. Given this property, we can build a matrix \({\mathbf {M}}_\mathrm{rot}=\psi \cdot {\mathbf {M}}_\mathrm{sim}+(1-\psi )\cdot {\mathbf {M}}_\mathrm{skew}\), where the parameter \(\psi \) controls the proportion between the amplitude of the real versus the imaginary eigenvalues of \({\mathbf {M}}_\mathrm{rot}\). By setting a small value of \(\psi \) (in particular \(0<\psi <0.5\)) one obtains a matrix that determines a higher amount of rotation and a lower amount of contraction/expansion than the original matrix. We can now use this matrix to build the weight matrix of an echo state network of n units by normalizing it to a matrix \(\mathbf {M}_{norm}\) so that \( \left( 1-\epsilon \right) < \rho \left( \left( \delta t/ \tau \right) \cdot \mathbf {M}_{norm} + \left( 1 - \left( \delta t/\tau \right) \right) \cdot I \right) < 1\), where \( \rho (.)\) is the spectral radius (Jaeger et al. 2007). Figure 12 shows how a reservoir modified in this way has a richer response to an impulse during its fading dynamics.
Fig. 12 For each row of graphs, the left graphs represent the eigenvalues of the weight matrix. a An echo state network where the weight matrix is built with \(\psi = 0.2\) and \(\epsilon = 0.0001\). Notice the high variability of its dynamics while fading. b A standard echo state network (\(\psi = 0.5, \epsilon = 0.0001\))
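A minimal sketch of this construction in Python/NumPy follows. This is my own reconstruction from the verbal description above, not the authors' code; the function name and the bisection used to land the spectral radius inside the \((1-\epsilon ,1)\) band are assumptions, while the parameter names (\(\psi \), \(\delta t\), \(\tau \), \(\epsilon \)) follow the appendix.

import numpy as np

def make_reservoir_weights(n, psi=0.2, dt=0.001, tau=0.005, eps=1e-4, seed=0):
    """Build a reservoir weight matrix biased towards rotational dynamics."""
    rng = np.random.default_rng(seed)
    M = rng.standard_normal((n, n))
    M_sym = (M + M.T) / 2.0             # purely real nonzero eigenvalues
    M_skew = (M - M.T) / 2.0            # purely imaginary nonzero eigenvalues
    M_rot = psi * M_sym + (1.0 - psi) * M_skew
    a = dt / tau                        # leaky-integrator mixing coefficient
    I = np.eye(n)
    def rho(W):                         # spectral radius of the effective update matrix
        return np.max(np.abs(np.linalg.eigvals(a * W + (1.0 - a) * I)))
    # Bisect a scale factor c so that 1 - eps < rho(c * M_rot) < 1.
    lo, hi = 0.0, 1.0
    while rho(hi * M_rot) < 1.0:        # grow the bracket until rho exceeds 1
        hi *= 2.0
    for _ in range(200):
        c = 0.5 * (lo + hi)
        r = rho(c * M_rot)
        if r >= 1.0:
            hi = c
        elif r <= 1.0 - eps:
            lo = c
        else:
            break
    return c * M_rot

With \(n=300\), \(\psi =0.2\) and \(\epsilon =0.0001\) (the values used in the simulations), this yields a weight matrix whose eigenvalue spectrum is dominated by large imaginary parts, qualitatively as in Fig. 12a.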
Appendix 2: Parameters of the simulations
Numerical integration of the units of the model
Basal ganglia units
\(dt =\mathtt{0.001 }\)
\({\tau } =\mathtt{0.005 }\)
\({th} =\mathtt{0.0 }\)
\({\alpha } =\mathtt{1.0 }\)
\({bl_{D1}} =\mathtt{0.1 }\)
\({da_{D1}} =\mathtt{0.5 }\)
\(\hbox {Channels} =\mathtt{3 }\)
Thalamic units
\({dt} =\mathtt{0.001 }\)
\(\hbox {n} =\mathtt{3 }\)
Cortical units
\(n =\mathtt{300 }\)
\(\hbox {Sparseness} =\mathtt{1.0 }\)
\({\epsilon ^*} =\mathtt{0.0001 }\)
Intrinsic connections of the basal ganglia
\(\hbox {s}_{\mathrm{D1}}\)
GPi \(\mathtt{-0.1 }\) 3.0 \(\mathtt{-3.0 }\) .
GPe . 2.0 . \(\mathtt{-2.5 }\)
STN \(\mathtt{-1.5 }\) . . .
Connections between the components of the CSNTC module
\(\hbox {s}_{\mathrm{D1}}\) . . 0.3 0.8
\(\hbox {s}_{\mathrm{D2}}\) . . 0.25 0.8
STN . . 1.0 .
Tha \(-\) 30.0 . 0.4 .
c . 0.6 . .
Offline learning
Read-out weights—regression
\({\lambda ^2} =\mathtt{0.000001 }\)
Read-out weights—BPDC
\({\beta } =\mathtt{0.00001 }\)
\({\eta } =\mathtt{0.2 }\)
Cortico-striatal weights—Oja's rule
\({\eta _{sc}} =\mathtt{0.05 }\)
\({k_{WTA_{Str}}} =\mathtt{1}\)
\({k_{WTA_{Ctx}}} =\mathtt{30 }\)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Mannella, F., Baldassarre, G. Selection of cortical dynamics for motor behaviour by the basal ganglia. Biol Cybern 109, 575–595 (2015) doi:10.1007/s00422-015-0662-6
Motor action
Cortical dynamics
Reservoir computing
Cutoff frequency of RLC band pass and band stop filter
I have found on a website that the cutoff frequencies of series RLC bandpass and bandstop filters are:
$$ \omega_c = \sqrt{\left(\frac{R}{2L}\right)^2+\frac{1}{LC}}\pm\frac{R}{2L}. $$
This is the link to the website; I am asking because I have searched a lot and couldn't find this formula anywhere else.
What is the formula for calculating the cutoff frequency of a series RLC band filter?
voltage ac band-pass cutoff-frequency
Jun Seo-He
\$\begingroup\$ Do you have no access to any technical textbook (instead of websites) ? \$\endgroup\$
– LvW
\$\begingroup\$ Jun, if you google "filter RLC topology bandpass bandstop" I think you'll find a number. That doesn't mean they will teach you in a direct, custom-to-you-personally way. You will only get that from interactions with real people who can present and then listen and then respond to your questions and issues. But you cannot get that from websites. I suppose that's why you are here. But this really isn't a site where we can engage in a personal-tutor, back-and-forth dialogue. It could be done but it's not allowed. Finally, there are four topologies for BP and BS RLC filters. Not 1 or 2. \$\endgroup\$
\$\begingroup\$ Jun. I just looked. You've got lots of questions here since early May. Not a single one of them has attracted an answer. But you have tried to write sufficient initial detail in several cases to, I think, almost attract an answer. And I've seen you respond in comments and edit your questions. Together, these things suggest a daily-schedule mindset. You write what is in front of you, you deal with it for a day or two, but then outside forces drive you to the next hurdle and then your attention on the prior issue vaporizes and then you write a new question about the next hurdle. \$\endgroup\$
\$\begingroup\$ Jun, I think you have perhaps two reasonable options: (1) reduce your coursework sufficiently that you can give yourself enough schedule time to acquire the concepts ahead of you; or, (2) if you have any leisure time available then shift it into pushing lots harder on what is in front of you so that you can "catch up" and maintain the pace. I fear it's just otherwise never going to work. You have to face the facts and make adjustments. Repeating failed behavior patterns is not rational. You need to re-evaluate your plans, I think. (I may give this question your one shot, though.) \$\endgroup\$
\$\begingroup\$ @jonk some of my questions are for my electrical engineering class and some other questions are for pure hobby. \$\endgroup\$
– Jun Seo-He
RLC Passband and Stopband
Your question doesn't explicitly take note of the fact that there are four distinct RLC filters that are either band-pass or band-stop. But I'd like to list them for those interested in a slower pace:
(Schematic of the four RLC filter topologies – created using CircuitLab.)
The diagrams with the capacitor and inductor in parallel are called in-parallel bandpass or in-parallel bandstop. The diagrams with the capacitor and inductor in series are called in-series bandpass or in-series bandstop.
The in-parallel arrangement has infinite (in theory) impedance at its resonant frequency. The in-series arrangement has zero (in theory) impedance, also at its resonant frequency. Looking at #1 above, this means that all of the input gets to the output, so this is a bandpass. From #2 above, the series combination's impedance is zero, so again all of the input gets to the output and this is also a bandpass. From #3 above, the parallel combination's impedance is infinite, so none of the input gets to the output and this must be a bandstop. Finally, from #4 above, the series combination's impedance is zero and effectively grounds out the input, so none of the input gets to the output and this also must be a bandstop.
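To make the resonance argument concrete, here is the one-line check behind it (ideal elements, with \$\omega_{_0}=\frac1{\sqrt{L\,C}}\$ so that \$\omega_{_0}L=\frac1{\omega_{_0}C}=\sqrt{\frac{L}{C}}\$):

$$Z_\text{series}\left(j\,\omega_{_0}\right)=j\,\omega_{_0}L+\frac1{j\,\omega_{_0}C}=j\left(\sqrt{\frac{L}{C}}-\sqrt{\frac{L}{C}}\right)=0 \qquad Z_\text{parallel}\left(j\,\omega_{_0}\right)=\frac{j\,\omega_{_0}L\cdot\frac1{j\,\omega_{_0}C}}{j\,\omega_{_0}L+\frac1{j\,\omega_{_0}C}}=\frac{L/C}{0}\rightarrow\infty$$

That zero or infinity at \$\omega_{_0}\$ is all that distinguishes the four placements above.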
Returning to your website
Your website concludes the following two equations for case #2, above: the in-series bandpass case.
$$\begin{align*} \omega_{c_1}&=-\frac12\frac{R}{L}+\sqrt{\left(\frac12\frac{R}{L}\right)^2+\frac1{L\,C}} \\\\ \omega_{c_2}&=\frac12\frac{R}{L}+\sqrt{\left(\frac12\frac{R}{L}\right)^2+\frac1{L\,C}} \end{align*}$$
Let's pick a specific case, using reasonable part values, in order to test (or demolish) the above website equations for \$\omega_{c_1}\$ and \$\omega_{c_2}\$. I'll use \$R=1\:\text{k}\Omega\$, \$L=2.2\:\text{mH}\$, and \$C=220\:\text{nF}\$.
Let's put this into a spice program (LTspice) and see what it shows us:
From the above, using LTspice, I find numerical values of \$f_{c_1}=716.37682 \:\text{Hz}\$ and \$f_{c_2}=73.066561 \:\text{kHz}\$.
Let's now compute the results using the website's equations:
$$\begin{align*} \omega_{c_1}&=-\frac12\cdot\frac{1\:\text{k}\Omega}{2.2\:\text{mH}}+\sqrt{\left(\frac12\cdot\frac{1\:\text{k}\Omega}{2.2\:\text{mH}}\right)^2+\frac1{2.2\:\text{mH}\cdot 220\:\text{nF}}} \\\\ &\approx 4.501 \:\text{k}\frac{\text{rad}}{\text{s}} & (f_{c_1}\approx 716.3384\:\text{Hz}) \\\\ \omega_{c_2}&=\frac12\cdot\frac{1\:\text{k}\Omega}{2.2\:\text{mH}}+\sqrt{\left(\frac12\cdot\frac{1\:\text{k}\Omega}{2.2\:\text{mH}}\right)^2+\frac1{2.2\:\text{mH}\cdot 220\:\text{nF}}} \\\\ &\approx 459.05 \:\text{k}\frac{\text{rad}}{\text{s}} & (f_{c_2}\approx 73.0595\:\text{kHz}) \end{align*}$$
Remarkably close results. I think we can say that your website has the right calculations.
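If you'd rather let a few lines of code do that arithmetic, here is a small Python sketch (my own, nothing from the website beyond the formulas quoted above) using the same part values:

from math import sqrt, pi

R, L, C = 1e3, 2.2e-3, 220e-9      # the part values chosen above

a = R / (2 * L)                    # the R/(2L) term, in rad/s
root = sqrt(a**2 + 1 / (L * C))    # common square-root term

w_c1 = -a + root                   # lower cutoff, rad/s
w_c2 = a + root                    # upper cutoff, rad/s

print(w_c1 / (2 * pi))             # ~716.3 Hz
print(w_c2 / (2 * pi))             # ~73.06 kHz

Those numbers match both the hand calculation and the LTspice markers to within the rounding already discussed above.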
I want anyone reading this to read and think for yourself, but trust nothing you read. Verify everything, relentlessly. Don't trust anything you read, even when you've read it from 100 different sources and every one of them says the same thing.
Can you imagine what would have happened in science if Ted Maiman (1960) had believed all of the experimental results by many different scientists, including the results of his own team when he asked them to replicate earlier results, regarding the viability of the ruby laser? Everyone was saying it simply could not reach the critical point of lasing. When he finally examined the experimental setups used to reach those conclusions he realized some flaws in thinking and was then able to perform improved experimental designs and demonstrate that a ruby could, in fact, lase.
So trust nothing. Least of all yourself. (It's probably easiest for you to fool yourself!)
There's another feature of the cutoff frequencies. It must be the case that \$\omega_{_0}=\sqrt{\omega_{c_1}\cdot \omega_{c_2}}\$ and also that \$f_{_0}=\sqrt{f_{c_1}\cdot f_{c_2}}\$. We know that \$\omega_{_0}=\frac1{\sqrt{L\,C}}=45.\overline{45}\:\text{k}\frac{\text{rad}}{\text{s}}\$ (\$f_{_0}\approx 7.2343\:\text{kHz}\$.) I'll leave it to you to test out these ideas.
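(And once you have, here is what the values found above should give you:)

$$\sqrt{\omega_{c_1}\cdot\omega_{c_2}}=\sqrt{4.501\:\text{k}\cdot 459.05\:\text{k}}\:\frac{\text{rad}}{\text{s}}\approx 45.45\:\text{k}\frac{\text{rad}}{\text{s}}=\omega_{_0}\qquad \sqrt{f_{c_1}\cdot f_{c_2}}=\sqrt{716.34\cdot 73\,059.5}\:\text{Hz}\approx 7.234\:\text{kHz}=f_{_0}$$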
I think we have taken the time to verify the results of your website.
I also very much encourage you to perform your own calculations of the above. Perhaps I made a mistake in performing the formula, as I interpreted it from the website. Or perhaps I didn't properly transfer the formula here and got that wrong. Verify everything you see. Even when I seem to be verifying something someone else said! Leave no stone unturned.
Q and \$\zeta\$
Bandpass filters, like all 2nd order filters, will have an important variable that is a description of the shape of their behavior. (Their shapes vary and it helps to know which shape you are looking at.) This is either \$Q\$ or \$\zeta\$. Their relationship is \$Q=\frac1{2\,\zeta}\$ or \$\zeta=\frac1{2\,Q}\$.
\$\zeta\$ is the damping factor. It's special because when \$\zeta=1\$, then the system is critically-damped and that's one particular shape. If \$\zeta\gt 1\$, then the system is over-damped and that's another set of particular shapes that gradually change in particular ways away from the critically-damped shape. If \$\zeta\lt 1\$, then the system is under-damped and that's yet another set of particular shapes that gradually change in particular ways, again away from the critically-damped shape. Together, with \$\zeta\$ you can specify all of the possible shapes that exist. Just one parameter tells you all of that.
\$Q\$ is just another way of saying similar things. If you go to this Wikipedia page on the Q factor and bandwidth, you will see a statement that says, "The 2-sided bandwidth relative to a resonant frequency of \$f_{_0}\:\text{Hz}\$ is \$\frac{f_{_0}}{Q}\$." This is one of the uses for \$Q\$: \$\frac{f_{_0}}{Q}=f_{c_2}-f_{c_1}\$.
When I chose the values \$R=1\:\text{k}\Omega\$, \$L=2.2\:\text{mH}\$, and \$C=220\:\text{nF}\$, I chose them in such a way that \$Q=0.1\$ and \$\zeta=5\$.
This would be a good time to now spend a moment and verify what the above Wikipedia page says, too. Double-check their computation suggestion and see if it really works, given the values already calculated above.
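If you do, and assuming the usual series-RLC expression \$Q=\frac1{R}\sqrt{\frac{L}{C}}\$ (my addition here, not something taken from that Wikipedia page), the numbers come out as:

$$Q=\frac1{1\:\text{k}\Omega}\sqrt{\frac{2.2\:\text{mH}}{220\:\text{nF}}}=0.1\qquad \frac{f_{_0}}{Q}=\frac{7.2343\:\text{kHz}}{0.1}\approx 72.34\:\text{kHz}\approx f_{c_2}-f_{c_1}=73.0595\:\text{kHz}-716.34\:\text{Hz}$$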
2nd order bandpass, general form
All of the above topologies are 2nd order, as you've got two energy-storage devices in each of them.
All 2nd order transfer functions -- and I mean all of them -- fit a single general structure:
$$\mathcal{H}\left(s\right)=\frac{a_2 s^2 + a_1 s + a_0}{b_2 s^2 + b_1 s + b_0}$$
That may look a bit daunting to start, especially when you realize that \$s=\sigma + j\,\omega\$. But there are some simplifications to help you out.
All of the above co-efficients are real-valued. They are not complex-valued. This isn't mathematics and when taking some complex analysis course. This is electronics. So you need to put your electronics hat on when reading these things.
You can divide up the above fraction into the following three terms, which can be separately considered:
$$\mathcal{H}\left(s\right)=\frac{a_2 s^2}{b_2 s^2 + b_1 s + b_0}+\frac{a_1 s}{b_2 s^2 + b_1 s + b_0}+\frac{a_0}{b_2 s^2 + b_1 s + b_0}$$
In fact, those three terms are, respectively, a highpass, a bandpass, and a lowpass.
We are discussing bandpass and bandstop filters. So this either means your transfer function will be like the middle term above (bandpass) or else like the sum of the first and last terms above (bandstop.)
And since your question is specifically about the in-series bandbass, I'll ignore the rest.
Derivation of standard form
For a bandpass, we are interested in this:
$$\mathcal{H}\left(s\right)=\frac{a_1 s}{b_2 s^2 + b_1 s + b_0}\tag{1}$$
Before continuing with that, you'll often see people say that \$s=j\,\omega\$. It's really the case that \$s=\sigma+j\,\omega\$. (At least, so far.) Multiplication in the complex domain involves two things (in our feeble human minds, anyway; but in an alien who can think in complex numbers as a single concept it's only one thing): scaling and rotation. The \$\sigma\$ value is actually part of a factor: \$e^{\sigma\,t}\$. When \$\sigma=0\$ then the factor stays \$1\$ for all time and everyone is happy. If \$\sigma\lt 0\$ then the factor shrinks (scales downward) with time and this means the circuit might hiccup or burble a bit, but given enough time it will settle down. And again, everyone can be happy. But if \$\sigma\gt 0\$ then we have trouble. Stuff goes to heck, given enough time. And that's not a happy circumstance. This is one of the reasons that folks like the left-hand complex plane and feel less comfortable when things are playing out over in the right-hand complex plane. So, to limit ourselves to just looking at the frequency behavior, we'll often just set \$\sigma=0\$ because we aren't interested in the scaling aspects of the problem, only the frequency aspects when assuming no scaling is going on.
If you substitute \$s=j\,\omega\$ into the above equation, then you have:
$$\mathcal{H}\left(j\,\omega\right)=\frac{j\,a_1 \,\omega}{\left(b_0-b_2 \omega^2\right) + j\,b_1 \,\omega}\tag{2}$$
Just think of the denominator now as having two sides of a right triangle (the imaginary part is orthogonal to the real part) and that the hypotenuse is the magnitude. (The magnitude of the numerator is all in one direction and is just the factor after \$j\$.)
As \$\omega\to 0\$, you just have \$\frac{0}{b_0}=0\$. And as \$\omega\to\infty\$, \$b_0\$ isn't going to matter and you just have \$\frac{j\,a_1}{-b_2 \omega + j\,b_1 }\$ and the hypotenuse of the denominator will be entirely determined by \$\omega\to\infty\$ and again the result is \$\frac{j\,a_1}{-b_2 \omega + j\,b_1 }\to 0\$. So this is certainly a bandpass.
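Written out, that right-triangle picture gives the magnitude we will need shortly (for \$\omega\ge 0\$ and positive coefficients):

$$\left|\mathcal{H}\left(j\,\omega\right)\right|=\frac{a_1\,\omega}{\sqrt{\left(b_0-b_2\,\omega^2\right)^2+\left(b_1\,\omega\right)^2}}$$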
In between, there is a place where the real part in the denominator goes to zero. As that web page mentioned, this is when \$b_0=b_2 \omega^2\$. That's when \$\omega=\sqrt{\frac{b_0}{b_2}}\$. It doesn't matter what \$b_0\$ is. It doesn't matter what \$b_2\$ is. Whatever they are, that's going to be a special value of \$\omega\$. We call this special value, \$\omega_{_0}\$. It is super-important.
What happens when \$\omega=\omega_{_0}\$? Well, we've already said that \$\omega_{_0}\$ causes the real part of the denominator of \$\mathcal{H}\left(j\,\omega\right)\$ to go to zero. So then the equation becomes: \$\frac{j\,a_1 \,\omega_{_0}}{j\,b_1 \,\omega_{_0}}=\frac{a_1}{b_1}\$. This is now also a special value. We call it the gain and label it \$A\$ (or \$K\$ or \$h\$ [in Sallen & Key's paper] or pretty much anything anyone is feeling like using, that day.)
So we've uncovered two interesting values related to bandpass filters: \$A=\frac{a_1}{b_1}\$ and \$\omega_{_0}=\sqrt{\frac{b_0}{b_2}}\$. (Do keep in mind that we set \$\sigma=0\$ to find them. But they are still very special.)
Now, look back at the denominator in equation (1) above.
We could just extract \$b_2\$ out in order to get \$b_2\left[s^2+\frac{b_1}{b_2}\,s+\frac{b_0}{b_2}\right]\$. But, hmm, there's that interesting fraction now: \$\frac{b_0}{b_2}\$. We know that is the same thing as \$\omega_{_0}^2\$. So let's plug that back in and get \$b_2\left[s^2+\frac{b_1}{b_2}\,s+\omega_{_0}^2\right]\$. That seems like it could go somewhere. But we aren't there, yet.
At this point, it's time to consider factoring that quadratic equation. That might help us. There is a very special set of values that would be really nice when solving a quadratic equation and this is when \$b_1^2=4\,b_2\,b_0\$. (Obviously, because of \$\frac{-b_1\pm\sqrt{b_1^2-4\,b_2\,b_0}}{2\,b_2}\$.) This suggests \$b_1=2\sqrt{b_2\,b_0}\$, in this special circumstance. But what about other circumstances?
Well, let's create a new constant, \$\zeta\$, and use it like this: \$b_1=2\zeta\,\sqrt{b_2\,b_0}\$. This means that even if the two sides aren't equal, then there's some kind of factor, \$\zeta\$, that we can plug in to make them equal. So, solving we find \$\zeta=\frac{b_1}{2\sqrt{b_2\,b_0}}\$. And we know for a fact that when \$\zeta=1\$ then it is also the case that \$b_1^2=4\,b_2\,b_0\$ and therefore that the solution is very simple because the quadratic equation's square-root term now goes to zero and there's only one resulting value from it. So, in some fashion, we know that \$\zeta=1\$ is also quite special.
But let's think just a little bit more about when \$\zeta\ne 1\$. When \$\zeta\lt 1\$ then this means we had to reduce the term \$2\sqrt{b_2\,b_0}\$ in order to make things equal. Clearly, this means that \$b_1\lt 2\sqrt{b_2\,b_0}\$. And that means that the square-root will be imaginary and non-zero. Okay. That's interesting. And when \$\zeta\gt 1\$ then this means we had to increase the term \$2\sqrt{b_2\,b_0}\$ in order to make things equal. Clearly, this then means that \$b_1\gt 2\sqrt{b_2\,b_0}\$. And that means that the square-root will be real and non-zero. So, \$\zeta\$ is a very special value that informs us immediately something quite interesting about the quadratic solution, too!
So we've uncovered three interesting values related to bandpass filters: \$A=\frac{a_1}{b_1}\$ and \$\omega_{_0}=\sqrt{\frac{b_0}{b_2}}\$ and \$\zeta=\frac{b_1}{2\sqrt{b_2\,b_0}}\$.
Recall that \$b_1=2\zeta\,\sqrt{b_2\,b_0}\$? Follow along:
$$\begin{align*} \mathcal{H}\left(s\right)&=\frac{a_1 s}{b_2 s^2 + b_1 s + b_0} \\\\ &=\frac{a_1 s}{b_2\left( s^2 + \frac{b_1}{b_2} s + \frac{b_0}{b_2}\right)} \\\\ &=A\frac{\frac{b_1}{b_2} s}{ s^2 + \frac{b_1}{b_2} s + \frac{b_0}{b_2}} \\\\ &=A\frac{ \frac{2\zeta\, \sqrt{b_2\,b_0} }{ b_2 } s} { s^2 + \frac{2\zeta\,\sqrt{b_2\,b_0}}{b_2} s + \frac{b_0}{b_2}} \\\\ &=A\frac{ 2\zeta\,\sqrt{\frac{b_0}{b_2}} s} { s^2 + 2\zeta\,\sqrt{\frac{b_0}{b_2}} s + \frac{b_0}{b_2}} \\\\ \mathcal{H}\left(s\right)&= A\frac{ 2\zeta\,\omega_{_0} s} { s^2 + 2\zeta\,\omega_{_0} s + \omega_{_0}^2}\tag{3} \end{align*}$$
This is an important result. We've created some special values that have meaning to us and now have found a way of combining both the numerator's and denominator's coefficients in a way that completely replaces them with these new, special values. Those old constants are totally gone. And instead of staring at seemingly random, meaningless coefficients that may mysteriously relate to each other in obscure ways, we've replaced all that with new, very meaningful values which help us interpret the equation. The work was worth every moment!
Keep in mind that all three of these values are built entirely from the earlier coefficients. The expressions may seem a bit odd or arbitrary. But as you can see from the above work, they are anything but arbitrary. They are quite purposeful and meaningful and they show you how these earlier coefficients were actually related to each other. This is quite important to recognize and appreciate!
So: \$A\$ is the gain, \$\omega_{_0}\$ is when the real part of the denominator goes to zero, leaving only an imaginary part, and \$\zeta\$ is exactly \$1\$ when the quadratic solution only has one resulting magnitude and is critically damped.
The reason I go through this is to explain why equation (3) above is so important. It's called a standard form for the 2nd order bandpass transfer function for a reason.
But there is another standard form, which I may as well now show:
$$\mathcal{H}\left(s\right)=A\frac{ 2\zeta \left(\frac{s}{\omega_{_0}}\right)} { \left(\frac{s}{\omega_{_0}}\right)^2 + 2\zeta \left(\frac{s}{\omega_{_0}}\right) + 1}\tag{4}$$
You will encounter both of these.
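As a quick cross-check against schematic #2 above (my own working; it should agree with the page linked in the question), dividing the output taken across \$R\$ by the total series impedance and tidying up gives:

$$\mathcal{H}\left(s\right)=\frac{R}{R+s\,L+\frac1{s\,C}}=\frac{\frac{R}{L}\,s}{s^2+\frac{R}{L}\,s+\frac1{L\,C}}\quad\Rightarrow\quad A=1,\quad\omega_{_0}=\frac1{\sqrt{L\,C}},\quad\zeta=\frac{R}{2}\sqrt{\frac{C}{L}}$$

With the part values used earlier this gives \$\zeta=5\$ and \$Q=0.1\$, which is exactly why those values were chosen.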
The meaning of a cutoff frequency for a 2nd order bandpass, like this, is when it is at half-power. So long as the filter has \$\zeta>1\$, it's over-damped and will have two distinct shoulders as shown in the LTspice picture, earlier. These are the two cutoff frequencies.
Half-power means \$\frac{v_\text{out}}{v_\text{in}}=\frac1{\sqrt{2}}\$. (I assume you already know why.) So in this case to find the cutoff frequencies we need a transfer function and we need to set its magnitude equal to \$\frac1{\sqrt{2}}\$ and then solve it for \$\omega\$. (The magnitude of a complex-valued transfer function is found by multiplying it by its complex conjugate and then taking the square-root.)
While I want you to try to put things into standard form, as above, it's more important for this question that you focus on creating the transfer function for schematic #2 and then develop an expression for its magnitude value. Then set this equal to \$\frac1{\sqrt{2}}\$ and see if you can solve it for \$\omega\$.
The transfer function will be \$\frac{R}{R+Z_L+Z_C}\$, of course. And your web page does correctly help you out there. So I won't duplicate it. But you need to multiply it by its complex conjugate. You don't need to take the square root of that, as you can just square \$\frac1{\sqrt{2}}\$ to get \$\frac12\$ and use that as the assigned value. Then solve the resulting equation for \$\omega\$. See if you can arrive at the same expression on that web site.
You know the procedure. Follow it through.
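And if you want to check your algebra at the end, here is a short SymPy sketch of that exact procedure (my own sketch, not from the website): build \$\mathcal{H}\$, form \$\mathcal{H}\cdot\overline{\mathcal{H}}\$, set it to \$\tfrac12\$ and solve for \$\omega\$.

import sympy as sp

R, L, C, w = sp.symbols('R L C omega', positive=True)

# In-series bandpass (#2): output taken across R.
H = R / (R + sp.I * w * L + 1 / (sp.I * w * C))

# |H|^2 = H * conj(H); the symbols are declared real and positive.
mag2 = sp.simplify(H * sp.conjugate(H))

# Half-power condition |H|^2 = 1/2, solved for omega.
cutoffs = sp.solve(sp.Eq(mag2, sp.Rational(1, 2)), w)
print([sp.simplify(c) for c in cutoffs])
# The positive roots are mathematically equivalent to:
#   -R/(2*L) + sqrt(R**2/(4*L**2) + 1/(L*C))
#    R/(2*L) + sqrt(R**2/(4*L**2) + 1/(L*C))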
Summary about standard form
The Laplace transform was invented by a mathematician, Pierre-Simon Laplace, for mathematicians and then systematized by Oliver Heaviside to simplify the solutions for differential equations describing physical processes. It's a fantastically general and powerful tool and it is used in myriad specializations. There are few natural processes for which it doesn't apply, since almost everything in nature is about locality where the change in things depends in some way upon the amount of things. A subset of this powerful tool applies well in electronics, too.
Here's a summary of the genius behind the standard form for 2nd order transfer functions (at least as applied to bandpass.) These are uncovered in the following order, as we proceed:
When reducing \$\mathcal{H}\left(s\right)\$ to \$\mathcal{H}\left(j\,\omega\right)\$, so as to exclude scaling behaviors and focus upon frequency behaviors, we discovered that there was a special value of \$\omega\$ that caused the real part of the denominator to go to zero, leaving only the imaginary part. We call this special value, \$\omega_{_0}\$ (though of course it is also called many other things.)
When reducing \$\mathcal{H}\left(s\right)\$ to \$\mathcal{H}\left(j\,\omega\right)\$ and when setting \$\omega=\omega_{_0}\$, we then also discovered that there is a special value, \$A\$, which is the gain of the transfer function. (There is a more general way of approaching this result, staying in terms of \$s\$. But the bandpass case allows a simpler approach, so I took it.)
In returning to the general form, \$\mathcal{H}\left(s\right)\$, and keeping in mind that our coefficients are real and not complex and then applying the standard quadratic solution to it, we uncovered another very useful idea where the quadratic solution's square-root term goes to zero. This is kind of like a dividing line that separates diverging behaviors as things move away from the critically-damped case in either of two directions (over-damped vs under-damped.) We call the measure of the diversion away from the critically damped case, \$\zeta\$. \$\zeta=1\$ for the exact critically-damped point, \$\zeta<1\$ for under-damped cases where the quadratic solution involves complex-valued roots (there will be a damped frequency, too, in this case), and \$\zeta>1\$ for over-damped cases where the quadratic solution involves real-valued roots and where the existence of low and high cutoffs now emerge (for bandpass.)
We were able to take the general form of:
$$\mathcal{H}\left(s\right)=\frac{a_1 s}{b_2 s^2 + b_1 s + b_0}$$
Where the coefficients were arbitrary and difficult to interpret and replace four coefficients with just three uniquely interesting and useful parameters, \$A=\frac{a_1}{b_1}\$, \$\omega_{_0}=\sqrt{\frac{b_0}{b_2}}\$, and \$\zeta=\frac{b_1}{2\sqrt{b_2\,b_0}}\$, to produce a far more meaningful result:
$$\begin{align*} \mathcal{H}\left(s\right)&= A\cdot\frac{ 2\zeta\,\omega_{_0} s} { s^2 + 2\zeta\,\omega_{_0} s + \omega_{_0}^2}\\\\ &=A\cdot\frac{ 2\zeta \left(\frac{s}{\omega_{_0}}\right)} { \left(\frac{s}{\omega_{_0}}\right)^2 + 2\zeta \left(\frac{s}{\omega_{_0}}\right) + 1} \end{align*}$$
This is no mean feat. It's brilliance.
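As a quick sanity check (my own sketch, not part of the original answer), SymPy confirms that those three parameter substitutions really do reproduce the four-coefficient form:

import sympy as sp

s, a1, b0, b1, b2 = sp.symbols('s a_1 b_0 b_1 b_2', positive=True)

A    = a1 / b1                       # gain
w0   = sp.sqrt(b0 / b2)              # resonant frequency
zeta = b1 / (2 * sp.sqrt(b2 * b0))   # damping factor

H_general  = a1 * s / (b2 * s**2 + b1 * s + b0)
H_standard = A * (2 * zeta * w0 * s) / (s**2 + 2 * zeta * w0 * s + w0**2)

print(sp.simplify(H_general - H_standard))   # prints 0, so the two forms agree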
There's a final, more philosophical note I want to leave with you. In paraphrase, "I hear and I forget, I see and I remember, I do and I understand".
When someone hands you an answer on a platter, for example the concepts of mass, volume, and density (the ratio of mass to volume), you don't have any ideas about what led to it. You are just told, "do this and you will get useful answers." So you do that and you do get useful answers. But you've no concept of the struggles that humans went through to uncover the concept of mass as distinct from gravitational attraction or the large number of project failures when people didn't understand the concept of density. You don't know that for some time people thought sharp things sink because they cut water and that blunt things float because they don't. Etc. You've no clue, no deep understanding. All that has happened is that you were handed a tool. And you know how to use it. But it goes no deeper than that (because you were never asked to discover that tool for yourself.) At least, at first.
When you are handed the idea of a resonant frequency, \$\omega_{_0}\$, or a damping factor, \$\zeta\$, or a quality factor, \$Q\$, handed more on a silver platter than earned by doing the hard things to get there, then you learn it, but you do not understand it.
How did we arrive at this wonderful place we are being taught?
I hope just a little of that how has leaked out in the above. We do stand on the shoulders of giants. And we should give some conscious nod of appreciation for what we've been given in helping us understand the world around us just a little better.
\$\begingroup\$ This could have been an introductory chapter of a book, if it isn't already (I wouldn't be surprised to see you among the authors). +1 \$\endgroup\$
– a concerned citizen
\$\begingroup\$ @Carl What about Barrie Gilbert? He invented a great deal for Tektronix. And he didn't have a degree of any kind. He felt sheepish about that and wanted one. He eventually was awarded an honorary degree, which pleased him greatly. But the point is the same. Many companies hire for capability. \$\endgroup\$
\$\begingroup\$ @Carl There a few things that guided me. (1) Work on what you love; and, (2) Only on things you would do for free, anyway (if you don't have another reason other than money, then you will never be truly good at it); and, (3) Only work with people you care about and who care about you and who are willing to share their views of the world around them (you don't have the time in your life to waste on people you don't like or who won't have your back when it matters); and, (4) Never stop self-education -- it should be such a disease that you are reading books on the steering wheel of your car! \$\endgroup\$
\$\begingroup\$ @Carl You keep and nurture those who are able to help you and where you can also help them in other ways and where there is a good possibility that the relationship will have still greater benefits in the future for both. (If the relationship isn't mutually beneficial, it won't survive. So it has to work for both. Not just one or the other.) What I so dislike about US "capitalism" is that it has developed more towards short-term, in-and-out, plug-and-play, everyone-is-a-replaceable-cog business model. It's antithetical to what's healthy. Avoid short-term like the plague. \$\endgroup\$
\$\begingroup\$ @MituRaj Yes. One difference in 1974 is the ICs were not cheap compared to incomes. My dad died when I was 7 and my childhood was public school and having to work berry and vegetable fields when work was available, to survive. I lived in a house without walls for years. So my income was very little. No Youtube. Also, no protoboard stuff. I wire-wrapped, instead. And finally, I had zero education at 19. I had to read TI's databook and just think for myself and experiment. I think the pain of these struggles helped deepen things. Stuff handed on a platter doesn't stay as well. \$\endgroup\$
Tensor in signal processing
I saw a lot of use of tensor in signal processing. What is the intuition behind it? Is it simply a common representation for audio (1D), image (2D) and video (3D) signals?
This has little to do with intuition. Tensors are rigorously defined mathematical objects, and in general simple arrays don't qualify as tensors. Specifically, signals are not tensors.
In the language of mathematics, and specifically in the field of differential geometry, a tensor is a linear function with several arguments. It is therefore also called a multi-linear form. It is used to describe properties of manifolds at a single point.
A point on a (smooth) manifold comes with a tangent space that contains all possible "directions" at that point, where a direction is really just a tangent vector at that point. This tangent space at a single point is a vector space and it has the same dimension as the manifold.
The tangent space has a natural so called dual space, which contains the linear functions that map tangent space vectors to real numbers. This is the cotangent space, and it's also a vector space of the dimension of the manifold.
A tensor is now (roughly) a multi-linear function that maps some number of tangent vectors and cotangent vectors to the real numbers.
Tensors typically don't come as singles but in so called bundles, which are the disjoint union of the tensors of all points of the manifold. For example if you have a vector at every point of the manifold then the whole structure is called a vector bundle. Similarly, tensor bundles associate a tensor with every point on the manifold, usually in a smooth way.
In signal processing you can encounter tensors, but usually they are called differently. For example in video processing the image is your manifold (or rather, a function on a 2-dimensional manifold giving the brightness or color for each point) and the velocity field that describes the local motion is a rank 1 tensor field (or bundle) on that manifold.
Tensors also show up a lot in volume data processing, where tensors can describe local properties of the volume data. For example if you have a doppler ultrasonic tomography image, then the velocity data in each voxel is a tensor field as would be mechanical stress in tomographic material analysis.
For 1-dimensional signals, tensors often come in the form of derivative operators. For example if you have a signal $s(t)$, then the operator $\frac{\partial}{\partial t}$ is a tensor field of rank 1, or just a vector field. More generally, if you multiply that operator with a smooth function $g(t)$, then all possible vector fields on the manifold the signal lives on are of the form $g(t)\frac{\partial}{\partial t}$.
To sum up, tensors are a description of differential properties of manifolds. If you want to understand them, you'll have to study differential geometry.
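To make the covector idea a bit more concrete, here is a rough NumPy illustration (my own example, not from the answer above): treat a grayscale image as a scalar function F on a 2-D grid and compute its differential dF, a rank-1 field with one component per direction at every pixel.

import numpy as np

rng = np.random.default_rng(0)
F = rng.random((64, 64))                  # the "image": a scalar function on the grid

dF_dy, dF_dx = np.gradient(F)             # partial derivatives at every pixel
dF = np.stack([dF_dx, dF_dy], axis=-1)    # covector field dF, shape (64, 64, 2)

# Feeding dF a direction v at a pixel returns a plain number, the directional derivative.
v = np.array([1.0, 0.0])
print(dF.shape, dF[10, 20] @ v)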
$\begingroup$ good answer, can you elaborate on the mapping of the tangent space vectors to real numbers? what does this real valued signal represent? $\endgroup$ – geometrikal Dec 20 '13 at 4:02
$\begingroup$ @geometrikal: You can imagine this map as a generalization of the directional derivative of a function on the manifold and the construction of the gradient of that function. The directional derivative of a scalar function F at a point p will result in a real (or maybe complex) number that depends on the direction represented by a vector field at that same point p. Instead of evaluating the derivative of the function every time you change that direction v, you can also introduce the gradient of F at p and simply evaluate the scalar product <grad F,v>. $\endgroup$ – Jazzmaniac Dec 20 '13 at 11:18
$\begingroup$ Modern differential geometry calls the linear function <grad F, * > a covector that lives in the cotangent space at p. This abstraction allows you to remove the need for having a scalar product defined, because whatever linear product is used is already contained in the cotangent vector. These covectors are also called "differentials" and usually written dF. $\endgroup$ – Jazzmaniac Dec 20 '13 at 11:20
I think the question is not related to tensor fields. There is a growing area in signal processing, namely tensor factorizations. Here a tensor is simply a multiway array, a generalization of the matrix. So, yes, a video is a tensor with dimensions $m\times n \times T$, where $T$ is the frame count.
Tensor-related methods are not restricted to video processing, because in many data analysis problems the data comes in tensor form; for instance, think of a tensor that encodes the relations between $user \times user \times product$.
You can see an audio processing application of the tensor factorization from here.
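As a concrete, purely illustrative example of the multiway-array view (my own sketch, with arbitrary shapes), the NumPy snippet below builds a toy $m\times n \times T$ "video" and computes the mode-3 unfolding that most factorization algorithms start from.

import numpy as np

m, n, T = 4, 5, 6
video = np.arange(m * n * T, dtype=float).reshape(m, n, T)   # m x n x T multiway array

# Mode-3 unfolding: each of the T frames becomes one row of a T x (m*n) matrix.
unfolding = np.moveaxis(video, 2, 0).reshape(T, m * n)
print(video.shape, unfolding.shape)   # (4, 5, 6) (6, 20)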
$\begingroup$ You're right that tensor product and tensor factorizations are important in signal processing. But contrary to common belief the objects they work on or result in are not automatically tensors. In fact, the typical tensor product works on ordinary vector spaces. So your conclusion that a video is a tensor is not really correct. At least from a mathematical stand point. $\endgroup$ – Jazzmaniac Dec 20 '13 at 11:11
$\begingroup$ Yes, true, in this context tensor is just a name, from a rigorous point of view a video is not a tensor of course. But in this area, terminology goes like this in practice. $\endgroup$ – Deniz Dec 20 '13 at 15:21
Probability Distributions
Find the last 3 terms in the expansion of $(3x + x^2)^5$. Simplify your answer.
In a box of smarties there are eight different colours which normally occur in equal proportions. Rachael is given 24 smarties, and orange ones are her favourite. Assume they come from a very large... more
        A    B    C   Total
Male     7    8   15   30
Female   5   20   14   39
Total   12   28   29   69
Giving a test to a group of students, the grades and gender are summarized above. What percent of the students... more
        A    B    C   Total
Male    10   20    2   32
Female  16    3    5   24
Total   26   23    7   56
A test was given to a group of students. The grades and gender are summarized above. If one student is chosen at... more
        A    B    C   Total
Male     7    6    2   15
Female  11   17   19   47
Total   18   23   21   62
Giving a test to a group of students, the grades and gender are summarized above. If one student was chosen at... more
What does it mean that the probability density function is proportional to a function?
I'm studying for SOA/CAS Exam P and I have a problem that says that $X$ is a continuous and positive random variable whose probability density function is proportional to: $$\frac{1}{(1+x)^5}$$... more
What is P(4 ≤ X ≤ 6)? Explain. If P(X ≤ a) = 0.2, what is a? Explain.
~Questions 1 & 2~ Here is a probability distribution, given as an image in the original post. Look at the image to answer the question, please. Thanks in advance!
GraphSLAM: why are constraints imposed twice in the information matrix?
I was watching Sebastian Thrun's video course on AI for robotics (freely available on udacity.com). In his final chapter on GraphSLAM, he illustrates how to setup the system of equations for the mean path locations $x_i$ and landmark locations $L_j$.
To set up the matrix system, he imposes each robot motion and landmark measurement constraint twice. For example, if a robot motion command is to move from x1 by 5 units to the right (reaching x2), I understand this constraint as
$$-x_2+x_1= -5$$
However, he also imposes the negative of this equation $$x_2-x_1=5$$ as a constraint and superimposes it onto a different equation, and I'm not sure why. In his video course, he briefly mentions that the matrix we're assembling is known as the "information matrix", but I have no idea why the information matrix is assembled in this specific way.
So, I tried to read his book Probabilistic Robotics, and all I can gather is that these equations come from obtaining the minimizer of the negative log posterior probability incorporating the motion commands, measurements, and map correspondences, which results in a quadratic function of the unknown variables $L_j$ and $x_i$. Since it is quadratic (and the motion / measurement models are also linear), the minimum is obviously obtained by solving a linear system of equations.
But why is each of the constraints imposed twice, once as a positive quantity and again as the negative of the same equation? It's not immediately obvious to me from the form of the negative log posterior probability (i.e. the quadratic function) that the constraints must be imposed twice. Why is the "information matrix" assembled this way? Does it also hold true when the motion and measurement models are nonlinear?
$\begingroup$ Ummm, isn't a quadratic solved with .... + or - the square root of ... - just a wild guess. $\endgroup$ – Spiked3 Feb 27 '15 at 22:57
$\begingroup$ The answer to your question is explained in sections 5.3 and 5.4 of this paper. You can see that it does hold true for linearized systems. I suspect he didn't include it in the video because it is beyond the scope of the course to do the derivation. I'd like to note that GraphSLAM is just one approach to graph-based SLAM algorithms. The "generic" approach is explained in "A Tutorial on Graph-Based SLAM" by Grisetti et al.. $\endgroup$ – kamek Mar 1 '15 at 2:52
$\begingroup$ @kamek: thanks for the link. Judging by the linked paper, it seems that the video also makes an unstated assumption that the covariances are all 1 in magnitude... Otherwise the terms in the equations would be scaled... Does this make sense to you as well? $\endgroup$ – Paul Mar 1 '15 at 6:08
$\begingroup$ @Paul Yes, the terms added to the information matrix are scaled by the covariance of the measurement. The reason why the constraints are "added twice" is because you can think of the information matrix as being a table where each row and each column is an entry in the state. Obviously when there is a measurement that links two entries (e.g., a motion measurement between pose a and pose b), it is added "twice" to the information matrix, once at (pose a, pose b), and another at (pose b, pose a). Hope this helps. $\endgroup$ – kamek Mar 1 '15 at 22:07
$\begingroup$ @kamek: That sounds plausible, but i'd really like to see how it comes to this from minimizing the quadratic. Filling in the gap between the obtaining the minimizer of the quadratic function and assembly of the information matrix is what I'm really interested in. $\endgroup$ – Paul Mar 2 '15 at 22:17
After painstakingly trying to find someone on the internet with more experience on this subject to help me out (to no avail), I finally gave up and decided to take matters into my own hands and figure it out myself! As it turns out, the constraints are imposed twice as a direct result of applying the chain rule for derivatives when obtaining the gradient of the negative log posterior belief function equal to zero (which is equivalent to finding the maximum of the belief).
Unfortunately, there's no easy way to demonstrate this other than going through the math one step at a time.
Problem Setup
To help explain, let me set up an example to work with. For simplicity, let's assume that the robot moves in only one direction (in this case, the x-direction). In one dimension, the covariance matrices for motion and sensor data are simply the variances $\sigma^2_{motion}$ and $\sigma^2_{sensor}$. Again, for simplicity, let's assume that $\sigma^2_{motion}=\sigma^2_{sensor}=1$.
Now, let's assume that the robot starts at the point $x_0=0$ and then executes two motion commands in this following order:
Move forward by 10 units ($x_1 = x_0 + 10$)
Move forward by 14 units ($x_2 = x_1 + 14$)
Let's also assume that the robot world only contains one landmark $L_0$ which lies somewhere in the 1D world of the robot's motion. Suppose that the robot senses the following distances to the landmark from each of the three positions $x_0, x_1, x_2$:
At $x_0$: The robot sensed Landmark $L_0$ at a distance of 9 units ($L_0-x_0=9$)
At $x_1$: The robot sensed Landmark $L_0$ at a distance of 8 units ($L_0-x_1=8$)
At $x_2$: The robot sensed Landmark $L_0$ at a distance of 21 units ($L_0-x_2=21$)
(These numbers may look a little strange, but just take them as a given for this exercise).
Belief Function
So, each of the relative motion and measurement constraints contributes a Gaussian function to the "posterior belief" function. So, with the information assumed above, we can write the belief function as the product of gaussians as follows:
$$Belief = C e^{-\frac{(x_0-0)^2}{2\sigma^2}}e^{-\frac{(x_1-x_0-10)^2}{2\sigma^2}}e^{-\frac{(x_2-x_1-14)^2}{2\sigma^2}} * e^{-\frac{(L_0-x_0-9)^2}{2\sigma^2}}e^{-\frac{(L_0-x_1-8)^2}{2\sigma^2}}e^{-\frac{(L_0-x_2-21)^2}{2\sigma^2}}$$
Note that $C$ is a constant, but we won't really need to know the exact value of $C$. Recall that we assume all the variances $\sigma^2=1$, so we obtain
$$Belief = C e^{-\frac{(x_0-0)^2}{2}}e^{-\frac{(x_1-x_0-10)^2}{2}}e^{-\frac{(x_2-x_1-14)^2}{2}} * e^{-\frac{(L_0-x_0-9)^2}{2}}e^{-\frac{(L_0-x_1-8)^2}{2}}e^{-\frac{(L_0-x_2-21)^2}{2}}$$
Negative Log Posterior
Our main goal is to find the values of $x_0,x_1,x_2,L_0$ that maximize this function. However, we can make some transformations to the "belief" function that enable us to find the maximum very easily. First, finding the maximum of the $Belief$ is equivalent to finding the maximum of $log(Belief)$, which allows us to exploit the properties of logarithms which gives us:
$$log(Belief)= log(C) - \frac{1}{2}(x_0-0)^2-\frac{1}{2}(x_1-x_0-10)^2-\frac{1}{2}(x_2-x_1-14)^2 -\frac{1}{2}(L_0-x_0-9)^2-\frac{1}{2}(L_0-x_1-8)^2-\frac{1}{2}(L_0-x_2-21)^2$$
Also, finding the maximum of a function $f(x)$ is equivalent to finding the minimum of the function $-f(x)$. So we can restate this problem as finding the minimum of
$$F\equiv-log(Belief)= -log(C) + \frac{1}{2}(x_0-0)^2+\frac{1}{2}(x_1-x_0-10)^2+\frac{1}{2}(x_2-x_1-14)^2 +\frac{1}{2}(L_0-x_0-9)^2+\frac{1}{2}(L_0-x_1-8)^2+\frac{1}{2}(L_0-x_2-21)^2$$
To find the minimum, we take the partial derivative of the $F$ function with respect to each of the variables: $x_0, x_1, x_2,$ and $L_0$:
$F_{x_0}= (x_0 - 0) - (x_1 - x_0 - 10) - (L_0-x_0-9) = 0$
$F_{x_1}= (x_1 - x_0 - 10) - (x_2-x_1-14)- (L_0-x_1-8) = 0$
$F_{x_2}= (x_2 - x_1 - 14) - (L_0-x_2-21) = 0$
$F_{L_0}= (L_0-x_0-9) + (L_0-x_1-8)+ (L_0-x_2-21) = 0$
Notice that the first and second equations impose the first relative motion constraint $x_1=x_0+10$ twice: the first equation with a negative sign as a result of the chain rule for derivatives, and the second equation with a positive sign (also as a result of the chain rule). Similarly, the second and third equations contain the second relative motion constraint, with opposite signs as a result of applying the chain rule for derivatives. A similar argument holds for the measurement constraints in their corresponding equations. There's no inherent explanation for why it MUST necessarily work out this way... It just happens to have this structure in the end after working out the math. You may notice that only the initial position constraint $(x_0-0)$ is imposed only once, because its quadratic term $\frac{1}{2}(x_0-0)^2$ features only a single variable inside the parentheses, so it is impossible for this term to appear in the gradient of $F$ with respect to any variable other than $x_0$.
It was not apparent to me just by looking at the structure of the belief function that the gradient takes on this form without working through the details explicitly. Of course, I've made a number of simplifying assumptions along the way, including avoiding the problem of "data association" and assuming linear expressions for the motion constraints and measurement constraints. In the more general version of GraphSLAM, we do not necessarily assume this and the algorithm becomes more complicated. But in the linear case (that is, with linear motion and measurement constraints), the gradients of the negative log posterior belief function leads to imposing the motion and measurement constraints twice (once with a positive sign, once with a negative sign), each of which is also weighted by the corresponding covariance. There appears to be no inherent or more fundamental reason why it must necessarily work out this way based upon higher principles... It's just a bunch of trivial calculus that happens to work out this structured way.
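To see the structure numerically, here is a minimal NumPy sketch of the same 1D example (my own code, using the unit variances assumed above). Each relative constraint touches two state variables, so it contributes to the information matrix and vector twice, once per variable and with opposite signs, exactly as the gradient equations show:

import numpy as np

# State ordering: [x0, x1, x2, L0]
Omega = np.zeros((4, 4))
xi = np.zeros(4)

def add_relative_constraint(i, j, d):
    # Constraint state[j] - state[i] = d, with unit variance.
    Omega[i, i] += 1.0;  Omega[j, j] += 1.0
    Omega[i, j] -= 1.0;  Omega[j, i] -= 1.0
    xi[i] -= d;          xi[j] += d

Omega[0, 0] += 1.0                    # anchor: x0 = 0
add_relative_constraint(0, 1, 10.0)   # x1 - x0 = 10
add_relative_constraint(1, 2, 14.0)   # x2 - x1 = 14
add_relative_constraint(0, 3, 9.0)    # L0 - x0 = 9
add_relative_constraint(1, 3, 8.0)    # L0 - x1 = 8
add_relative_constraint(2, 3, 21.0)   # L0 - x2 = 21

print(Omega)                          # same coefficients as the four gradient equations
print(np.linalg.solve(Omega, xi))     # the most likely [x0, x1, x2, L0]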
Binomial Expansion Formula (2023)
Contents: What is Binomial Expansion, and How does It work? · Formula for the Binomial Theorem · Properties of the Binomial Expansion · Binomial Theorem General Term · Mathematical Form of the General Term of Binomial Expansion · Important Terms involved in Binomial Expansion · Binomial Theorem and Pascal's Triangle · Properties of Binomial Theorem · Binomial Expansion Formula Practical Applications · Binomial Expansion Example Problems · Fun Facts · Solved Examples
The Binomial Theorem is a quick way to expand a binomial expression raised to a power. Multiplying out such expressions by hand becomes tedious as the power grows, but binomial expansions and formulas are extremely helpful in this area. The Binomial Theorem and the Binomial Theorem Formula will be discussed in this article. Let's start with a few examples to learn the concept.
What is Binomial Expansion, and How does It work?
The binomial theorem is a mathematical expression that describes the expansion of a binomial's powers. According to this theorem, the polynomial $(x+y)^n$ can be expanded into a sum of terms of the form $a\,x^b y^c$.
The exponents $b$ and $c$ are non-negative integers, and $b + c = n$ is the condition. In addition, depending on $n$ and $b$, each term's coefficient $a$ is a specific positive integer.
For n = 4, consider the following:
(x+y)⁴=x⁴+4x³y+6x²y²+4xy³+y⁴
It is self-evident that multiplying out such expressions by hand would be excruciatingly tedious. Thankfully, there is a formula for this expansion, which we can employ with ease.
Formula for the Binomial Theorem
The formula for the Binomial Theorem is written as follows:
\[(x+y)^n=\sum_{k=0}^{n}\binom{n}{k}x^{n-k}y^k\]
Also, remember that n! is the factorial notation. It reflects the product of all whole numbers between 1 and n in this case.
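As a quick illustration (a sketch of my own, not from the article), the coefficients produced by this formula can be checked in a couple of lines of Python using math.comb for the binomial coefficient:

from math import comb

def binomial_coefficients(n):
    # Coefficients of x^(n-k) * y^k for k = 0..n in the expansion of (x + y)^n.
    return [comb(n, k) for k in range(n + 1)]

print(binomial_coefficients(4))   # [1, 4, 6, 4, 1], matching the n = 4 example above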
The following are some expansions:
(x+y)^1 = x + y
(x+y)^2 = x^2 + 2xy + y^2
(x+y)^3 = x^3 + 3x^2y + 3xy^2 + y^3
... and so on, up to the general case (x+y)^n.
Properties of the Binomial Expansion
In total, there are n+1 terms.
x^n is the initial term, while y^n is the last term.
The exponent of x declines by 1 from term to term as we progress from the first to the last, while the exponent of y grows by one. In addition, the total of the two exponents in each term is n.
We can simply determine the coefficient of the next term by multiplying the coefficient of the current term by the exponent of x in that term and dividing the product by the number of that term.
Binomial Theorem General Term
Binomial expansion is one of the methods used to expand a binomial raised to a power in algebraic expressions. In algebra, a binomial is an algebraic expression with exactly two terms (the prefix 'bi' refers to the number 2). If a binomial expression $(x + y)^n$ is to be expanded, the binomial expansion formula expresses it as a sum of simpler terms of the form $a\,x^b y^c$, in which 'b' and 'c' are non-negative integers; the value of the coefficient 'a' depends completely on the values of 'n' and 'b'. This section gives a deeper understanding of the general term of binomial expansion and how binomial expansion is related to Pascal's triangle.
Mathematical Form of the General Term of Binomial Expansion
Any binomial of the form (a + x) can be expanded when raised to any power, say 'n' using the binomial expansion formula given below.
\[( a + x )^n = a^n + na^{n-1}x + \frac{n(n-1)}{2!}\, a^{n-2} x^2 + \dots + x^n\]
The above stated formula is more favorable when the value of 'x' is much smaller than that of 'a'. This is because, in such cases, the first few terms of the expansions give a better approximation of the expression's value. The expansion always has (n + 1) terms. The general term of binomial expansion can also be written as:
\[(a+x)^n=\sum ^n_{k=0}\frac{n!}{(n-k)!k!}a^{n-k}x^k\]
Note that the factorial is given by
n! = 1 · 2 · 3 ⋯ n
0! = 1
Important Terms involved in Binomial Expansion
The expansion of a binomial raised to some power is given by the binomial theorem. It is most commonly known as Binomial expansion. Various terms used in Binomial expansion include:
General term
Middle term
Independent term
To determine a particular term
Numerically greatest term
Ratio of consecutive terms also known as the coefficients
Binomial Theorem and Pascal's Triangle:
Pascal's triangle is a triangular pattern of numbers formulated by Blaise Pascal. The binomial expansion of terms can be represented using Pascal's triangle. To understand how to do it, let us take an example of a binomial (a + b) which is raised to the power 'n' and let 'n' be any whole number. For assigning the values of 'n' as {0, 1, 2 …..}, the binomial expansions of (a+b)n for different values of 'n' as shown below.
(a + b)^0 = 1
(a + b)^1 = a + b
(a + b)^2 = a^2 + 2ab + b^2
(a + b)^3 = a^3 + 3a^2b + 3ab^2 + b^3
(a + b)^4 = a^4 + 4a^3b + 6a^2b^2 + 4ab^3 + b^4
(a + b)^5 = a^5 + 5a^4b + 10a^3b^2 + 10a^2b^3 + 5ab^4 + b^5
With this kind of representation, the following observations are to be made.
Each expansion has one term more than the chosen value of 'n'.
In each term of the expansion, the sum of the powers is equal to the initial value of 'n' chosen.
The powers of 'a' start at the chosen value of 'n' and decrease to zero across the terms of the expansion, whereas the powers of 'b' start at zero and increase to 'n', which is the maximum.
The coefficients start with 1, increase till half way and decrease by the same amounts to end with one.
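The same pattern can be generated programmatically. The short Python snippet below (my own illustration) prints the first six rows of Pascal's triangle; row n holds the coefficients of (a + b)^n listed above.

from math import comb

for n in range(6):
    # Row n of Pascal's triangle: the coefficients of (a + b)^n.
    print([comb(n, k) for k in range(n + 1)])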
Properties of Binomial Theorem
There are numerous properties of binomial theorems which are useful in Mathematical calculations. The few important properties of binomial coefficients are:
Every binomial expansion has one term more than the number indicated as the power on the binomial.
Exponents of each term in the expansion if added gives the sum equal to the power on the binomial.
The powers of the first term in the binomial decreases by 1 with each successive term in the expansion and the powers on the second term increases by 1.
It is important to note that the coefficients form a symmetrical pattern.
Binomial Expansion Formula Practical Applications
Binomial expansions are used in various mathematical and scientific calculations that are mostly related to various topics including
Kinematic and gravitational time dilation
Electric quadrupole moments
Determining the relativity factor gamma
Are Algebraic Identities Connected with Binomial Expansion?
The answer to this question is a big YES!! A few algebraic identities can be derived or proved with the help of Binomial expansion. The following identities can be proved with the help of binomial theorem.
(x + y)^2 = x^2 + 2xy + y^2
(x - y)^2 = x^2 - 2xy + y^2
(x + y)^3 = x^3 + 3x^2y + 3xy^2 + y^3
(x - y)^3 = x^3 - 3x^2y + 3xy^2 - y^3
Binomial Expansion Example Problems
1. Evaluate (3 + 7)^3 Using the Binomial Theorem.
The binomial expansion formula is given as:
\[(x+y)^n = x^n + nx^{n-1}y + \frac{n(n-1)}{2!}\,x^{n-2}y^2 + \dots + y^n\]
In the given problem,
x = 3 ; y = 7 ; n = 3
(3 + 7)^3 = 3^3 + 3 · 3^2 · 7 + (3 · 2)/2! · 3^1 · 7^2 + 7^3
= 27 + 189 + 441 + 343
(3 + 7)^3 = 1000
The number of terms in a binomial expansion of a binomial expression raised to some power is one more than the power of the binomial expansion.
Isaac Newton is credited with formulating the general binomial expansion formula.
Binomial theorem can also be represented as a never ending equilateral triangle of algebraic expressions called the Pascal's triangle.
1. Find the first four terms of the expansion using the binomial series: \[\sqrt[3]{1+x}\]
First, we will write the expansion formula for \[(1+x)^n\] as follows:
\[(1+x)^n=1+nx+\frac{n(n-1)}{2!}x^2+\frac{n(n-1)(n-2)}{3!}x^3+.......\]
Substitute the value n = 1/3 and keep the first four terms:
\[(1+x)^\frac{1}{3}=1+\frac{1}{3}x+\frac{\frac{1}{3}(\frac{1}{3}-1)}{2!}x^2+\frac{\frac{1}{3}(\frac{1}{3}-1)(\frac{1}{3}-2)}{3!}x^3\]
Thus expansion is:
\[(1+x)^\frac{1}{3}=1+\frac{1}{3}x-\frac{x^2}{9}+\frac{5x^3}{81}\]
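A quick numerical sanity check of this four-term series (my own sketch): for a small x the truncated series and the exact cube root agree closely.

x = 0.1
series = 1 + x/3 - x**2/9 + 5*x**3/81
exact = (1 + x) ** (1/3)
print(series, exact)   # both print approximately 1.03228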
Kinematics in One Dimension Questions
1) A car accelerates uniformly from rest to a velocity of $101\,{\rm km/h}$ east in $8.0\,{\rm s}$.
A car accelerates uniformly from rest to a velocity of $101\,{\rm km/h}$ east in $8.0\,{\rm s}$. What is the magnitude of its acceleration?
The given data are
\begin{eqnarray*}
\text{initial velocity} , v_0 &=& 0 \\
\text{final velocity} , v &=& 101\,{\rm Km/h} \\
&=& 101\,\frac{1000\,{\rm m}}{3600\,{\rm s}} \\
&=& 28.06\,{\rm m/s} \\
\text{time interval , t} &=& 8\,{\rm s} \\
\text{acceleration , } &=& ?
\end{eqnarray*}
The relevant kinematic equation which relates those together is $v=v_0+a\,t$. So
v &=& v_0+a\,t \\
28.06 &=& 0+a\,(8) \\
\Rightarrow a &=& \frac{28.06-0}{8} \\
&=& 3.51\,{\rm m/s^{2}}
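If you want to verify the arithmetic, here is a short Python check (my own sketch; the numbers come from the problem statement):

v0 = 0.0
v = 101 * 1000 / 3600        # 101 km/h converted to m/s, about 28.06
t = 8.0
a = (v - v0) / t
print(round(v, 2), round(a, 2))   # 28.06  3.51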
2) A car slows down uniformly from $30.0\,{\rm m/s}$ to rest in $7.20\,{\rm s}$.
A car slows down uniformly from $30.0\,{\rm m/s}$ to rest in $7.20\,{\rm s}$. How far did it travel while decelerating?
First of all, collect the given data in the interval of accelerating
\text{initial velocity} &=& 30\,{\rm m/s} \\
\text{final velocity} &=& 0 \\
\text{overall time} &=& 7.20\,{\rm s} \\
\text{distance} &=& ?
One can solve this problem in two ways, direct and indirect. In one way, first find the acceleration of the car and then use other kinematic equations to determine the desired quantity. So, the acceleration is obtained as
0 &=& 30+a\,(7.2)\\
\Rightarrow a &\cong& -4.17\,{\rm m/s^{2}}
The minus sign indicates that the acceleration is in the negative $x$-direction.
Now substitute the acceleration in one of the kinematic equations which relate those given data and have a missing value of distance, therefore
v^{2}-v_0^{2} &=& 2a(x-x_0)\\
0^{2}-(30)^2 &=& 2(-4.17)(x-0) \\
\Rightarrow x &\cong& 108\,{\rm m}
where one can choose the initial position, $x_0$ as $0$.
In this kind of problem, since the acceleration is constant, we can use a special equation which is $a$-free, as follows
x-x_0 &=& \frac{v+v_0}{2}\,t \\
x -0 &=& \frac{0+30}{2}\,7.2 \\
&=& 108\,{\rm m}
Note the subtle difference between these two ways. In the first approach, we have an approximate solution, but the second one is exact. To get an exact distance in the first solution, we must determine the car's acceleration with all decimal digits!
3) An object uniformly accelerates at a rate of $1.00\,{\rm m/s^{2}}$ east. While accelerating at this rate
An object uniformly accelerates at a rate of $1.00\,{\rm m/s^{2}}$ east. While accelerating at this rate, the object is displaced $417.2\,{\rm m}$ east in $27.0\,{\rm s}$. What is the final velocity of the object?
the given data is
\text{acceleration},\ a &=& 1.00\,{\rm m/s^{2}} \\
\text{displacement},\ x &=& 417.2\,{\rm m} \\
\text{overall time},\ t &=& 27\,{\rm s} \\
\text{final velocity},\ v &=& ?
In all of the standard kinematic equations the initial velocity $v_0$ is ubiquitous. Here, the initial velocity is not given, so we can use a special equation which is $v_0$-free, i.e. $x-x_0=vt-\frac{1}{2}\,a\,t^{2}$, where $v$ is the velocity at time $t$. Therefore,
x-x_0 &=& vt-\frac{1}{2}\,a\,t^{2} \\
417.2 - 0 &=& v\,(27)-\frac{1}{2}\,(1)(27)^{2} \\
\Rightarrow v &=& \frac{417.2+364.5}{27}\\
&=& 28.95\,{\rm m/s} \qquad \text{East}
To find the direction of vector quantities such as displacement, velocity and acceleration, one should adopt a positive direction and then compare the sign of the desired quantity with that direction. Here, we choose the east direction as positive, so the final velocity, which comes out with a positive sign, is toward the east.
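A quick numerical check of this solution (my own sketch, not part of the original text):

x, a, t = 417.2, 1.0, 27.0
v = (x + 0.5 * a * t**2) / t     # rearranged from x - x0 = v*t - (1/2)*a*t^2
print(round(v, 2))               # about 28.95, i.e. roughly 29 m/s east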
4) An object accelerates uniformly from rest at a rate of $1.9\,{\rm m/s^{2}}$ west for $5.0\,{\rm s}$. Find:
An object accelerates uniformly from rest at a rate of $1.9\,{\rm m/s^{2}}$ west for $5.0\,{\rm s}$. Find:
(a) the displacement
(b) the final velocity
(c) the distance traveled
(d) the final speed
The given data,
\text{Initial velocity},\,v_0 &=& 0\\
\text{Acceleration},\,a &=& 1.9\,{\rm m/s^{2}} \\
\text{Time interval},\,t &=& 5\,{\rm s} \\
(a) Use the following equation,
x-x_0 &=& \frac{1}{2}\,at^{2}+v_0 t \\
x- 0 &=& \frac{1}{2}\,(1.9)(5)^{2}+0(5) \\
x &=& 23.75\,{\rm m}
In the second line, for convenience, we simply adopt the initial position ($x_0$) at time $t=0$ as $0$.
(b) the equation below gives the velocity at the end of time interval
v &=& v_0+at \\
v &=& 0+(1.9)(5)\\
&=& +9.5\,{\rm m/s} \qquad {West}
(c) In straight-line motion, if the velocity and acceleration have the same sign, the speed of the moving object increases. Here, by establishing a coordinate system and choosing west as the positive direction, we can see that the acceleration and velocity are in the same direction. Therefore, the object moves west without changing direction. In this type of motion, where the object does not change its direction, the magnitude of the displacement and the distance traveled are the same. Thus, as calculated in (a), the total distance is approximately $24\,{\rm m}$.
(d) By the reasoning of (c), since the direction of the motion does not change, the magnitudes of the vector quantities equal the values of the corresponding scalar ones. Here, the magnitude of the final velocity ($v=9.5\,{\rm m/s}$) is equal to the final speed.
5) A ball is thrown upwards with a speed of $24\,{\rm m/s}$. Take the acceleration due to gravity to be $10\,{\rm m/s^{2}}$.
A ball is thrown upwards with a speed of $24\,{\rm m/s}$. Take the acceleration due to gravity to be $10\,{\rm m/s^{2}}$.
(a) When is the velocity of the ball $12.0\,{\rm m/s}$ ?
(b) When is the velocity of the ball $-12.0\,{\rm m/s}$?
(c) What is the displacement of the ball at those times?
(d) What is the velocity of the ball $1.50\,{\rm s}$ after launch?
(e) What is the maximum height reached by the ball?
The kinematic equations of free falling motions are same as the horizontal straight-line motion but with some modifications. Here, the motions is in the vertical direction (the $y$ direction) and the acceleration is always downward with the magnitude of $a_y =-g=-10\,{\rm m/s^{2}}$.
Now, applying the above changes to the following kinematic equation in the horizontal direction, we obtain
v_x &=& v_{0x}+ a_y t \\
v_y &=& v_{0y} + (-g)t \\
12 &=& 24 + (-10)t \\
\Rightarrow t &=& 1.2\,{\rm s}
Recall that velocity is a vector, so in these equations its sign is important. Therefore,
-12 &=& 24+(-10)t \\
\Rightarrow t &=& 3.6\,{\rm s}
(c) The only equation which involves a relation between displacement and time is $y_1 -y_0 = \frac{1}{2}\,a_y t^{2}+v_{0y}t$. To solve the kinematic problems, we should first establish a coordinate system. Here, we place the origin of that coordinate system at the ground where the thrower is located. Using $v_{0y}=24\,{\rm m/s}$ and $y_0 = 0$, we have
y_1 -y_0 &=& \frac{1}{2}\,a_y t^{2}+v_{0y}t \\
y_1 -y_0 &=& \frac{1}{2}\,(-g) t^{2}+v_{0y}t \\
y_1 - 0 &=& \frac 12\, (-10)(1.2)^{2}+ (24)(1.2) \qquad \text{at time $t=1.2\,{\rm s}$} \\
\Rightarrow y_1 &=& 21.6\,{\rm m} \\
\text{And at time $t=3.6\,{\rm s}$:} \\
y_2 - 0 &=& \frac 12\, (-10)(3.6)^{2}+ (24)(3.6) \\
\Rightarrow y_2 &=& 21.6\,{\rm m}
The amounts of displacement in the two cases are equal! This shows that the ball is at the same height relative to the ground at times $1.2\,{\rm s}$ and $3.6\,{\rm s}$. Such a thing is possible because the ball passes this height twice, once moving upward and once moving downward with the same speed.
(d) Use the following equation
v_y &=& v_{0y} + a_y t \\
v_y &=& 24+(-10)(1.5) \\
\Rightarrow v_y &=& +9\,{\rm m/s}
(e) Choose the initial and final points at the beginning and the end of upward flight. First, find the time at which the ball reaches its maximum height,
v_{yf} &=& v_{0y}+a_y t \\
0 &=& 24+(-10)t_{\max} \\
\Rightarrow t_{\max} &=& 2.4\,{\rm s}
$v_{yf}$ is the object's velocity at the end of climbing journey where it is zero.
Now that the maximum time is found, substitute it into the following equation to find the corresponding maximum height.
y_{\max} -y_0 &=& \frac{1}{2}\,(-g) t_{\max}^{2}+v_{0y}t_{\max} \\
y_{\max} -0 &=& \frac 12 (-10)(2.4)^{2}+24(2.4) \\
y_{\max} &=& 28.8\,{\rm m}
6) A stone is thrown vertically upwards with an initial speed of $10.0\,{\rm m/s}$ from a cliff that is $50.0\,{\rm m}$ high.
A stone is thrown vertically upwards with an initial speed of $10.0\,{\rm m/s}$ from a cliff that is $50.0\,{\rm m}$ high.
(a) When does it reach the bottom of the cliff?
(b) What speed does it have just before hitting the ground?
(c) What is the total distance traveled by the stone?
Take the acceleration due to gravity to be $10\,{\rm m/s^{2}}$.
Place the origin of the coordinate system where the stone is thrown, so $y_0=0$. In kinematic problems, one should specify two points and apply the kinematic equation of motion to those.
(a) Label the bottom of the cliff as $\textcircled{c}$. Therefore, given the initial velocity and the height of cliff, one can use the following kinematic equation which relates those to the fall time.
y - y_0 &=& \frac 12\, a_y t^{2}+v_{0y} t \\
y_{\textcircled{c}}-y_0 &=& \frac 12\,(-g)t^{2}+v_{0y}t \\
(-50)- 0 &=& \frac 12\,(-10)t^{2}+10t
Since the landing point is $50\,{\rm m}$ below the origin so its coordinate is $-50\,{\rm m}$. Rearranging above, we get a quadratic equation, $t^{2}-2t-10 =0$, whose solution gives the fall time.
Note : for a quadratic equation $ax^{2}+bx+c=0$, the values of $x$ which are the solution of it are given by the following relation
\[ x=\frac{-b\pm \sqrt{b^{2}-4ac}}{2a} \]
Therefore, using above relation we can get the fall time as
\begin{gather*}
t^{2}-2t-10 =0 \\
t=\frac{-(-2)\pm \sqrt{(-2)^{2}-4(1)(-10)}}{2(1)} \\
\Rightarrow t=4.31\,{\rm s}
\end{gather*}
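The same roots can be checked numerically (a small sketch of my own) with NumPy:

import numpy as np

roots = np.roots([1, -2, -10])   # coefficients of t^2 - 2t - 10 = 0
print(roots)                     # two roots, about 4.32 and -2.32; the positive one is the fall time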
(b) Substituting the fall time, computed in part (a), in the equation $v=v_0 +a_y t$ OR using the equation $v^{2}-v_0^{2}=2a_y (y-y_0)$, we can obtain the velocity at the moment of hitting to the ground.
v &=& v_0+a_y t \\
v_{\textcircled{c}} &=& v_{0y}+(-g)t \\
v_{\textcircled{c}} &=& 10+(-10)(4.31) \\
v_{\textcircled{c}} &=& -33.1\,{\rm m/s}\\
v^{2}-v_0^{2} &=& 2a_y (y-y_0)\\
v_{\textcircled{c}}^{2}-v_{0y}^{2} &=& 2(-g)(y_{\textcircled{c}}-y_0) \\
v_{\textcircled{c}}^{2}-(10)^{2} &=& 2(-10)(-50-0) \\
v_{\textcircled{c}}^{2} &=& 1100 \\
\Rightarrow v_{\textcircled{c}} &\approx& -33.2\,{\rm m/s}
(c) Applying the equation $v^{2}-v_{0y}^{2}=2(-g)(y-y_0)$ to find the distance traveled during climbing, then twice that value yield the total distance to the thrown point. Now add the cliff's height to find the total distance traveled by the object.
v_{\textcircled{b}}^{2}-v_{0y}^{2} &=& 2(-g)(y_{\textcircled{b}}-y_0) \\
0-(10)^2 &=& 2(-10)(y_{\textcircled{b}}-0) \\
\Rightarrow y_{\textcircled{b}} &=& 5\,{\rm m}
\text{total distance traveled} &=& 2y_{\textcircled{b}}+\text{cliff's height}\\
&=& 2(5)+50\\
&=& 60\,{\rm m}
where $\textcircled{b}$ is the highest point reached by the object.
7) A rock is thrown vertically down from the roof of $25.0\,{\rm m}$ high building with a speed of $5.0\,{\rm m/s}$.
A rock is thrown vertically down from the roof of $25.0\,{\rm m}$ high building with a speed of $5.0\,{\rm m/s}$.
(a) When does the rock hit the ground?
(b) With what speed does it hit the ground?
(a) First establish a coordinate system whose origin placed at the thrown point ($y_0 =0$). Now, use the equation $y-y_0=\frac 12\,(-g)t^{2}+v_{0y}t$, which relates fall time and displacement together, to find the desired value. Note that, since the initial velocity is downward and the rock hits at a point $25\,{\rm m}$ below the origin so they come with a minus in equations.
y-y_0 &=& \frac 12\,(-g)t^{2}+v_{0y}t \\
(-25)-0 &=& \frac 12\,(-10)t^{2}+(-5)t
In the end, we get a quadratic equation ,$t^{2}+t-5=0$, whose solutions give the fall time. Using the standard way of solution of quadratic equations, we have
t^{2}+t-5=0 \\
t=\frac{-(1)\pm \sqrt{(1)^{2}-4(1)(-5)}}{2(1)} \\
\Rightarrow t \approx 1.79\,{\rm s} \quad \text{(taking the positive root)}
In above, for a quadratic equation $ax^{2}+bx+c=0$, we find the solutions as $x=\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}$.
(b) Apply, $v=v_{0y}+(-g)t$ and substitute the time computed in (a) into it OR use, $v_y^{2}-v_{0y}^{2}=2(-g)(y-y_0)$. Therefore,
v_y &=& v_{0y}+(-g)t \\
v_y &=& (-5)+(-10)(1.79) \\
\Rightarrow v_y &=& -22.9\,{\rm m/s}
v_y^{2}-v_{0y}^{2} &=& 2(-g)(y-y_0) \\
v_y^{2} - (-5)^{2} &=& 2(-10)(-25-0) \\
v_y^{2} &=& 525 \\
\Rightarrow v_y &=& \pm 22.9\,{\rm m/s}
Note that the square root has two roots, but since the velocity vector of the rock points downward, we have to choose the negative one, i.e. $v_y =-22.9\, {\rm m/s}$.
8) A window is $1.50\,{\rm m}$ high. A stone falling from above passes the top of the window with a speed of $3.00\,{\rm m/s}$
A window is $1.50\,{\rm m}$ high. A stone falling from above passes the top of the window with a speed of $3.00\,{\rm m/s}$. When will it pass the bottom of the window? (Take the acceleration due to gravity to be $10\,{\rm m/s^{2}}$.)
The stone is fallen from the upper edge of the window so place origin of coordinate system at this point ($y_0 =0$). Since the vector of initial velocity is downward and the window's bottom edge is located $1.5\,{\rm m}$ below the origin, so we set $v_{0y}=-3\,{\rm m/s} , y=-1.5\,{\rm m}$ in kinematic equations.
(-1.5) - 0 &=& \frac 12\,(-10)t^{2}+(-3)t \\
-1.5 &=& -5t^{2}-3t
After rearranging above equation, we arrive at $t^{2}+0.6t-0.3=0$ whose solution is obtained as
t^{2}+0.6t-0.3=0 \\
t=\frac{-0.6 \pm \sqrt{(0.6)^{2}-4(1)(-0.3)}}{2(1)} \\
t_1 = 0.324\,{\rm s} \\
t_2 = -0.924\,{\rm s}
The above quadratic equation has two roots but the physical solution is the one with positive sign. The negative one indicates a time before we dropped the stone! Thus, we choose the positive solution i.e. $t=0.324\,{\rm s}$.
9) A ball is tossed with a velocity of $10\,{\rm m/s}$ directly vertically upward from the window located $20\,{\rm m}$ above the ground.
A ball is tossed with a velocity of $10\,{\rm m/s}$ directly vertically upward from the window located $20\,{\rm m}$ above the ground. Knowing that the acceleration of the ball is constant and equal to $9.81\,{\rm m/s^{2}}$ downward, determine:
(a) the velocity $v$ and elevation $y$ of the ball above the ground at any time $t$.
(b) the highest elevation reached by the ball and the corresponding value of $t$.
(c) the time when the ball will hit the ground and the corresponding velocity.
(a) First, such as all kinematic problems, establish a coordinate system whose origin is placed at the point where the ball is tossed. In this point, we set $v_{0y}=+10\,{\rm m/s}$ and $y_0=0$. The positive is due to the upward direction of the initial velocity's vector.
The velocity of a falling object at any later time $t$ is given by
\[ v_y = v_{0y}+(-g) t \]
Where the vertical constant acceleration $a_y$ is replaced by the always downward free-falling acceleration $-g$. Thus, substituting the numerical values in above, we get
\[ v_y = 10-9.81t \]
The displacement at that given time interval is obtained as
\[ y-y_0 = \frac 12\,(-g)t^{2}+v_{0y}t \]
putting the values, gives
y-0 &=& \frac 12\,(-9.81)t^{2}+(10)t \\
y &=& -4.905t^{2}+10t
Note that the equation above gives the distance at any time relative to the throw's point.
(b) At the highest elevation, the vertical velocity of an falling object is always zero i.e. $v_y=0$. Therefore, using the above equation for the velocity at any time $t$, we have
v_y &=& v_{0y}+(-g) t \\
0 &=& +10 - 9.81t \\
\Rightarrow t &=& 1.019\,{\rm s}
Now, substitute this time value into the equation of distance at any time
y-0 &=& \frac 12\,(-9.81)(1.019)^{2}+(10)(1.019) \\
y &=& 5.096\,{\rm m}
Adding the height of windows, we can obtain the elevation from the ground at any time i.e. total distance $= 20+5.096=25.09\,{\rm m}$.
(c) The ball hit the ground where its coordinate is $20\,{\rm m}$ below origin that is we should set, $y=-20\,{\rm m}$ in the distance equation above and solving for the time.
(-20) - 0 &=& \frac 12(-9.81)t^{2}+10t \\
Rearranging above, we get a quadratic equation, $4.905t^{2}-10t-20=0$, whose $t$ solutions are obtained as
4.905t^{2}-10t-20=0 \\
t=\frac{-(10) \pm \sqrt{(-10)^{2}-4(4.905)(-10)}}{2(4.905)} \\
\Rightarrow t =
\cases{
t_1 = 3.281\,{\rm s} \cr
t_2 = -1.243\,{\rm s}}
The negative time refers to a time before the ball is thrown! which is obviously incorrect. Thus, we choose the correct positive fall time, $t_1=3.281\,{\rm s}$.
The velocity at the moment of hitting to the ground is obtained by equations, $v_y^{2}-v_{0y}^{2}=2(-g)(y-y_0)$ or $v_y = v_{0y}+(-g)t$. Note that in the latter you should put the time fall computed previously back into it. Therefore,
v_y^{2}-(10)^{2} &=& 2(-9.81)(-20-0) \\
v_y^{2} &=& 492.4 \\
\Rightarrow v_y &=& \pm 22.19\,{\rm m/s}
The $\pm$ shows that there is two mathematical solutions which should be chosen by the physical reasoning. Since at the moment of hitting to the ground, the ball's vector velocity is downward so the correct sign is negative and thus, $v_y=-22.19\,{\rm m/s}$.
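To tie the pieces of this problem together, here is a compact Python recap (my own sketch; g, the initial speed and the window height are taken from the problem statement):

g, v0, h = 9.81, 10.0, 20.0

def y(t):
    return -0.5 * g * t**2 + v0 * t      # elevation above the window at time t

def v(t):
    return v0 - g * t                    # velocity at time t

# Positive root of (1/2)*g*t^2 - v0*t - h = 0, i.e. 4.905 t^2 - 10 t - 20 = 0.
t_hit = (v0 + (v0**2 + 2 * g * h) ** 0.5) / g
print(round(t_hit, 3), round(v(t_hit), 2))   # about 3.281 s and -22.19 m/s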
10) A $3.0\,{\rm Kg}$ ball is thrown vertically into the air with an initial velocity of $15\,{\rm m/s}$.
A $3.0\,{\rm Kg}$ ball is thrown vertically into the air with an initial velocity of $15\,{\rm m/s}$. The maximum height of the ball is
(a) $12\,{\rm m}$
(b) $11.5\,{\rm m}$
(c) $10.0\,{\rm m}$
(d) $9.5\,{\rm m}$
(e) $11\,{\rm m}$
Place the origin of the coordinate system at the ball's thrown point so $y_0 =0$. Apply the following kinematic equation to find the maximum height where the vertical velocity is zero, $v_y = 0$,
0 - (15)^{2} &=& 2(-9.8)(h_{\max}-0) \\
\Rightarrow h_{\max} &=& 11.47\,{\rm m}
The correct answer is (b) which is near the above result.
11) An object starts from rest with an acceleration of $2.0\,{\rm m/s^{2}}$ that lasts for $3.0\,{\rm s}$. It then reduces its acceleration to $1.0\,{\rm m/s^{2}}$ that last for $5.0$ additional seconds.
An object starts from rest with an acceleration of $2.0\,{\rm m/s^{2}}$ that lasts for $3.0\,{\rm s}$. It then reduces its acceleration to $1.0\,{\rm m/s^{2}}$ that last for $5.0$ additional seconds. The velocity at the end of the $5.0\,{\rm s}$ interval is,
(a) $2\,{\rm m/s}$
(b) $3\,{\rm m/s}$
(c) $4\,{\rm m/s}$
(d) $5\,{\rm m/s}$
(e) $11\,{\rm m/s}$
The motion described has two stages. In stage one, find the velocity at the end of $3\,{\rm s}$, which is considered as the initial velocity for the second stage. Therefore,
v &=& v_0 + a_x t \\
v_1 &=& 0 + 2(3) \\
\Rightarrow v_1 &=& 6\,{\rm m/s}
where $v_0$ and $v_1$ are the initial velocity and the velocity at the later time $t=3\,{\rm s}$. Now, repeat this process for the second stage
v &=& v_0 + a_x t\\
v_2 &=& v_1 + a_x t \\
v_2 &=& 6 + (1)(5) \\
&=& 11\,{\rm m/s}
Thus, the object's velocity at the end of $5$ seconds is $11\,{\rm m/s}$.
12) An object initially traveling at a velocity of $2.0\,{\rm m/s}$ west accelerates uniformly at a rate of $1.3\,{\rm m/s^{2}}$ west.
An object initially traveling at a velocity of $2.0\,{\rm m/s}$ west accelerates uniformly at a rate of $1.3\,{\rm m/s^{2}}$ west. During this time of acceleration, the displacement of the object is $15\,{\rm m}$. Find:
(a) the final velocity
(b) the final speed
\text{Initial velocity},\,v_0 &=& 2.0\,{\rm m/s}\\
\text{Displacement},\,(x-x_0) &=& 15\,{\rm m} \\
Recall that the difference between velocity and speed is in their definitions. Velocity is a vector quantity whose magnitude appears in all kinematic equations but the speed is scalar which depends on the total distance of the moving body. Since the object moving along a straight line without any change of direction, so at the end of a given time interval, its speed and velocity is the same. Therefore,
v^{2} - v_0^{2} &=& 2a_x (x-x_0) \\
v^{2} -(2)^2 &=& 2(1.3)(15) \\
v^{2} &=& 43 \\
\Rightarrow v &=& \sqrt{43}\\
&=& 6.55\,{\rm m/s}\qquad \text{West}
Thus,
(a) final velocity is $6.55\,{\rm m/s}$ toward west.
(b) final speed is $6.55\,{\rm m/s}$.
13) A bungee cord is $11.0\,{\rm m}$ long. What will be the velocity of a bungee jumper just as the cord begins to stretch?
A bungee cord is $11.0\,{\rm m}$ long. What will be the velocity of a bungee jumper just as the cord begins to stretch?
The initial velocity of a bungee jumper is usually zero since it is at rest just before the falling. Here, the cord's unstretched length can be thought of as the vertical displacement, $y-y_0$, of the jumper. Thus, apply the following kinematic equation to the vertical direction and find the final velocity just before the cord is stretched.
v^{2} -v_{0y}^{2} &=& 2a_y (y-y_0) \\
v^{2} -v_{0y}^{2} &=& 2(-g) (y-y_0) \\
v^{2} -0 &=& 2(-9.81)(-11) \\
\Rightarrow v &=& 14.7\,{\rm m/s}
The displacement is set to be negative since we placed the origin of the coordinate system at the jumper's falling point i.e. $y_0 = 0$. Therefore, the cord's end is located $y = -11\,{\rm m}$ below the origin. The $\pm$ indicates physically the direction of velocity. Since the it is toward the falling direction, so the correct sign is minus.
14) How long will it take a cross-country skier traveling $5.0\,{\rm km/h}$ to cover a distance of $3.50\,{\rm km}$?
How long will it take a cross-country skier traveling $5.0\,{\rm km/h}$ to cover a distance of $3.50\,{\rm km}$?
Since during this distance, the velocity of the skier is uniform so its acceleration is zero and we should use the following equation, which is the definition of average velocity, to find the movement time as
v &=& \frac{\Delta x}{\Delta t} \\
\Rightarrow t-t_0 &=& \frac{x-x_0 }{v} \\
&=& \frac{3.5\,{\rm Km}}{5\,{\rm Km/h}} \\
&=& 0.7\,{\rm h}\\
&=& 0.7 \times 3600\,{\rm s} \\
&=& 2520\,{\rm s}
In the second line, the definition of $\Delta$ is used and the initial values of $x_0, t_0$ are set to zero.
15) If a stone is thrown vertically upward with a velocity of $9.0\,{\rm m/s}$, what is its
If a stone is thrown vertically upward with a velocity of $9.0\,{\rm m/s}$, what is its
(a) Displacement after $1.5\,{\rm s}$?
(b) Velocity after $1.5\,{\rm s}$?
(a) First, adopt a coordinate system whose origin, for simplicity, is placed at the throw's point i.e. in equations, set $y_0 =0$. The given values are that of initial velocity and elapsed time and the displacement is the only unknown quantity, so the only kinematic equation which relates those together is following
y-y_0 &=& \frac 12 \,(a_y)t^{2}+v_{0y}t \\
&=& \frac 12 \,(-g)t^{2}+v_{0y}t \\
&=& \frac 12 \, (-10)(1.5)^{2}+(9)(1.5) \\
&=& 2.25\,{\rm m}
(b) Using the equation $v_y = v_{0y}+(-g)t$, one can find the corresponding velocity at later time $t$.
v_y &=& v_{0y} +(-g)t \\
&=& 9 + (-9.81)(1.5) \\
&=& - 5.71\,{\rm m/s}
The negative indicates that the direction of the stone's velocity is downward.
16) A stone is thrown vertically upward and it returns to the thrower $3.2\,{\rm s}$ later.
A stone is thrown vertically upward and it returns to the thrower $3.2\,{\rm s}$ later.
(a) What is the stone's maximum displacement?
(b) What is the velocity of the stone when it is released by the thrower?
(a) Since the stone returns to its initial position, the total displacement is zero and the elapsed time $t=3.2\,{\rm s}$ is the total flight time ($t_{tot}$). Due to the lack of air resistance, half of the total flight time gives the time ($t_{top}$) at which the object reaches its maximum height. Therefore,
\[ t_{top} = \frac 12 \,t_{tot} \]
Here, the initial velocity, which is ubiquitous in kinematic equations, is not given so we can use the following special equation which is $v_0$-free as
y - y_0 &=& v_y t - \frac 12\, (-g)t^{2} \\
H - y_0 &=& v_{top} t_{top} - \frac 12\, (-g)t_{top}^{2} \\
H - 0 &=& 0(1.6) - \frac 12 \, (-9.81)(1.6)^{2} \\
&=& 12.55\,{\rm m}
In second line, we labeled the maximum height and the corresponding velocity as $H$ and $v_{top}$. Velocity at the maximum distance is always zero i.e. $v_{top} = 0$. In addition, we placed the origin of coordinate system at throw's point so $y_0 = 0$.
(b) Using the equation $v_y = v_{0y}+(-g)t$ and substituting the known values of maximum height, $v_{top}=0$ , $t_{top}$, we can find the unknown stone's initial velocity as
v_{top} &=& v_{0y}+(-g)t_{top} \\
0 &=& v_{0y} + (-9.81)(1.6) \\
\Rightarrow v_{0y} &=& +15.7\,{\rm m/s} \qquad \text{Upward}
17) A car is traveling north on a city street. It starts from rest at a stop light and accelerates uniformly at a rate of $1.3\,{\rm m/s^{2}}$ until it reaches the speed limit of $14\,{\rm m/s}$.
A car is traveling north on a city street. It starts from rest at a stop light and accelerates uniformly at a rate of $1.3\,{\rm m/s^{2}}$ until it reaches the speed limit of $14\,{\rm m/s}$. The car will travel at this velocity for $3.0$ minutes and will then decelerate at a uniform rate of $1.6\,{\rm m/s^{2}}$ until it comes to a stop at the next stop light. How far apart are the two lights?
This problem has three stages. In stage $I$, the uniform-acceleration stage, the given values are
\begin{align*}
\text{initial velocity:}\quad v_{0I} &= 0 \\
\text{acceleration:}\quad a_I &= 1.3\,{\rm m/s^{2}} \\
\text{final velocity:}\quad v_I &= 14\,{\rm m/s}
\end{align*}
With these, the only kinematic equation that relates them and yields the unknown distance is $v^{2}-v_0^{2} = 2a\Delta x$. Therefore,
\begin{align*}
v_I^{2}-v_{0I}^{2} &= 2a_I \Delta x_I \\
(14)^{2} - 0 &= 2(1.3)\Delta x_I \\
\Rightarrow \Delta x_I &= 75.38\,{\rm m}
\end{align*}
where $\Delta x_I$ is the distance traveled in the first stage.
After the initial uniformly accelerated motion, the car moves at constant velocity. In this stage $II$, we use the definition of average velocity to find the distance traveled:
\begin{align*}
\Delta x_{II} &= v \times \Delta t \\
&= 14 \times (3\times 60\,{\rm s}) \\
&= 2520\,{\rm m}
\end{align*}
In the last stage, proceeding as before, we have
\begin{align*}
\text{initial velocity:}\quad v_{0III} &= 14\,{\rm m/s} \\
\text{acceleration:}\quad a_{III} &= -1.6\,{\rm m/s^{2}} \\
\text{final velocity:}\quad v_{III} &= 0
\end{align*}
Note that the speed limit from stage $II$ is used as the initial velocity of the third stage, and the minus sign indicates the decelerating character of the motion. Thus, the distance traveled in this stage is obtained as
\begin{align*}
v_{III}^{2}-v_{0III}^{2} &= 2a_{III} \Delta x_{III} \\
0 - (14)^{2} &= 2(-1.6)\Delta x_{III} \\
\Rightarrow \Delta x_{III} &= 61.25\,{\rm m}
\end{align*}
Therefore, the total distance traveled by the car between the stop lights is
\[ \Delta x_I + \Delta x_{II} + \Delta x_{III} = 2656.63\,{\rm m} \]
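The three distances can likewise be checked numerically; a short Python sketch (variable names are ours):

```python
# Problem 17: distance between the two stop lights, stage by stage.
a1, a3 = 1.3, -1.6             # accelerations in stages I and III (m/s^2)
v = 14.0                       # speed limit (m/s)
t2 = 3.0 * 60                  # duration of the constant-velocity stage II (s)

dx1 = v**2 / (2 * a1)          # stage I:   v^2 - 0   = 2 a1 dx1
dx2 = v * t2                   # stage II:  uniform motion
dx3 = -v**2 / (2 * a3)         # stage III: 0 - v^2   = 2 a3 dx3

print(dx1, dx2, dx3, dx1 + dx2 + dx3)
# -> 75.38...  2520.0  61.25  2656.63...
```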
18) A car moves along the $x$ axis as shown in the following figure. Its position at the instant $t_a=2\,{\rm s}$ is at $A$. It is positioned at $B$ at time $t_b=5\,{\rm s}$, then returns and passes through the point $C$ at $t_c=10\,{\rm s}$.
a) What is the average speed and velocity of the car between the times $t_a$ and $t_b$?
b) Now find the above quantities in the time interval $t_a$ and $t_c$.
a) We denote the position of the car at any moment of time as $x_i(t)$, where the subscript $i$ labels the location of the car. The distance ($d$) and the magnitude of the displacement ($|\vec{d}|$) in the interval from $t_a$ to $t_b$ are
\begin{align*}
d &= x_B(t_b)-x_A(t_a)\\
&= 16-(-8)=24\,{\rm m}\\
|\vec{d}| &= x_B(t_b)-x_A(t_a)\\
&= 16-(-8)=24\,{\rm m}
\end{align*}
Since in the interval from $t_a$ to $t_b$ the motion is along a straight line and the direction of the movement does not change, $d=|\vec{d}|$. The average speed $v$ and the magnitude of the average velocity $|\vec{v}_{av}|$ are therefore also the same:
\begin{align*}
v=|\vec{v}_{av}| &= \frac{\text{distance or displacement}}{\text{time interval}}\\
&= \frac{24}{5-2}=8\,{\rm m/s}
\end{align*}
b) Connecting the initial and final positions gives the displacement vector $\vec{d}$; dividing its magnitude $|\vec{d}|$ by the whole time interval $\Delta t=t_c-t_a$ then gives the average velocity $|\vec{v}_{av}|$.
\begin{align*}
\Delta \vec{x}_{AC} =\vec{d} &= x_C-x_A\\
&= 4-(-8)=12\,{\rm m}\\
\therefore \quad |\vec{v}_{av}| &= \frac{|\Delta \vec{x}_{AC}|}{\Delta t}\\
&= \frac{12}{10-2}=\frac{3}{2}\,{\rm \frac{m}{s}}
\end{align*}
But since in the interval from $t_a$ to $t_c$, i.e., over the entire path, the motion has a turning point ($B$), the distance is not equal to the magnitude of the displacement.
\begin{align*}
\text{distance} &= |x_B-x_A|+|x_C-x_B|\\
&= \left|16-(-8)\right|+\left|4-16\right|=36\,{\rm m}\\
\text{average speed} &= \frac{\text{distance}}{\text{total time}}\\
&= \frac{36}{10-2}=4.5\,{\rm \frac{m}{s}}.
\end{align*}
Recall that distance is a scalar quantity, so we take the absolute value of each leg of the path.
Most useful formulas in one dimension:
Horizontal motion with constant acceleration:
\[v_x=v_0+a_xt\]
\[x=\frac 1 2 a_x t^2 +v_0t+x_0\]
\[v^2-v_0^2=2a\Delta x\]
Free falling motion:
\[v_y=v_0-gt\]
\[y=-\frac 1 2 gt^2+v_0t+y_0\]
\[v^2-v_0^2=-2g\Delta y\]
Why is $\mathbb{Z}_4\times\mathbb{Z}_6 / \langle (0,2)\rangle$ isomorphic to $\mathbb{Z}_4\times\mathbb{Z}_2$?
The subgroup generated by $\langle(0,2)\rangle$ has order $3$ and is given by $H=\{(0,2), (0,4), (0,6)\}$, so there must be $8$ elements in factor group.
In my book, it says that "$\mathbb{Z}_6$ factor is collapsed by a subgroup of order $3$, giving a factor group in the second factor of order $2$ isomorphic to $\mathbb{Z}_2$."
I don't understand this explanation. Can anyone help me?
abstract-algebra group-theory
hjhjhj57
hhohho
$\begingroup$ Any element of the quotient group is a coset whose representative can be taken to be (a,b) where b is 0 or 1, since we can always adjust an element in the full group by an element of the subgroup to get something into this form. I like the book's explanation. The quotient leaves the first factor alone and in the second factor we're taking Z_6/<2>, which is isomorphic to Z_2. $\endgroup$ – John Brevik Mar 14 '15 at 22:31
$\begingroup$ Is there reason why the identity element of $H$ is written as $(0,6)$ and not $(0,0)$? $\endgroup$ – Karl Mar 14 '15 at 22:56
$\begingroup$ other than my mistake, no $\endgroup$ – hho Mar 14 '15 at 23:08
Note that $\Bbb Z_4 \times \Bbb Z_6$ has $24$ elements, and $H = \{(0,0),(0,2),(0,4)\}$ has $3$, so $(\Bbb Z_4 \times \Bbb Z_6)/H$ has $8$ elements.
Note as well that $H = \{0\} \times \langle 2\rangle$, so it seems plausible that:
$(\Bbb Z_4 \times \Bbb Z_6)/H \cong (\Bbb Z_4/\{0\}) \times (\Bbb Z_6/\langle 2\rangle)$
But rather than prove the general theorem this is a special case of, let's just exhibit a surjective abelian group homomorphism:
$\phi: \Bbb Z_4 \times \Bbb Z_6 \to \Bbb Z_4 \times \Bbb Z_2$
with kernel $H$.
Specifically, let $\phi(a,b) = (a,b\text{ (mod }2))$. This is clearly onto, and we see at once that $H \subseteq \text{ker }\phi$.
On the other hand, if $\phi(a,b) = (0,0)$, we must have $a = 0$ (since $\phi$ is just the identity map on the first coordinate), and $b$ must be even, that is $b = 0,2,4$. This shows that $\text{ker }\phi \subseteq H$, and thus the two sets are equal.
So by the Fundamental Isomorphism Theorem, $(\Bbb Z_4 \times \Bbb Z_6)/H \cong \Bbb Z_4 \times \Bbb Z_2$.
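As a concrete sanity check (purely illustrative, and not part of the original answer's argument), one can enumerate $\phi$ directly in Python and verify that it is a surjective homomorphism whose kernel is exactly $H$:

```python
# Brute-force check that (Z4 x Z6)/H is isomorphic to Z4 x Z2, where H = <(0,2)>.
from itertools import product

G = list(product(range(4), range(6)))      # the group Z4 x Z6
H = {(0, 0), (0, 2), (0, 4)}               # subgroup generated by (0, 2)

def add(x, y):                             # componentwise addition in Z4 x Z6
    return ((x[0] + y[0]) % 4, (x[1] + y[1]) % 6)

def phi(x):                                # the map phi(a, b) = (a, b mod 2)
    return (x[0], x[1] % 2)

# phi is a homomorphism into Z4 x Z2 ...
assert all(phi(add(x, y)) == ((phi(x)[0] + phi(y)[0]) % 4, (phi(x)[1] + phi(y)[1]) % 2)
           for x in G for y in G)
# ... it is onto (the image has 8 elements), and its kernel is exactly H:
assert {phi(g) for g in G} == set(product(range(4), range(2)))
assert {g for g in G if phi(g) == (0, 0)} == H
print("(Z4 x Z6)/H ~ Z4 x Z2, with", len(G) // len(H), "cosets")
```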
A word about the explanation you were given: a homomorphism essentially "shrinks" its kernel to an identity. Given that all cosets of a subgroup are "the same size", the size of the kernel is "the shrinkage factor" (if the kernel had two elements, the size of the quotient group would be half the size of the original group). Note how $\phi$ in what I wrote above acts on each factor group of our direct product: it does nothing to $\Bbb Z_4$, and it identifies all the "even" elements of $\Bbb Z_6$-there are $3$ of these, so we get a cyclic subgroup of order $2$ (because $6/3 = 2$) in the quotient.
David Wheeler
Maharaja Nim
Preprint, 2013
We relax the hypothesis of a recent result of A. S. Fraenkel and U. Peled on certain complementary sequences of positive integers. The motivation is to understand the asymptotic behavior of the impartial game of \emph{Maharaja Nim}, an extension of the classical game of Wythoff Nim. In the latter game, two players take turns moving a single Queen of Chess on a large board, attempting to be the first to put her in the lower left corner, position $(0,0)$. Here, in addition to the classical rules, a player may also move the Queen as the Knight of Chess moves, still subject to the restriction that no coordinate increases on any move. We prove that the second player's winning positions are close to those of Wythoff Nim, namely they are within a bounded distance of the half-lines, starting at the origin, of slope $\frac{\sqrt{5}+1}{2}$ and $\frac{\sqrt{5}-1}{2}$, respectively. We encode the patterns of the P-positions by means of a certain \emph{dictionary process}, thus introducing a new method for analyzing games related to Wythoff Nim. Via Post's Tag productions, we also prove that, in general, such dictionary processes are algorithmically undecidable.
Impartial game
Dictionary process
Approximate linearity
Wythoff Nim
Complementary sequences
Game complexity
Urban Larsson
Chalmers, Matematiska vetenskaper, Matematik
Johan Wästlund
How to Convert a Non-deductible IRA Into a Roth IRA
By Barclay Palmer
Converting to a Roth IRA
Conversions: The Basics
The Conversion Formula
A Conversion Example
Where It Gets Tricky
Converting a Non-deductible IRA to a Roth IRA
Undoubtedly, the Roth IRA has some substantial advantages over a traditional IRA. For example, the Roth IRA offers tax-free withdrawals of contributions and earnings upon retirement, and required minimum distributions (RMDs) do not apply. Fortunately, traditional IRAs can be converted to Roth IRAs.
At one point, there were restrictions on conversions. However, in 2010, Congress eliminated the $100,000 income limit on Roth IRA conversions. This means that traditional IRA owners in all tax brackets can convert their accounts. Basically, individuals can convert their traditional IRA contributions to a Roth IRA with one caveat: a portion of the amount converted is subject to income tax.
First, a review of the basics. Within certain income limitations (these may change annually), taxpayers in lower tax brackets can receive an IRA contribution deduction on their federal tax returns for deposits made to traditional IRAs. Taxpayers with incomes above IRS limits can still contribute to IRAs; however, they are not entitled to an IRA deduction on their tax return. These nondeductible contributions form the cost basis of the account. Therefore, upon withdrawal, they are not taxed. Taxpayers with these contributions must file Form 8606 with the tax return.
(IRS Form 8606 is used to help determine the taxable portion of a distribution or conversion and must be filed in the distribution year.)
Note that a recharacterization (the reversal of an IRA conversion, such as from a Roth IRA back to a traditional IRA) was made illegal by the Tax Cuts and Jobs Act of 2017.
When an individual's traditional IRA balance is composed of deductible and nondeductible contributions, any amount distributed or converted from the traditional IRA is pro-rated to include a taxable and nontaxable portion of the assets.
The following formula is used to calculate the nontaxable amount:
\[
\textit{Non-taxable amount} = \frac{TNC}{TIB}\times (D \text{ or } C)
\]
where $TNC$ = total non-deductible contributions, $TIB$ = total IRA balance, $D$ = distribution amount, and $C$ = conversion amount.
As an example, if an individual has traditional IRA nondeductible contributions of $8,000 that have grown to $100,000, the non-taxable portion of an $8,000 conversion would be:
\[
(\$8{,}000 \div \$100{,}000)\times \$8{,}000 = \$640
\]
Of the $8,000 that is converted, $7,360 would be taxable:
\[
\$8{,}000 - \$640 = \$7{,}360
\]
This rule applies even if the deductible amounts and nondeductible amounts are held in separate traditional IRAs. Also note that if someone has multiple traditional IRAs, their total balances must be combined in the formula above to determine the amount that can be excluded from income (i.e., the amount that is nontaxable).
What if all of a person's IRA savings are composed of nondeductible IRA contributions? If so, they can convert their entire nondeductible IRA to a Roth IRA and will only have to pay taxes on the earnings.
For example, Susan Smith is in a 30% tax bracket this year, and she only has one IRA worth $100,000. The IRA is composed of $90,000 in nondeductible contributions and $10,000 in earnings. If she decides to convert the entire IRA to a Roth, she would only have to pay taxes on the earnings portion ($10,000). At a 30% tax rate, she would owe $3,000 in taxes to convert the entire $100,000 to a Roth.
If Smith had no earnings in this IRA, the entire $100,000 (all nondeductible contributions) could be converted with no tax liability. When earnings are present, the owner must consider if it would be more beneficial to pay the due taxes now, considering that the future benefit would be tax-free.
For an IRA that contains normal contributions, nondeductible contributions, and earnings, the rules of conversions are more complex. It would be fantastic if the nondeductible contributions could be singled out and only that portion be converted to the Roth tax-free. However, IRS rules prevent this strategy. Here is a look at the special tax treatment of partial conversions for owners with multiple IRA accounts or IRAs with both deductible and nondeductible contributions.
John Doe, a 30% taxpayer, has a traditional IRA worth $200,000 on Dec. 31, 2020, of which $100,000 is nondeductible contributions. Doe wants to convert $100,000 of this IRA to a Roth. Because Doe has $100,000 of non-deductible contributions in this traditional IRA, the assumption might be that he could convert the $100,000 of nondeductible contributions tax-free. Unfortunately, the IRS has a special formula that must be followed for an IRA with normal contributions.
\[
\textit{Tax-free percentage} = \frac{TND}{YV + C}
\]
where $TND$ = total non-deductible contributions, $YV$ = sum of the year-end values of all IRA accounts, and $C$ = conversion amount.
Thus, given the example above, John Doe would calculate the following:
\[
\$100{,}000 \div (\$200{,}000 + \$100{,}000) = \$100{,}000 \div \$300{,}000
\]
\[
\text{Tax-free amount of conversion} = 33\% \ (\text{or } \$33{,}333)
\]
Therefore, if John converts $100,000 to the Roth, he will have $33,333 ($100,000 x 33.3%) that is not taxed and $66,667 ($100,000 x 66.7%) that will be taxed at his 30% tax rate.
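The pro-rata arithmetic is easy to script; the Python sketch below reproduces the John Doe example (the function name and structure are illustrative only):

```python
# Pro-rata split of a partial Roth conversion (values from the John Doe example).
def conversion_split(nondeductible_basis, year_end_balance, conversion, tax_rate):
    # Tax-free fraction = basis / (year-end IRA balance + amount converted)
    tax_free_fraction = nondeductible_basis / (year_end_balance + conversion)
    tax_free = conversion * tax_free_fraction
    taxable = conversion - tax_free
    return tax_free, taxable, taxable * tax_rate

tax_free, taxable, tax_due = conversion_split(100_000, 200_000, 100_000, 0.30)
print(round(tax_free), round(taxable), round(tax_due))
# -> 33333 66667 20000  (about $33,333 tax-free, $66,667 taxable, $20,000 of tax at 30%)
```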
The common misconception is that the nondeductible contributions can be singled out and converted tax-free. Another misconception is that the nondeductible contributions are simply divided by the total value of the IRAs to determine the tax-exempt amount percentage. However, the formula is a little more complex. Understanding the rules will keep the IRS at bay. Consult with your tax professional to ensure that the appropriate forms are filed, and the calculations are accurate.
Mathematical "proof" of the stability of atoms?
I am trying to find proofs of the stability of an atom, say, for simplicity, the hydrogen atom. There are positive answers and negative answers in various atom models.
The naive "solar system" model of a negatively charged electron orbiting the positively charged nucleus is not stable: it radiates electromagnetic energy and will collapse.
The Bohr-Sommerfeld atom model seems to make stability a postulate.
The Schroedinger equation seems to give a "proof" of the stability of the hydrogen atom, because we have stable solutions corresponding to bound states.
Does anybody know if the Dirac equation or Quantum Electro-Dynamics can be used to prove the stability of a hydrogen atom?
Many thanks in advance for any references where I can learn more about this.
quantum-mechanics reference-request
$\begingroup$ This might interest you books.google.com/… $\endgroup$ – Yakov Shlapentokh-Rothman Jan 21 '13 at 18:02
$\begingroup$ The book by Lieb and Seiringer is indeed the ultimate reference here. $\endgroup$ – Abdelmalek Abdesselam Jan 21 '13 at 18:20
$\begingroup$ Quick clarification: the accepted answer I believe is for a collection of atoms. But for a single Hydrogen atom, the stability pretty much arises from the Schrodinger solution... you can calculate that the probability the electron will reside inside the nucleus is a nonzero but very small percentage. $\endgroup$ – Chris Gerig Jan 21 '13 at 20:14
$\begingroup$ @Chris Gerig: The probability that the electron is inside the nucleus isn't relevant to the stability of hydrogen. $\endgroup$ – Ben Crowell Jan 22 '13 at 0:35
I think you can find more in Lieb and Seiringer's book "The Stability of Matter in Quantum Mechanics", or see also Freeman Dyson http://www.webofstories.com/play/4415 and the book review http://arxiv.org/abs/1111.0170.
Uwe Franz
$\begingroup$ In particular, the reason for the stability of matter (i.e. why matter doesn't collapse in on itself) is due to the quantum degeneracy pressure (i.e. the Pauli exclusion principle). On the flip-side, if you want to talk about why we can stand on the ground without falling through it, then the dominate cause is electrostatic repulsion (i.e. the electromagnetic force). $\endgroup$ – Chris Gerig Jan 21 '13 at 20:11
$\begingroup$ @Chris Gerig: "the reason for the stability of matter (i.e. why matter doesn't collapse in on itself) is due to the quantum degeneracy pressure (i.e. the Pauli exclusion principle)." I don't think this is accurate. In section I, Lieb proves the stability of an isolated hydrogen atom, where the exclusion principle is irrelevant. If that calculation had come out a different way (say, because we changed the behavior of the electric force), then matter would be unstable for reasons having nothing to do with the exclusion principle. $\endgroup$ – Ben Crowell Jan 22 '13 at 0:44
$\begingroup$ "if you want to talk about why we can stand on the ground without falling through it, then the dominate cause is electrostatic repulsion (i.e. the electromagnetic force)." I don't think this is right either. Neither electromagnetic interactions nor the exclusion principle suffice to explain the normal force between your foot and the ground. You need both, and Lieb is forced to invoke both in section II: "The extra factor $N^{2/3}$ is essential for the stability of matter; if electrons were bosons, matter would not be stable." $\endgroup$ – Ben Crowell Jan 22 '13 at 0:51
$\begingroup$ I didn't say it was the sole cause, you definitely need both, but electric repulsion is the dominant cause. And stability of matter is different from the separation of two pieces of matter. $\endgroup$ – Chris Gerig Jan 22 '13 at 1:40
$\begingroup$ When you talk about the reason we don't fall into the ground, you're referring to the normal force between your foot and the dirt. When you talk about "why matter doesn't collapse in on itself," you're talking about internal normal forces within the matter (plus the fact that the individual atoms don't collapse). In both cases, we're discussing the microscopic explanation for a normal force. The explanation is fundamentally the same in both cases, and in both cases it requires both electrical interactions and the exclusion principle. $\endgroup$ – Ben Crowell Jan 22 '13 at 1:45
The first thing to say is that ordinary matter is actually not stable. Suppose a baseball-sized rock finds itself in the vacuum of outer space in the very distant future, isolated by the universe's accelerating expansion within its own cosmological horizon. Even within the standard model of particle physics, the rock will eventually decay by quantum-mechanical tunneling into more stable forms of matter. Over extremely long time scales, the result is believed to be that it will become a microscopic black hole, which then evaporates into other particles (mostly photons). (You will hear people say that this is the ultimate fate of all matter in the universe, which isn't actually right.) This kind of thing is discussed in Adams and Laughlin.
You asked about the stability of the hydrogen atom in various theories. There are some reasons to believe that the proton is unstable (google "proton decay"), in which case the hydrogen atom isn't actually stable. However, it is stable within specific models. Others have pointed out the Lieb paper, which in section I makes a specific technical argument about one type of stability for individual atoms according to one model. The model is the Schrodinger equation with a pointlike proton.
First off, there are really two things that are required in order to show that hydrogen is stable in this model, and Lieb only focuses on one of them, which is stability against a collapse of the electron's wavefunction so that it becomes bounded within an arbitrarily small distance from the proton.
The other type of stability that has to be demonstrated is stability against the electron's escape. Stability against escape is nontrivial. For example, the interaction between two neutrons is essentially purely attractive, and yet the two-neutron system is believed to be unbound. This is because the range of the force is so short (about $10^{-15}$ m). If the neutrons were to be confined within that distance of one another, they would have to have high kinetic energy, so they would fly apart. The reason hydrogen is bound is that the electrical force is long-range.
For hydrogen's stability against collapse, Lieb's argument is more complicated than it needs to be, because he unrealistically assumes a pointlike proton. Since protons are not really pointlike, compressing the electron to an arbitrarily small space $\epsilon$ near the center of the proton gives an electric field whose energy diverges to infinity like $1/\epsilon$. (If the proton were pointlike, then the external field would go to zero in this limit, so this argument would fail.)
Your question about quantum field theory is an interesting one. I think the nicest way to approach this is to look at the dimensionless and dimensionful quantities that you can form out of the relevant parameters. Most of the interesting physics can be understood in terms of two of these. There is the fine structure constant, $\alpha=ke^2/\hbar c\approx 1/137$, and the Bohr radius, $a_o=\hbar/mc\alpha$, where $m$ is the mass of the electron. In hydrogen, the typical velocity of the electron is $\alpha c$, and since this is small compared to c, you don't really need quantum field theory for hydrogen. The Schrodinger equation, which is nonrelativistic, is an excellent approximation. However, if you make a hydrogenlike atom consisting of a nucleus with atomic number $Z$ plus a single electron, the velocity in units of $c$ is on the order of $Z\alpha$. For large $Z$, this shows that you need relativity, and quantum field theory.
The Bohr radius is the only quantity you can form here with units of length. That suggests, without the need for explicit solution of the Schrodinger equation, that not only does hydrogen not collapse to an arbitrarily small size (as shown by Lieb's argument), but we expect it to reach a certain size which is basically the Bohr radius times some factor of order unity.
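As a quick numerical check of these scales (an illustrative Python sketch with rounded standard constants, added here for concreteness):

```python
# Rough numerical values for the quantities discussed above (SI units, rounded constants).
hbar = 1.0546e-34      # reduced Planck constant (J s)
c = 2.9979e8           # speed of light (m/s)
e = 1.6022e-19         # elementary charge (C)
k = 8.9876e9           # Coulomb constant (N m^2 / C^2)
m_e = 9.1094e-31       # electron mass (kg)

alpha = k * e**2 / (hbar * c)      # fine-structure constant, ~1/137
a0 = hbar / (m_e * c * alpha)      # Bohr radius, ~5.3e-11 m
print(f"1/alpha = {1/alpha:.1f}, a0 = {a0:.2e} m")

# Typical electron speed (in units of c) in a hydrogen-like ion of atomic number Z:
for Z in (1, 80):
    print(f"Z = {Z}: v/c ~ {Z * alpha:.3f}")   # relativistic effects matter for large Z
```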
Adams and Laughlin, http://arxiv.org/abs/astro-ph/9701131
Lieb, Rev Mod Phys 48 (1976) 553, http://www.pas.rochester.edu/~rajeev/phy246/lieb.pdf
Ben Crowell
Googling on the obvious took me to http://www.pas.rochester.edu/~rajeev/phy246/lieb.pdf
Chris Godsil
Journal of Therapeutic Ultrasound
Minimizing eddy currents induced in the ground plane of a large phased-array ultrasound applicator for echo-planar imaging-based MR thermometry
Silke M. Lechner-Greite1,
Nicolas Hehn1,2,
Beat Werner3,
Eyal Zadicario4,
Matthew Tarasek5 &
Desmond Yeo5
Journal of Therapeutic Ultrasound, volume 4, Article number: 4 (2016)
The study aims to investigate different ground plane segmentation designs of an ultrasound transducer to reduce gradient field induced eddy currents and the associated geometric distortion and temperature map errors in echo-planar imaging (EPI)-based MR thermometry in transcranial magnetic resonance (MR)-guided focused ultrasound (tcMRgFUS).
Six different ground plane segmentations were considered and the efficacy of each in suppressing eddy currents was investigated in silico and in operando. For the latter case, the segmented ground planes were implemented in a transducer mockup model for validation. Robust spoiled gradient (SPGR) echo sequences and multi-shot EPI sequences were acquired. For each sequence and pattern, geometric distortions were quantified in the magnitude images and expressed in millimeters. Phase images were used for extracting the temperature maps on the basis of the temperature-dependent proton resonance frequency shift phenomenon. The means, standard deviations, and signal-to-noise ratios (SNRs) were extracted and contrasted with the geometric distortions of all patterns.
The geometric distortion analysis and temperature map evaluations showed that more than one pattern could be considered the best-performing transducer. In the sagittal plane, the star (d) (3.46 ± 2.33 mm) and star-ring patterns (f) (2.72 ± 2.8 mm) showed smaller geometric distortions than the currently available seven-segment sheet (c) (5.54 ± 4.21 mm) and were both comparable to the reference scenario (a) (2.77 ± 2.24 mm). Contrasting these results with the temperature maps revealed that (d) performs as well as (a) in SPGR and EPI.
We demonstrated that segmenting the transducer ground plane into a star pattern reduces eddy currents to a level wherein multi-plane EPI for accurate MR thermometry in tcMRgFUS is feasible.
Transcranial magnetic resonance (MR)-guided focused ultrasound (tcMRgFUS) has become a promising technology for non-invasive treatment of several types of brain diseases and for functional neurosurgery in particular [1–4]. Unlike established treatment options such as deep brain stimulation, tissue resection, or RF ablation, tcMRgFUS does not require invasive procedures that carry high risks of complications including infection and hemorrhages. Furthermore, in contrast to non-surgical cancer treatment modalities like radiation therapy, tcMRgFUS does not use ionizing radiation. This allows for multiple treatment sessions without increased risks of collateral damage to remaining healthy tissues.
In tcMRgFUS, treatment monitoring is achieved by MR temperature mapping, using techniques that make use of the temperature-dependent proton resonance frequency shift (PRFS) phenomenon [5–9]. Today, only single slice phase images from a gradient echo-based (GRE) sequence are acquired at the location of the hot spot [10] with a temporal resolution of about 3 to 5 s. To increase the safety of tcMRgFUS clinical procedures, the spatial coverage of the temperature maps should be increased and multi-slice thermometry should be employed. This is especially important because of the risk of unintentional heating in areas outside the targeted region, e.g., through acoustic energy absorption in the skull [11, 12] or secondary acoustic foci. Increasing the spatial coverage of MR thermometry, however, may degrade the temporal resolution of temperature monitoring. As such, fast MR thermometry techniques are highly desirable in clinical tcMRgFUS to reduce acquisition time and, thus, increase spatial coverage of the hot spot.
One approach to increase temporal resolution is to use fast imaging sequences such as multi-shot gradient echo EPI for fast multi-plane image tracking and MR thermometry [13–17]. As one of the fastest imaging methods, EPI facilitates very fast 3D hot spot localization when used in MR thermometry. EPI also has other applications related to treatment planning for tcMRgFUS applications. It is the most commonly used MR imaging sequence for brain functional MRI (fMRI) [18] and diffusion-weighted imaging [19], which can provide critical information for pre-surgical treatment planning [20, 21] and post-interventional evaluations [2]. For example, Köhler et al. [13] and Mougenot et al. [14] used multi-shot EPI for fast multi-plane image tracking in MR thermometry where up to six slices could be acquired in the same time frame.
Clinically desirable spatial and temporal resolution can potentially be achieved with EPI. However, EPI is prone to imaging artifacts, such as B 0-inhomogeneity-induced geometric distortions. In addition, the increased activity of switching gradients inherent in this sequence induces undesired eddy currents, which in turn generate secondary magnetic fields that disrupt the carefully constructed arrangement of time-varying magnetic fields for spatial localization of nuclear spins. This induces geometrical distortions in the reconstructed images.
In a typical ultrasound applicator, among other conductive structures, a conductive ground plane is often present. During an EPI sequence, gradient-induced eddy currents may occur in this ground plane, which could cause significant geometrical distortions in the EPI images. Such distortions markedly degrade the spatial fidelity and accuracy of MR temperature mapping during tcMRgFUS procedures (Fig. 1a, b). To date, only eddy currents induced during RF excitation are compensated for in the research area of cryo-ablation [22, 23]. Dragonu et al. [24] proposed real-time geometric distortion correction based on field maps for gradient echo-recalled EPI images by considering off-resonance effects. A similar technique was applied by Samoudi et al. [25] where they added gaps to tungsten collimator geometries for single-photon emission computed tomography to reduce eddy currents induced by switching gradients of the MR system.
Magnitude image of an EPI scan (a) with phase encoding direction in anterior/posterior direction of a sagittal slice of a gel phantom mounted inside the transducer setup (c). The distortions also occur when changing the readout direction from a head to foot direction (b). c Phased-array transducer setup with a dedicated eight-channel phased-array receive coil and water cooling pipes for circulating water through the transducer
For the tcMRgFUS setup described here, the gradient-induced eddy currents distort the readout gradients, resulting in image artifacts and geometric distortions of the object under investigation. The eddy currents, which form primarily on the conductive material that serve as the transducer electrode ground plane, give rise to spatio-temporal variations of static magnetic field in the imaging field-of-view (FOV). These variations in the magnetic field induce non-linear geometric distortion in the MR images, which can significantly degrade image quality for temperature mapping.
Here, we investigated the impact of gradient-induced eddy currents on the accuracy of multi-plane EPI-based temperature maps in the presence of the transducer electrode ground plane of a tcMRgFUS applicator, by first performing finite element electromagnetic field (FEM) simulations to calculate the eddy current distribution on the ground plane and the reduction potential of certain segmentation patterns. Inspired by these results, different segmentation patterns were experimentally tested by acquiring (i) spoiled gradient echo scans (SPGR) and (ii) multi-shot EPI scans on a transducer mockup model, thereby accounting for all influencing aspects of a commercial MR scanner. The results will help transducer designers to improve the MR compatibility of future transducers without requiring a high degree of segmentation. Elements of this work were presented at the 2012 [26] and 2015 [27] ISMRM meetings, where we proposed a re-segmentation of the focused ultrasound transducer array ground plane into a star pattern to enable fast multi-plane temperature tracking.
Eddy currents are currents in conducting structures induced by fast-switching gradient fields that cause time-dependent field disturbances. The eddy currents create a secondary magnetic field that counteracts the desired field. Typically, it is assumed that eddy current dynamics behave reproducibly in space and time. The gradient and eddy current field can be expressed by a truncated spherical harmonics series referred by spherical harmonic decomposition [28]. In a certain region of interest, it is assumed that the eddy current field changes linearly. This assumption is the basis for eddy current compensation called gradient pre-emphasis as implemented in clinical MR scanners [29]. Other eddy current minimization techniques are targeted during gradient coil design phase [26] or by playing out suitably parameterized compensation pulses. For diffusion-weighted EPI for example, compensation of higher-order eddy current terms is mandatory [30]. The goal of eddy current compensation is to minimize the induction of eddy currents in an MR system such as the cryostat of the magnet. However, in tcMRgFUS, a transducer system is placed inside the FOV of the MR system. Hence, any conducting structure present in the transducer will cause additional eddy currents not anticipated by the calibration of the compensation parameterization. The induced eddy currents have certain strength; they also persist for a duration that depends on how fast the gradient field changes, its field strength, and the properties of the conducting material (e.g., conductivity, thickness, and skin depth) that characterize the depth of magnetic field penetration [29]. Here, we investigate the impact of gradient-induced eddy currents in the transducer electrode ground plane, using FEM simulations and phantom experiments.
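For intuition, eddy-current fields are commonly described by a linear time-invariant model in which the gradient slew rate is convolved with a sum of decaying exponentials; the following single-time-constant Python sketch is purely illustrative (the amplitude and time constant are placeholders, not values from this study):

```python
import numpy as np

def eddy_field(t, slew, amplitude=0.01, tau=5e-3):
    """First-order eddy-current field model: the gradient slew rate convolved
    with an exponential impulse response, B_ec(t) = -(dG/dt) * a * exp(-t/tau).

    t    : uniformly spaced time axis (s)
    slew : gradient slew-rate waveform dG/dt (T/m/s), same length as t
    """
    dt = t[1] - t[0]
    h = amplitude * np.exp(-t / tau)          # impulse response of the conductor
    return -np.convolve(slew, h)[: len(t)] * dt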
MR temperature mapping was performed based on the PRFS method [6, 7]. In general, a proton's resonance or Larmor frequency $\omega_0$ is defined by the product of the gyromagnetic ratio $\gamma$ and the externally applied magnetic field $B_0$. The precise Larmor frequency of a given proton is influenced by its atomic environment because any magnetic spins (electron or other magnetic nuclear spins) near it will generate small magnetic fields that add to or subtract from $B_0$. The change in the resonance frequency due to a change in temperature can be assessed by measuring the change in accrued phase in a series of GRE images. The temperature change $\Delta T$ can be extracted by subtracting a baseline phase image $\varphi_{T0}$, acquired before sonication, from the phase image $\varphi_T$ acquired during sonication, according to [31]
$$ \Delta T=\frac{\varphi_T-{\varphi}_{T0}}{\gamma \alpha {B}_0TE}, $$
where α is the PRFS temperature coefficient. The PRFS method is commonly used for MR thermometry because it is a simple, robust MR thermometry method in water-based tissues.
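Eq. (1) translates directly into code; the following Python sketch is illustrative, with placeholder defaults (for example the echo time) rather than the protocol values of Table 1:

```python
import numpy as np

def prfs_temperature_change(phase, phase_baseline, b0=3.0, te=13e-3,
                            alpha=-0.01e-6, gamma=2 * np.pi * 42.577e6):
    """Temperature change (deg C) from two GRE phase images, following Eq. (1).

    phase, phase_baseline : phase images in radians (during / before sonication)
    b0    : main field strength (T); te : echo time (s), placeholder value here
    alpha : PRFS coefficient (-0.01 ppm/deg C); gamma : 1H gyromagnetic ratio (rad/s/T)
    """
    dphi = np.angle(np.exp(1j * (phase - phase_baseline)))   # wrapped phase difference
    return dphi / (gamma * alpha * b0 * te)
```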
Simulation setup
FEM-based magnetic field simulations using the electromagnetic field simulation software Maxwell3D (Ansys, Canonsburg, USA) were performed to study the eddy currents induced on the surface of the transducer ground plane and to test the role of the copper ground plane in causing the image distortions. Maxwell3D solves for Maxwell equations using the finite element method to solve for static, frequency domain, and time-varying electromagnetic fields. More information on the Maxwell3D software can be found online [32]. The theory is described in detail in [33, 34]. The 3D simulation model included the y-gradient coil of an MR scanner only (GE Signa Excite II 3.0 T, General Electric, Milwaukee, USA) with a maximum gradient strength and slew rate of 49.5 mT/m and 150 T/m/s, respectively (Fig. 2c). Following the basic design of the InSightec ultrasound applicator (Imasonic, Imasonic SAS, Voray sur l'Ognon, France), a 0.25-mm thick copper hemisphere, 30 cm in diameter, represented the ground plane and was modeled at the iso-center of the gradient coil. The magnitudes of the eddy currents induced in a conducting structure vary with the surface area of the structure. Interrupting the flow minimizes the currents and hence the magnetic field distortions. This is typically done in RF shield design [35]. In simulation, five different ground plane segmentation patterns were designed to modify the eddy current flow; these patterns were evaluated with respect to active imaging gradients. The designed patterns were a full copper hemisphere, a segmented hemisphere similar to the real setup, a star pattern, a ring pattern, and a star-ring pattern (hereafter labeled as cases (b), (c), (d), (e), and (f), respectively, in the text and figures). According to the manufacturer, a technical realization of segmenting the transducer into patterns (d), (e), and (f) is possible without any technical restrictions. The different segmented ground plane models are shown in the third row of Fig. 3. The electrical conductivity of the patterns was set to that of copper in the simulations (κ = 5.8 × 107 S/m).
a Picture of the ground plane and a close up where the soldered joints are highlighted with the white arrows. b Picture of a CAD model of the copper layer of the ground plane, illustrating the soldering of seven segments to a continuous surface. The CAD model in b is used in FEM simulations to predict the induced eddy currents on its surface. c Schematic of the FEM model (full shield and primary y-coil) of the depicted MRI system. The hemisphere in the iso-center demonstrates the positioning of the transducer ground plane inside the gradient coil
(Top) Picture showing transducer mockup model containing reference plastic hemisphere (red arrow) and ADNI phantom (green arrow). (Middle) Pictures of different copper patterns attached to the outside of the hemisphere: a reference without copper sheet, b solid copper surface (one segment, average surface of segment ≈1413 cm2), c seven-segment clinical pattern (average surface of segments ≈199 cm2), d star pattern (32 segments, average surface of segments ≈37 cm2), e ring pattern (16 segments, average surface of segment ≈92 cm2), and f a combined star-ring pattern (64 segments, average surface of segment ≈27 cm2). (Bottom) Modeled patterns of FEM simulations with 36 (d), 17 (e), and 54 segments, respectively. Black arrows indicate the points of view of the ground planes when plotting the current densities in Fig. 4
For reference, a simulation was performed in magneto-static solving mode to replicate the situation without the copper ground plane (hereafter labeled as case (a) in the text and figures). The extracted fields inside a FOV with a diameter of 20 cm represent the gradient field without eddy currents (B ref). In a second simulation, the gradient coil model is pulsed in frequency solving mode where the induced eddy currents on the surface of the copper ground plane and the resulting counteracting magnetic fields are considered in the field solution. The extracted fields represent the gradient field with eddy currents (B ec). The frequency solving mode expects at least one frequency for which the model is solved and which goes into the skin depth calculation describing how deep the induced currents penetrate into the conducting structure. Here, a frequency of 700 Hz was chosen. The value was determined by Fourier transforming the time-dependent x- and y-gradient waveforms of the EPI pulse sequence as prescribed in Table 1 and by calculating the full width at half maximum frequency.
Table 1 A typical clinical protocol for SPGR-based temperature mapping, showing the SPGR and EPI sequence parameters used for experimental evaluation in this study
For each scenario, the current densities were evaluated, and the maximum gradient strengths $g_{max}$ were calculated from $B_{ref}$ and $B_{ec}$ using spherical harmonic decomposition [28] and compared to the theoretical gradient strength of 49.5 mT/m. To this end, the magnetic field, or more precisely the z component of the magnetic flux density $B_z$, was approximated by a truncated spherical harmonic series [33] inside an imaging volume of 20 cm diameter located at the iso-center of the y-gradient coil, which is referred to as spherical harmonic decomposition. The expansion into spherical harmonics contains spherical coordinates, Legendre polynomials and associated Legendre functions, and spherical harmonic coefficients. In the area of gradient coil design, the spherical harmonic coefficients represent the field strength in T/m$^n$. Here, the magnetic fields were extracted and decomposed into the linear-order term, expressed in tesla per meter.
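Because the first-order spherical harmonics are linear in the spatial coordinates, the linear gradient term can be estimated by an ordinary least-squares fit to the sampled field; the Python sketch below is a simplified stand-in for the full decomposition used here (which also includes higher-order terms):

```python
import numpy as np

def linear_gradient(points, bz):
    """Least-squares fit of the sampled field to Bz ~ gx*x + gy*y + gz*z + b0.

    points : (N, 3) sample coordinates (m) on the 20-cm evaluation sphere
    bz     : (N,) simulated Bz values (T)
    Returns the linear gradient vector [gx, gy, gz] in T/m.
    """
    A = np.column_stack([points, np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, bz, rcond=None)
    return coeffs[:3]      # gy can then be compared against the nominal 49.5 mT/m
```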
The ultrasound applicator of the InSightec ExAblate 4000 Neuro system (InSightec Ltd., Tirat Carmel, Israel) consists of a hemispherical 1024-element phased-array transducer operating at 650 kHz [4]. The transducer setup is interfaced to the MR scanner and is integrated into a patient table that can be docked to the MR system (Fig. 1c). Here, instead of the clinical transducer setup, a transducer mockup was positioned at the iso-center of the MR system, arranged in a manner similar to the real setup. The mockup consisted of a plastic hemisphere (Plexiglas® with 2-cm flange, Zeigis) 30 cm in diameter. In its center, a high-resolution quantification phantom (ADNI, Alzheimer's Disease Neuroimaging Initiative phantom, Magphan, EMR051, Phantom Laboratory, Salem, NY, 2006) [36] was positioned. The superior end of the hemisphere and the ADNI phantom was sealed, and the inside of the hemisphere was filled with demineralized water to represent the water bolus of the clinical setup [4]. The alternative ground plane segmentation patterns previously characterized by FEM simulations were implemented using a thin copper foil (Scotch 1181, 3 M, MN, USA, 0.07 mm thick, 9 mm width) applied to the outside of the plastic hemisphere (Fig. 3).
For all experiments, the mockup phantom was unheated and a room temperature of 22 °C was assumed. For both sequences, the integrated body coil of the MR system was used to transmit the RF signal and receive the MR signal. The different setups were positioned as identically as possible at the iso-center of the MR scanner. Two-dimensional multi-phase SPGR and EPI images were acquired for geometric distortion analysis and temperature maps in the axial, sagittal, and coronal planes were computed. Table 1 shows the respective MR imaging protocol parameters, chosen after considering the trade-off between achieving good EPI image quality and adhering to the SPGR protocol typically used in the clinic. Some parameters were also adapted to simplify phase and frequency sampling.
Geometric distortion quantification
The fine structures of the ADNI phantom enabled the use of regularized non-rigid registration based on a multi-resolution optical flow [37]. This algorithm generates vector maps of the geometrical distortions in all spatial directions in millimeters. In the experiments, the mockup transducer was positioned at the iso-center of the MR system and the SPGR and EPI images were acquired. To scan the next pattern, the mockup phantom was removed from the scanner, the ground plane was changed, and the phantom was positioned at marked positions at the iso-center of the MR scanner again. Despite careful calibration when installing the transducer mockup, changes in the phantom for the different copper patterns caused some minor spatial deviations. For this reason, the EPI images were registered to the significantly less distorted SPGR images of the same assembly, rather than to a standard reference image. The so-obtained geometrical distortion maps still differ due to the variable position of the excited slices; the variation caused by the copper shells should however be more significant.
Prior to registration, all magnitude images were masked to ensure that only (i) the area of the ADNI phantom and water bolus and (ii) regions with a high image intensity gradient are considered. Masks were generated for each scan axis individually and calculated by extracting the image intensity gradient of the magnitude image. A threshold number defines which points of the image intensity gradient were included into the masks. This threshold was selected such that the same amount of voxels could be guaranteed for all patterns. The resulting voxel numbers are listed in the figure captions. The means and standard deviations of the masked distortion maps were calculated to quantitatively compare the geometrical distortions of the different patterns.
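As an illustration of the thresholding step, a simplified Python sketch of such a gradient-based mask (a stand-in written for this description, not the code used in the study; the phantom/water-bolus region mask is omitted):

```python
import numpy as np

def gradient_mask(magnitude, n_voxels):
    """Keep the n_voxels voxels with the largest image-intensity gradient.

    The threshold is chosen so that every pattern contributes the same
    number of voxels, as described above.
    """
    gy, gx = np.gradient(magnitude.astype(float))
    grad_mag = np.hypot(gx, gy)
    threshold = np.sort(grad_mag.ravel())[-n_voxels]
    return grad_mag >= threshold
```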
Temperature map quantification
For each transducer ground plane setup and imaging plane, five consecutive phase images were acquired in the SPGR and EPI protocols to compute the corresponding temperature map with subsequent image acquisitions. For SPGR and EPI, the fourth phase image was defined as the baseline image φ T0 and subtracted from the fifth phase image φ T according to Eq. (1) to calculate temperature maps with a temperature coefficient of α = −0.01 ppm/°C [38] (see discussion for details on why the fourth phase image has been chosen). As the phantom is unheated, the expected temperature values should be close to zero. The real transducer creates a hot spot in the sub-thalamus of the brain. Hence, certain regions of interest were selected for each copper pattern and imaging plane, respectively, such that (i) high SNR could be guaranteed and (ii) the central regions of the ADNI phantom were covered. The voxels inside the regions were used for the temperature map calculation. This means that for the sagittal and coronal case, voxels in the upper and lower part of the ADNI phantom were omitted, whereas the axial case remains almost unchanged. To avoid plastic structures within the selected regions of interest, voxels with a high image intensity gradient were excluded. Both image registration and temperature map calculations were performed using Matlab (Mathworks Inc., Natick, MA).
Geometrical distortion and temperature map evaluation
To provide an overview of the geometric distortion and temperature map characteristics for each copper pattern, \( \frac{1}{\sigma } \) (σ is the standard deviation) representing the relative SNRs of each temperature map were plotted against the means and standard deviations of the geometrical distortions. Only the standard deviations of the temperature maps were considered because they represent the temperature variability depending on the copper patterns, whereas the means stayed near to 0 °C for most cases. This was done for both EPI and SPGR and separately for the sagittal, axial, and coronal axes. The geometric distortions are illustrated in the form of error bars, representing both the means and standard deviations. With this comparison, a mean near to 0 mm and a small standard deviation, but a high SNR, are favorable.
The top row of Fig. 4 shows the current densities induced on the copper surface for each of the five simulated scenarios (b) to (f). The simulation images are scaled from 0 to 40 kA/m2 and only one fourth of the model is displayed owing to the symmetry boundary conditions used to accelerate FEM simulation times. The specified gradient strength of 49.5 mT/m as in the reference scenario was compared with the gradient strengths achieved by the individual copper patterns. Scenario (b) showed strong eddy currents towards the top of the hemisphere. The maximum magnetic field inside a 20-cm FOV became almost zero; hence, the un-segmented hemisphere acted like a shield (Fig. 4, bottom). For pattern (c), the currents due to segmentation occurred further to the outside of the copper hemisphere but remained strong because of the large segment areas. With segmentation into seven parts, the maximum gradient strength decreased by 37.6 % relative to (a) and strong non-linearity in the computed field map was observed. The star pattern in (d) resulted in a field reduction of 1.4 % relative to (a), whereas the reduction was 0.6 % with the ring pattern (e). With the ring and star pattern (f), the gradient field strength decreased by 5.6 % owing to an increased shielding effect.
(Top) Densities of currents induced on the surfaces of the transducer ground planes when pulsing the y-gradient coil (scale 0 to 40 kA/m2). Current densities are plotted for patterns (b) to (f). Only one fourth of the model is shown given the symmetry boundary conditions in the FEM simulations. (Bottom) Calculated maximum gradient strengths on a 20-cm sphere inside the FOV of the gradient model, plotted for all segmentation patterns described in Fig. 3 (reference gradient strength g ref = 49.44 mT/m)
The sagittal, axial, and coronal imaging axes exhibited different geometrical distortion and temperature characteristics and are therefore listed separately. For all axes, the geometric distortion map of the reference case (a) represents the difference in geometric distortion between SPGR and EPI, which is based on the pulse sequence and system characteristics but unrelated to geometric distortions due to eddy currents on the transducer ground plane. The means and standard deviations of the geometric distortion maps of patterns (b) to (f) were compared with those of the reference scenario (a), assuming that further deviations in the difference between SPGR and EPI are mainly related to the additional copper patterns, while less significant deviations between the individual patterns are caused by the variable position of the respective excited slice.
Figure 5 summarizes the results of the sagittal plane showing the geometric distortion maps (bottom row), the temperature maps inside the ADNI phantom calculated for SPGR (top row) and EPI (middle row) for all patterns. Within the geometric distortion maps, the setups (b), (c), and (e) showed much higher means and standard deviations than cases (a), (d), and (f) and were thus scaled differently for better visualization (blue background). The star pattern (d) (3.46 ± 2.33 mm) and star-ring setup (f) (2.72 ± 2.80 mm) showed smaller distortions than the seven-segment sheet (c) (5.54 ± 4.21 mm) or the ring pattern (e) (5.95 ± 5.00 mm) and were both comparable to the reference scenario (a) (2.77 ± 2.24 mm). However, the star-ring was assumed to achieve the best result because its distortions were consistently smaller, although the conclusion could be influenced by the mentioned variation in positioning of the phantom and hence excited slice. The top and middle rows of Fig. 5 show the SPGR and EPI temperature maps inside the ADNI phantom scaled to ±5 °C. The listed mean and standard deviations were computed in certain pre-selected regions as described in the 'Temperature map quantification' section; however, the complete temperature maps are shown in the graphs. The means and standard deviations stayed near 0 °C with small deviations for the ring pattern. The first column of Fig. 6 summarizes the relationship between the geometric distortion and SNR expressed by \( \frac{1}{\sigma } \) for SPGR and EPI in the sagittal plane. These graphs show that scenarios (d) and (f) performed as well as the reference scenario (a) in terms of high SNR and small geometric distortions in SPGR, whereas in EPI, pattern (c) performed the best owing to its high SNR, albeit with higher standard deviations. For EPI, pattern (d) performed as well as the reference scenario (a).
(Top) Sagittal images of patterns (a) to (f) in SPGR with calculated temperature maps. (Middle) Sagittal images of patterns (a) to (f) in EPI with calculated temperature maps. The mean and standard deviations were calculated on 2585 voxels of predefined region of interest. (Bottom) Geometric distortion maps created by masks to generate about 5000 voxels. Two different color scales are used to facilitate comparison
SNR expressed as 1/σ (ordinate) plotted against the standard deviation of geometric distortion expressed in millimeters (abscissa) in SPGR (top) and EPI (bottom) for the sagittal (left), axial (center), and coronal (right) planes for all scenarios (a) to (f). The desired design space comprises high SNR and small geometric distortion error bars centered at 0 °C
Figure 7 summarizes the results of the axial plane, showing the geometric distortion maps (bottom row) and the temperature maps for SPGR (top row) and EPI (middle row) for all patterns. All segmentation scenarios showed geometric distortions comparable to those of the reference scenario (a), suggesting that for an axially oriented slice selected towards the outer hemisphere, the transducer ground plane is not a critical imaging plane, although the conclusion could be influenced by the mentioned variation in positioning of the phantom and hence excited slice, which is also mirrored by the better performing cases (d) and (f) compared to the reference scenario (a). The second column of Fig. 6 shows the relationship between the SNR and geometric distortion for SPGR and EPI. For SPGR, scenarios (d) and (f) performed as well as the reference scenario (a). Scenarios (d) and (f) showed high SNR and small geometric distortions for EPI as well, although (c) also performed nearly as well as the reference scenario.
(Top) Axial images of patterns (a) to (f) in SPGR with calculated temperature maps. (Middle) Axial images of patterns (a) to (f) in EPI with calculated temperature maps. The mean and standard deviations were calculated on 7488 voxels of a predefined region of interest. (Bottom) Geometric distortion maps created by masks to generate about 2500 voxels
The coronal plane is summarized in Fig. 8, showing the geometric distortion maps (bottom row), the temperature maps for SPGR (top row) and EPI (middle row) for all setups. As in the sagittal case, the distortion maps of the copper patterns (b), (c), and (e) showed much higher means and standard deviations than those of the reference case (a), and hence were scaled separately. The star pattern (d) (2.73 ± 1.62 mm) and star-ring pattern (f) (2.68 ± 2.09 mm) performed markedly better than the seven-segment setup (c) (4.80 ± 4.44 mm) and comparably to the reference scenario (a) (2.89 ± 1.85 mm). Note that in the coronal plane, the star pattern has a slight advantage over the star-ring setup. The means of the SPGR and EPI temperature maps shown in the top and middle rows are again around 0 °C for patterns (a), (c), (d), and (f). However, those of the SPGR temperature map of the solid sheet (b) (−1.21 ± 0.63)°C and the ring setup (e) (SPGR (0.93 ± 2.23)°C, EPI (−5.47 ± 1.23)°C) show large deviations.
(Top) Coronal images of patterns (a) to (f) in SPGR with calculated temperature maps. (Middle) Coronal images of patterns (a) to (f) in EPI with calculated temperature maps. The mean and standard deviations were calculated on 1458 voxels of a predefined region of interest. (Bottom) Geometric distortion maps created by masks to generate about 4000 voxels. Two different color scales are used to facilitate comparisons
The relationship between the geometric distortion and SNR for the coronal case is summarized in the third column of Fig. 6 for SPGR and EPI. Scenarios (d) and (f) performed as well as the reference scenario (a) in SPGR, whereas in EPI, patterns (c), (d), and (f) performed the best, albeit with high standard deviations for (c). Table 2 summarizes the results of this section, listing the best-performing transducer segmentations in terms of geometric distortions and temperature map calculations.
Table 2 Best-performing patterns for the different pulse sequences and imaging planes tested. The exclamation mark after (c) emphasizes the SNR problem addressed in the Discussion
The simulations in this study indicate that the ring pattern (e) delivered the best results with respect to shielding effects and the maximum gradient strength calculation. However, the experiments showed that the ring pattern performed worst with respect to geometric distortion and temperature map quantification in all imaging planes. One explanation is that the particular combination of gradient axes pulsed for the prescribed pulse sequence together with the orientation of the transducer patterns in the MR system has a strong impact on how the induced currents propagate on the surface of the rings. Only the y-gradient was simulated, whereas the EPI blips and readout gradient waveforms were played on the transverse and longitudinal gradient system in the experiment.
Due to computational limitations, the simulations were based on a set of assumptions, one of which was that only a single gradient axis was included. To minimize the difference between simulation and experimental outcome, the simulation model should include additional geometrical details of the MR gradient system, for example the x- and z-gradient coils; in addition, the thickness of the copper ground plane should be reduced to 2 μm, both at an increased computational cost. If these steps allow for a reproducible prediction between simulation and experiment, future work on the simulation side could study the influence of the full set of EPI frequency components. Here, the evaluation focused on a given clinical set of EPI sequence parameters, slightly changed for optimized image quality. It is suggested to additionally simulate a bandwidth of frequencies covering a typical EPI frequency range of 0.1 to 5 kHz. With this, the eddy current duration parameter would be described more generally. In tcMRgFUS, the transducer position can only be changed by ±4 mm along the z-axis and ±1 cm along the y- and x-axes. To describe the associated eddy current amplitude changes more generally, it is suggested to include these offset positions in the segmentation design. In addition, the z-gradient field change may also have induced currents on the ring structures, contributing to the poor qualitative overlap between simulation and experimental results for this structure. We note that the copper patterns used in the experiment differed from those simulated in that (i) the star and ring patterns had 4 and 1 segments fewer, respectively, and (ii) the star-ring pattern was created by further segmenting the star pattern, not the seven-segment sheet as in the simulation. The thickness of the copper material in the simulation also differed from that in the experiment. The resistance of the copper was measured to be in the range of 0.2 Ω, although a precise value could not be determined. Furthermore, the solid copper pattern was fixed on the inside of the plastic shell and therefore remained in direct contact with the demineralized water; this might influence the propagation of the induced eddy currents differently than in the other setups. To summarize, the FEM simulations indicated that the imaging artifacts resulted from eddy currents induced in the copper ground plane. The simulations provided a visible representation of the induced currents on the surface of the copper ground plane, and the simulated gradient field strengths helped to identify segmentation patterns that minimized the eddy current flow with respect to the actively pulsed gradient coil. Consequently, by considering the items discussed above, we expect that a more user-specific set of EPI parameters can be identified.
Geometric distortion quantification was achieved by registering the EPI images to the SPGR images to obtain a relative error expressed in millimeters. In this scenario, SPGR is regarded as the ground truth because it is the sequence used in clinical treatment. The geometric distortion maps of the different setups can be compared directly with each other because imaging parameters such as the SNR have a negligible influence on the image contrast masks of the registration algorithm.
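For illustration only, the sketch below shows one way such a per-voxel distortion map could be computed in Python. The study used a Lucas–Kanade-type registration (see the references); this sketch substitutes scikit-image's TV-L1 optical flow as a stand-in, and the in-plane voxel size is an assumed example value, not the acquisition resolution used here.

```python
import numpy as np
from skimage.registration import optical_flow_tvl1  # requires scikit-image

def distortion_map_mm(spgr_slice, epi_slice, voxel_size_mm=(1.0, 1.0)):
    """Per-voxel geometric distortion magnitude [mm] of an EPI slice,
    taking the corresponding SPGR slice as the ground truth.

    optical_flow_tvl1 is a stand-in for the Lucas-Kanade-type registration
    used in the study; voxel_size_mm is an assumed in-plane resolution.
    """
    # Flow components are returned in (row, column) order, in pixels.
    v, u = optical_flow_tvl1(spgr_slice.astype(float), epi_slice.astype(float))
    return np.hypot(v * voxel_size_mm[0], u * voxel_size_mm[1])

# Example: summary statistics over a mask restricting the map to ~5000 voxels.
# dist = distortion_map_mm(spgr, epi)
# print(dist[mask].mean(), dist[mask].std())
```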
The overall performance of the star setup indicates that this segmentation structure is also superior to the clinical pattern (c). Nevertheless, it was found that (e) and even the more highly segmented pattern (f) showed higher geometric distortions than (d) in the axial and sagittal planes. This illustrates that certain pattern orientations relative to the gradient system can actually increase geometric distortions, even with a higher degree of segmentation. Therefore, a star pattern with the highest feasible degree of segmentation and an appropriate orientation is expected to achieve the best possible results in terms of geometric distortions.
Temperature maps were extracted according to [1] by using the fourth phase image as the baseline image. The first of the five phase images showed higher SNRs than the following four phase images. This suggests that the system is stable and quickly settled into a steady state. Given that the change in SNR between the fourth and fifth images is less than 1 %, we regard the images used to calculate the temperature map as appropriately chosen.
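For readers unfamiliar with the PRF-shift calculation, the following minimal sketch illustrates how a temperature map is obtained from a phase image and the baseline (fourth) phase image. The field strength, echo time, and PRF coefficient below are illustrative assumptions, not the acquisition parameters of this study.

```python
import numpy as np

GAMMA = 42.577e6   # proton gyromagnetic ratio [Hz/T]
ALPHA = -0.01e-6   # PRF change coefficient [1/degC] (-0.01 ppm/degC, typical value)
B0 = 3.0           # main field strength [T] (assumed)
TE = 0.020         # echo time [s] (assumed)

def prf_temperature_map(phase, phase_baseline):
    """Temperature change [degC] from a phase image and the baseline phase image.

    Both inputs are phase maps in radians; the baseline corresponds to the
    fourth of the five acquired phase images, as described in the text.
    """
    dphi = np.angle(np.exp(1j * (phase - phase_baseline)))   # wrap to [-pi, pi]
    return dphi / (2.0 * np.pi * GAMMA * ALPHA * B0 * TE)

# Example: mean +/- SD inside a predefined region of interest.
# dT = prf_temperature_map(phase_img, baseline_img)
# print(dT[roi].mean(), dT[roi].std())
```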
The performance of the segmentation patterns was judged by relating the geometric distortion analysis to the SNR evaluation (Fig. 6), expressed by 1/σ. The results were compared to the reference pattern (a). Here, upper and lower specification limits were not defined, as would be required within a design process. Additionally, the mockup model was positioned slightly differently for each scenario. Hence, the mean and standard deviations of the temperature maps, and therefore the decision on which scenario performed best, might be influenced by this variation in the positioning of the phantom and hence of the excited slice.
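A sketch of how each point in the design-space plot of Fig. 6 could be derived from the quantities computed in the snippets above (all inputs are assumed to be precomputed NumPy arrays):

```python
def scenario_point(temperature_roi, distortion_roi):
    """One point per scenario for a Fig. 6-style comparison:
    SNR expressed as 1/sigma of the temperature map versus the
    standard deviation of the geometric distortion map [mm]."""
    snr_proxy = 1.0 / temperature_roi.std()
    distortion_sd = distortion_roi.std()
    return distortion_sd, snr_proxy

# points = {name: scenario_point(dT[roi], dist[mask])
#           for name, (dT, dist) in scenarios.items()}
```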
We also note that the sagittal and coronal planes show noisy areas towards the superior end. This could be explained by assuming that the shimming was optimized based on an MR signal collected from areas that contained the water bolus; hence, areas with no water bolus might show lower SNRs. Consequently, these areas were excluded from the temperature map calculation.
Here, only eddy current effects induced in the transducer ground plane were considered. Other sources of image distortion that arise from the motion of water pumped through the water bolus for scalp cooling or motion due to vibration have not been considered. In addition to optimizing the hardware components of the transducer, future studies could concentrate on compensating for spatially varying eddy currents in image reconstruction by calibrating the gradient field with static magnetic field measurements in real time. This can be achieved, for example, with the help of magnetic field monitoring [39], by a pre-calibration technique described by Duyn et al. [40], or by applying higher-order eddy current field correction algorithms as described by Xu et al. [30]. To further increase the temporal resolution, parallel imaging could be considered.
A further open question is whether segmentation of the transducer ground plane additionally increases RF-related heating, which is not addressed within this work. The specific absorption rate values used were 0.53 W/kg for the SPGR sequence (all three planes) and, for the EPI sequence, 0.02 W/kg (sagittal, coronal) and 0.08 W/kg (axial).
In conclusion, gradient-induced eddy currents in the tcMRgFUS transducer ground plane can dramatically degrade EPI image quality. The most critical impact of our results relates to the method used to quickly and efficiently down-select transducer designs for experimental prototyping. The root cause of EPI image distortion was investigated with FEM simulations and verified by experiments on phantoms. In simulations, artifact-causing eddy currents were reduced by increasing the segmentation of the transducer ground planes. The experiments showed that the degree of transducer ground plane segmentation, the pattern of segmentation with respect to the orientation of the pulsed gradient coil, and the pulse sequence characteristics determine image quality and the accuracy of the temperature map. In this particular test environment, the star pattern showed the best overall performance in terms of mitigating eddy current-induced geometric distortions due to fast-switching gradients, and producing accurate MR thermometry maps.
McDannold N, Clement G, Black P, Jolesz F, Hynynen K. Transcranial MRI-guided focused ultrasound surgery of brain tumors: Initial findings in three patients. Neurosurgery. 2010;66:323–32.
Coluccia D, Fandino J, Schwyzer L, OGorman R, Remonda L, Anon J, et al. First noninvasive thermal ablation of a brain tumor with MR-guided focused ultrasound. Journal of Therapeutic Ultrasound. 2014;2:17.
Jeanmonod D, Werner B, Morel A, Michels L, Zadicario E, Schiff G, et al. Transcranial magnetic resonance imaging-guided focused ultrasound: noninvasive central lateral thalamotomy for chronic neuropathic pain. Neurosurg Focus. 2012;32:1–11.
Martin E, Jeanmonod D, Morel A, Zadicario E, Werner B. High-intensity focused ultrasound for noninvasive functional neurosurgery. Ann Neurol. 2009;66:858–61.
Hindman JC. Proton resonance shift of water in gas and liquid states. Journal of Chemical Physics. 1966;44:4582–92.
Ishihara Y, Calderon A, Watanabe H, Okamoto K, Suzuki Y, Kuroda K, et al. A precise and fast temperature mapping using water proton chemical shift. Magentic Resonance in Medicine. 1995;34:814–23.
De Poorter J, De Wagter C, De Deene Y, Thomsen C, Stahlberg F, Achten E. Noninvasive MRI thermometry with the proton resonance frequency (PRF) method: in vivo results in human muscle. Magn Reson Med. 1995;33:74–81.
De Poorter J. Noninvasive MRI thermometry with the proton resonance frequency method: study of susceptibility effects. Magn Reson Med. 1995;34:359–67.
Rieke V, Butts Pauly K. MR thermometry. J Magn Reson Imaging. 2008;27:376–90.
Peters R, Hinks RS, Henkelmann RM. Ex vivo tissue-type independence in proton-resonance frequency shift MR thermometry. Magn Reson Med. 1998;40:454–69.
Pernot M, Aubry J-F, Tanter M, Boch A-L, Marquet F, Kujas M, et al. In vivo transcranial brain surgery with an ultrasonic time reversal mirror. J Neurosurg. 2007;106:1061–6.
Pulkkinen A, Huang Y, Song J, Hynynen K. Simulations and measurements of transcranial low-frequency ultrasound therapy: skull-base heating and effective area of treatment. Phys Med Biol. 2011;56:4661–83.
Köhler MO, Mougenot C, Quesson B, Enholm J, Le Bail B, Laurent C, et al. Volumetric HIFU ablation under 3D guidance of rapid MRI thermometry. Med Phys. 2009;36:3521.
Mougenot C, Köhler MO, Enholm J, Quesson B, Moonen C. Quantification of near-field heating during volumetric MR-HIFU ablation. Med Phys. 2011;38:272–82.
Stafford RJ, Price RE, Diederich CJ, Kangasniemi M, Olsson LE, Hazle JD. Interleaved echo-planar imaging for fast multiplanar magnetic resonance temperature imaging of ultrasound thermal ablation therapy. J Magn Reson Imaging. 2004;20:706–14.
Weidensteiner C, Quesson B, Caire-Gana B, Kerioui N, Rullier A, Trillaud H, et al. Realtime MR temperature mapping of rabbit liver in vivo during thermal ablation. Magn Reson Med. 2003;50:322–30.
Holbrook AB, Kaye E, Santos JM, Rieke V, Pauly KB. Fast referenceless prf thermometry using spatially saturated, spatial-spectrally excited flyback EPI. 8th International Symposium on Therapeutic Ultrasound. 2009;1113:223–7.
Ogawa S, Lee TM, Kay AR, Tank DW. Brain magnetic resonance imaging with contrast dependant on blood oxygenation. Proc Natl Acad Sci USA. 1990;87:9868–72.
LeBihan D, Breton E, Lallemand D, Grenier P, Cabanis EA, Laval-Jeantet M. Mr imaging of intravoxel incoherent motions: application to diffusion and perfusion in neurologic disorders. Radiology. 1986;161:401–7.
Li W, Wait SD, Ogg RJ, Scoggins MA, Zou P, Wheless J, et al. Functional magnetic resonance imaging of the visual cortex performed in children under sedation to assist in presurgical planning: clinical article. Journal of Neurosurgery: Pediatrics. 2013;11:543–6.
Parker JG, Zalusky EJ, Kirbas C. Functional MRI mapping of visual function and selective attention for performance assessment and presurgical planning using conjunctive visual search. Brain and Behavior. 2014;4:227–37.
Butts K, Sinclair J, Daniel BL, Wansapura J, Pauly JP. Temperature quantitation and mapping of frozen tissue. JMRI. 2001;13:99–104.
Josan S, Pauly JM, Daniel BL, Pauly KB. Double half RF pulses for reduced sensitivity to eddy currents in UTE imaging. Magnetic Resonance in Medicine. 2009;61(5):1083–9.
Dragonu I, Denis de Senneville B, Quesson B, Moonen A, Ries M. Real-time geometric distortion correction for interventional imaging with echo-planar imaging (EPI). Magnetic Resonance in Medicine. 2009;61:994–1000.
Samoudi AM, Van Audenhaege K, Vermeeren G, Poole M, Tanghe E, Martens L, et al. Analysis of eddy currents induced by transverse and longitudinal gradient coils in different tungsten collimators geometries for SPECT/MRI integration. Magnetic Resonance in Medicine. 2015;74:1780–789.
Lechner-Greite SM, Mathieu J-B, Lee S-K, Amm BC, Foo TK, Schenck JF, et al. Design optimizations regarding eddy currents of a high performance head gradient coil. In Proc Intl Soc Mag Reson Med 20, page 2753, 2012
Lechner-Greite SM, Hehn N, Werner B, Zadicario E, Tarasek M, Yeo DTB. Impact of gradient-induced eddy currents on multi-shot EPI-based temperature map accuracy in a transcranial MR guided focused ultrasound applicator. In: Proceedings of the 23rd International Society for Magnetic Resonance in Medicine (ISMRM), Toronto. 2015.
Jackson JD. Classical Electrodynamics. 3rd ed: John Wiley & Sons Inc; 1998.
Bernstein MA, King KF, Zhou XJ. Handbook of MRI Pulse Sequences. Elsevier Academic Press; 2004.
Xu D, Maier JK, King KF, Collick BD, Wu G, Peters RD, et al. Prospective and retrospective high order eddy current mitigation for diffusion weighted echo planar imaging. Magnetic Resonance in Medicine. 2013;70(5):1293–305.
Kuroda K, Oshio K, Chung AH, Hynynen K, Jolesz FA. Temperature mapping using the water proton chemical shift: a chemical shift selective phase mapping method. Magn Reson Med. 1997;38:845–51.
Maxwell 3D software webpage: http://www.ansys.com/products/electronics/ansys-maxwell. Accessed 2016.
Lechner-Greite S, Mathieu J-B, Amm BC. Simulation environment to predict the effect of eddy currents on image quality in MRI. IEEE Transaction on Applied Superconductivity. 2012;22(3):4402104.
Lechner SM. EddySim: a problem solving environment for hardware-related eddy current simulations in magnetic resonance imaging. 2010. Dr. Hut.
Hayes CE, Eash MG. Shield for decoupling RF and gradient coils in an NMR apparatus. US Patent 4,642,569; February 10, 1987.
Mallozzi RP, Blezek DJ, Gunter JL, Jack CR, Levy JR. Phantom based evaluation of gradient nonlinearity for quantitative neurological MRI studies. In: Proc 14th Annual Meeting ISMRM, Seattle. 2006. p. 1364.
Lucas BD, Kanade T. An iterative image registration technique with an application to stereo vision. In: Proceedings of Imaging Understanding Workshop. 1981. p. 121–30.
Chung AH, Jolesz FA, Hynynen K. Thermal dosimetry of a focused ultrasound beam in vivo by magnetic resonance imaging. Med Phys. 1999;26:2017–26.
Kasper L, Bollmann S, Vannesjo SJ, Gross S, Haeberlin M, Dietrich BE, et al. Monitoring, analysis, and correction of magnetic field fluctuations in echo planar imaging time series. Magn Reson Med. 2015;74(2):396-409.
Duyn JH, Yang Y, Frank JA, van der Veen JW. Simple correction method for k-space trajectory deviations in MRI. Journal of Magnetic Resonance. 1998;132:150–3.
We acknowledge the help of our colleagues Dr. Anne Menini and Dr. Jonathan Sperl for their technical contributions on image registration and statistical analysis. We would like to thank Editage (www.editage.com) for English language editing.
Diagnostics, Imaging and Biomedical Technologies Laboratory, GE Global Research Europe, Garching n., Munich, Germany
Silke M. Lechner-Greite & Nicolas Hehn
IMETUM, Technical University Munich, Garching n., Munich, Germany
Nicolas Hehn
Center for MR-Research, University Children's Hospital Zurich, Zurich, Switzerland
Beat Werner
InSightec Ltd., Tirat Carmel, Israel
Eyal Zadicario
Diagnostics, Imaging and Biomedical Technologies Laboratory, GE Global Research Niskayuna, Albany, NY, USA
Matthew Tarasek & Desmond Yeo
Silke M. Lechner-Greite
Matthew Tarasek
Desmond Yeo
Correspondence to Silke M. Lechner-Greite.
SLG is employed by GE Global Research Europe. MT and DY are employed by GE Global Research Niskayuna. EZ is employed by InSightec Ltd. BW is employed by the University Children's Hospital Zurich. NH has no competing interests to declare.
SL-G designed the study, carried out the simulations for different segmentation patterns, performed the experiments with the mockup models, implemented the data analysis tools, and drafted the manuscript. NH designed and built the transducer mockup, participated in the exams, and was involved in data processing, writing, and reviewing the draft manuscript. BW originally approached us with the image distortion problem. He participated in the design of the study, shared clinical protocols for SPGR and EPI and contributed problem descriptive images. EZ participated in the design of the study and shared transducer specifications. MT participated in the design of the study and contributed temperature map accuracy measures. DY supervised the design and execution of the study and drafted the manuscript. All authors read and approved the final manuscript.
Lechner-Greite, S.M., Hehn, N., Werner, B. et al. Minimizing eddy currents induced in the ground plane of a large phased-array ultrasound applicator for echo-planar imaging-based MR thermometry. J Ther Ultrasound 4, 4 (2016). https://doi.org/10.1186/s40349-016-0047-x
MR thermometry
Proton resonance frequency shift
Echo-planar imaging
Phased-array transducer
ODs with a positive TPR conclusion, not subject to a conditional approval, and approved without requiring a PASS would be more likely to be reimbursed in Spain
José Luis Poveda1,
Claudia Gómez2,
Alicia Gil2 &
Xavier Badia (ORCID: orcid.org/0000-0001-7568-2550)2
Orphanet Journal of Rare Diseases volume 18, Article number: 4 (2023)
The present study aims to assess clinical and regulatory variables that would influence pricing and reimbursement (P&R) decisions for Orphan Drugs (ODs) in Spain. ODs approved by the European Commission (EC) between 2006 and 2021 were classified according to their P&R status in Spain: approved, undergoing decision and rejected. A statistical analysis was carried out to assess the potential association between clinical and regulatory variables and P&R decision of ODs in Spain: therapeutic area, rarity of disease, existence of alternative therapies, availability of survival-related outcomes, safety profile, type of population, conditional approval status granted by the European Medicines Agency (EMA) and a positive Therapeutic Positioning Report (TPR) opinion.
111 ODs have been approved by the EC and have obtained marketing authorisation in Spain between 2006 and 2021. Out of the 111 ODs, 57 (51.4%) were reimbursed, 24 (21.6%) were undergoing decision and 30 (27%) were rejected. According to the statistical analysis, ODs with a positive TPR conclusion (p-value < 0.01), not subject to a conditional approval by the EMA (p-value < 0.05) and approved without the obligation to conduct a post-authorisation safety study (PASS) (p-value < 0.05), were statistically significant, and therefore, would be more likely to obtain P&R approval in Spain.
This study shows that the TPR plays a key role in the P&R process in Spain and highlights that traditional evaluation tools, such as safety and efficacy, were the main drivers of P&R decisions for ODs. A positive conclusion of the TPR, non-conditional approval by the EMA and no obligation for a PASS seem to favourably affect P&R decisions in Spain.
More than 30 million inhabitants in the European Union (EU) suffer from a rare disease (RD) [1]. Although there is no universal definition for RDs [2], in the EU they are defined as those affecting no more than 5 per 10,000 inhabitants, with none or limited choice of therapeutic options. Some of these conditions are extremely rare or ultrarare, affecting less than 1 per 50,000 inhabitants [3]. Despite their low prevalence, they are life-threatening or chronically debilitating conditions with a high burden and very often limited level of awareness [1, 4]. RDs have a high impact on patients, their families, healthcare systems and even society in general, and are characterized by pain, disability, significant organ damage, and high mortality rates [5]. Although their prevalence is low, RDs are numerous and heterogeneous [6]. The true burden of rare diseases in Europe and elsewhere is difficult to estimate, since epidemiological data for most of these diseases are not available. It is estimated that more than 6000 RDs exist [1], affecting between 6 and 8% of the population.
RDs were so called "orphans" because they were neglected for many years. Orphan Drugs (ODs) are those intended to diagnose, prevent, or treat RDs [3]. RDs are now a public health priority within European legislation. The EU Council established that patients suffering from rare conditions should be entitled to the same quality of treatment as patients suffering from more prevalent conditions. With that purpose, the EU introduced specific incentives for companies to develop ODs to treat RDs, to compensate for the small market size, and introduced specific guidelines and requirements for clinical development programmes to reduce the uncertainty of developing an OD [4]. Applications for orphan designation are evaluated by the European Medicines Agency's (EMA) Committee for Orphan Medicinal Products (COMP). Once the product has been authorised by the European Commission (EC), ODs must be nationally authorised by local authorities in each member state before entering the market [7]. Pricing and reimbursement (P&R) decisions for ODs are determined at the national level, under varying evaluation and decision-making contexts, which can often result in differences in restrictions and access levels for patients across different territories [8].
The distinctive features of ODs—limited knowledge and heterogeneity of the diseases, the limitations in following "standard" clinical trial development programmes due to small and typically heterogeneous patient populations, and the lack of hard clinical endpoints [9]—pose an additional challenge in the appraisal of these products [10].
In Spain, the Committee on Pricing of Medicines and Healthcare Products (CIPM), responsible for the final P&R decision, includes in its P&R resolutions the criteria used to justify such decisions. However, information on how these criteria are measured or defined is not provided [11]. Therefore, the drivers influencing the approval or denial of P&R for a drug are not clear [12], which could be interpreted as evidence that other factors influence P&R decisions within the Spanish NHS.
To reinforce decision-making, the Therapeutic Positioning Report (TPR) was introduced in Spain in 2013. Despite its name, the TPR is conditioned on the P&R negotiation, and the final positioning of a new drug comes after the Directorate-General for the Basic Portfolio of Services of the National Healthcare and Pharmacy System (DGCBF) issues the reimbursement decision (and price). In a previous study, the impact of the TPR conclusion on the P&R process in Spain was reported, demonstrating its key role [13]. In 2020, the Consolidation Plan for the TPR was launched. To that end, a new Drug Evaluation Network (REvalMed NHS) was established, integrating alliances between the DGCBF, the Spanish Medicines Agency (AEMPS) and the representatives of the Spanish Regions, organised into seven therapeutic nodes. With the introduction of the REvalMed NHS process, the TPR formally integrates the economic evaluation to assess the cost-effectiveness and/or budget impact of the new drug in the Spanish National Healthcare System (NHS) [14].
In Spain, once the companies submit the reimbursement request for a new medicine, they only have the possibility to discuss with the Ministry of Health during the allegation of the TPR, and during the P&R negotiation with DGCBF.
As a next step of the previous work [13], this study aims to review and assess the clinical and regulatory variables that might be relevant for the reimbursement decision of ODs in Spain.
ODs approved by the EC and granted marketing authorisation in Spain were identified and stratified according to their reimbursement status. Then, relevant variables that could influence the P&R process in Spain were selected and study's hypotheses were defined accordingly. Finally, a regression analysis was performed to test the validity of these hypotheses and to assess which variables influence the P&R process in Spain.
Identification of orphan drugs approved by the European Commission with Spanish marketing authorisation, and their reimbursement status
Medicines with current orphan designation by the COMP and authorised by the EC until 2021 were identified. This information was extracted from the Community Register of Orphan Medicinal Products [15]. In a second step, information on marketing authorisation granted by the AEMPS and authorisation dates was retrieved from the Spanish Medicine Online Information Centre (CIMA) search engine [16]. The Spanish marketing authorisation dates granted by the AEMPS were used to analyse evaluation timelines (months) from Spanish marketing authorisation to P&R decision date. Finally, the BIFIMED database was used to search for the reimbursement status of each OD authorised in Spain until 2021 [17]. ODs were classified as "approved" (ODs that have had their P&R request approved), "under P&R decision process" (ODs for which P&R had been requested but are still under review/negotiation), and "rejected" (ODs that have had their P&R request rejected).
Identification and description of clinical and regulatory variables relevant for the price and reimbursement process of orphan drugs in Spain
The variables considered for the analysis resulted from the official P&R criteria established by Royal Decree Law 1/2015 of 24 July to evaluate the inclusion of new drugs [11], as well as from the variables reported in the mandatory information that the Marketing Authorisation Holder (MAH) must provide to European and Spanish regulatory bodies as part of the centralised authorisation process and national P&R decision. Information was retrieved from the ODs' clinical trials, from their respective European Public Assessment Reports (EPARs) [18], or from the TPRs on the AEMPS website [19]. When information could not be found in these documents, a search in PubMed and the grey literature was conducted. In addition, the following identified clinical variables were tested in previous phases of this study: (i) Therapeutic area, (ii) Outcomes classification, (iii) Therapeutic alternatives, (iv) Rarity of disease, (v) Safety profile, and (vi) Type of population. Regarding regulatory variables, (i) TPR conclusion and (ii) Conditional approval were included. Table 1 shows how these variables were defined and classified for the analysis. For those ODs without a TPR, reimbursed ODs were considered to have a positive TPR conclusion. Conversely, ODs with a rejected P&R decision were considered to have, as a TPR conclusion, a questionable opinion with respect to the EMA resolution.
Table 1 Definition and classification of the variables relevant for the price and reimbursement process in Spain
Study hypotheses were defined for the variables that could have an impact on P&R decisions. ODs would be more likely to be reimbursed if they were (i) indicated for oncologic diseases, (ii) based on survival-related outcomes, (iii) lacking other therapeutic alternatives, (iv) without an obligation to conduct a post-authorisation safety study (PASS), (v) intended to treat ultra-rare diseases, and (vi) indicated for paediatric patients. ODs with (i) a positive conclusion in the TPR and (ii) no conditional approval granted by the EMA would also be more likely to be reimbursed.
Approved ODs by the EC and granted Spanish marketing authorisation until 2021 were included in the analysis and categorised by their P&R status. Descriptive statistics were performed for quantitative variables (including time from Spanish marketing authorisation to P&R decision) and qualitative variables (clinical and regulatory variables). Mean (± SD) values were calculated for evaluation timelines. Frequency tables were displayed to describe data from clinical and regulatory variables.
As the study aimed to identify the variables that might positively influence the reimbursement decision of ODs in Spain, a Binary Dependent Variable (BDV) Model was considered for reimbursement status (Eq. 1). Therefore, only ODs with approved or rejected P&R were included in the regression analysis, excluding ODs under P&R decision process.
Binary dependent variable (BDV) model
$$y = \begin{cases} 0 & \text{if rejected} \\ 1 & \text{if approved} \end{cases}$$
First, bivariate analyses were carried out using the χ2 test of association between the dependent variable (reimbursement status, stratified by approved or rejected) and the independent variables (clinical and regulatory variables) [20, 21]. Then, a logistic regression model was used to test the validity of the hypotheses defined for the identified ODs [22]. All statistical analyses were conducted using the statistical software Stata/IC15 [23].
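The analyses were run in Stata/IC15 [23]; the sketch below is a non-authoritative Python equivalent of the analysis plan (χ2 tests of association followed by a logistic regression), in which the data frame and column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency

# Hypothetical data: one row per OD with approved or rejected P&R (n = 87).
# df = pd.read_csv("orphan_drugs.csv")
# df["y"] = (df["reimbursement"] == "approved").astype(int)

predictors = ["tpr_positive", "conditional_approval", "pass_required",
              "oncologic", "no_alternative", "ultra_rare",
              "survival_endpoint", "paediatric"]

def bivariate_chi2(df, var):
    """Chi-squared test of association between one variable and reimbursement."""
    table = pd.crosstab(df[var], df["y"])
    chi2, p, dof, _ = chi2_contingency(table)
    return chi2, p

def fit_bdv_logit(df):
    """Binary dependent variable model: P(approved) given the predictors."""
    X = sm.add_constant(df[predictors].astype(float))
    result = sm.Logit(df["y"], X).fit(disp=False)
    return result  # result.prsquared gives McFadden's pseudo R-squared

# for var in predictors:
#     print(var, bivariate_chi2(df, var))
# print(fit_bdv_logit(df).summary())
```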
Orphan drugs authorised in Spain and approved by the European Commission from 2006 to 2021 and description of their reimbursement status in Spain
A total of 128 ODs have been approved by the EC between 2006 and 2021. Of those, 111 (86.7%) had been granted marketing authorisation in Spain, from which 57 (51.4%) had received P&R approval, 24 (21.6%) were undergoing the P&R process, and 30 (27%) had been rejected (Table 2). Mean time from Spanish marketing authorisation to P&R approval was 18.6 ± 11.9 months, with a minimum of 3 months (Kymriah® and Trepulmix®) [24, 25] and a maximum of 52 months (Revestive®) [26]. Mean time from marketing authorisation to P&R rejection was 17.6 ± 8.4 months. Before the inclusion of the TPR in 2013, the mean time from P&R request to P&R decision was 19.1 months; and after the inclusion of the TPR, the mean time was 18.2 months.
Table 2 List and description of identified variables for orphan drugs authorised in Spain from 2006 to 2021
Relevant variables for the pricing and reimbursement process in Spain
Out of the 111 ODs with marketing authorisation in Spain, 41 (36.9%) were indicated for oncologic diseases, 43 (38.7%) were indicated for a disease with no therapeutic alternatives and 40 (36.0%) were indicated for ultra-rare diseases (< 1/50000 inhabitants). Thirty-seven (33.3%) ODs had a survival-related endpoint included in their pivotal study, and 81 (73.0%) did not have to conduct a PASS. Finally, 49 (44.1%) out of 111 ODs were indicated for paediatric patients. Regarding regulatory variables, 66 (75.9%) ODs had a positive TPR opinion and 92 (83.0%) did not have a conditional approval by the EMA.
ODs for which P&R had been approved
Out of the 57 ODs with P&R approval in Spain, 21 (36.8%) were oncologic, 18 (31.6%) did not have a therapeutic alternative and 25 (43.9%) reimbursed were indicated for ultra-rare diseases. A survival-related endpoint was the outcome variable used in clinical trials of 21 (36.8%) reimbursed ODs and 41 (72.0%) did not have the obligation to conduct a PASS. Finally, 23 (40.3%) ODs were indicated for paediatric patients. For the regulatory variables, 55 (96.5%) ODs had a TPR with a positive opinion, and 51 (89.5%) ODs did not have a conditional authorisation granted by the EMA.
ODs with rejected P&R
Out of the 30 rejected ODs, 7 (23.3%) were indicated for oncologic diseases. Almost half (n = 14, 46.7%) of the rejected ODs had no therapeutic alternatives. Ten ODs (33.3%) were indicated for ultra-rare diseases and only 7 (23.3%) of the rejected ODs had survival-related endpoints as study outcomes. Twenty-six (86.7%) ODs did not conduct a PASS for their safety assessment and 14 (46.7%) ODs had been indicated for paediatric patients.
Regarding regulatory aspects, 11 (36.7%) out the 30 rejected ODs in Spain had a TPR with a positive opinion and 24 (80.0%) had not been subject to a conditional authorisation.
Statistical analysis of potential relationship between clinical and regulatory variables and reimbursement status of ODs in Spain
The statistical analysis was carried out to assess the potential association between clinical and regulatory variables and P&R status of ODs in Spain. In the bivariate analysis, TPR conclusion showed a statistically significant association with the reimbursement decision. The logistic regression model was fitted to estimate the probability of reimbursement explained by the analysed clinical and regulatory variables. For reimbursement status, only ODs with approved or rejected P&R were considered (n = 87). The logistic regression results are shown in Table 3. The pseudo R-squared obtained in the model was 0.472; therefore, the model explained approximately 47% of the variability in the dependent variable. Values from 0.2 to 0.4 indicate an excellent model fit [27]. According to these findings, ODs with a positive TPR conclusion (p-value < 0.01), ODs not subject to a conditional approval by the EMA (p-value < 0.05), and ODs approved without the obligation to conduct a PASS (p-value < 0.05), were statistically significant, and therefore, would be more likely to obtain P&R approval in Spain.
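The pseudo R-squared reported here is presumably McFadden's measure [27], defined from the log-likelihoods of the fitted model and of an intercept-only model:

$$R^2_{\text{McFadden}} = 1 - \frac{\ln \hat{L}_{\text{full}}}{\ln \hat{L}_{\text{null}}}$$

Under this reading, a value of 0.472 indicates that the fitted model improves substantially on the intercept-only model.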
Table 3 Impact of clinical and regulatory variables on the P&R approval according to logistic regression analysis
The aim of the study was to assess clinical and regulatory variables that could influence the P&R decisions of ODs authorised in Spain.
From 2006 to 2021, 111 ODs had been granted marketing authorisation by the AEMPS, representing 86.7% of the total of ODs approved at the European level. However, only 51.4% (n = 57) of these ODs received P&R approval in Spain, 27% were rejected and 21.6% were undergoing the P&R decision process. This highlights that, even with the same evidence triggering EMA approval with or without conditions, the timing and level of access to ODs could vary across countries, depending on differences in the national criteria used for medicines assessments and P&R decisions [28].
Regarding evaluation timelines, the mean time from Spanish marketing authorisation to P&R decision after the introduction of the TPR in 2013 was 18.2 months. P&R evaluation timelines have thus been reduced only slightly since the introduction of TPRs, by an average of less than 1 month. This could be because preparing the TPR itself requires time. In addition, evaluation timelines could have been affected by the COVID pandemic over the last two years. Other reports that assessed access to orphan medicines in Spain until December 2021 have reported similar findings in terms of estimated regulatory timelines during the P&R process, thus reinforcing the validity of the data presented in this study [29, 30].
Regression analysis showed that a positive TPR conclusion was key in the P&R decision in Spain. This is consistent with a previous study, where the association between a positive TPR and reimbursement of new ODs was shown [13]. In addition, the regression analysis also showed that ODs whose evaluation is subject to less uncertainty, i.e. ODs without a conditional authorisation by the EMA and without a PASS study, would be more likely to be reimbursed. Therefore, with respect to the findings discussed above, variables related to safety and efficacy have shown an impact on the likelihood of reimbursement. The study showed that traditional evaluation criteria were the main drivers in the P&R decision. A recent report by the Spanish Ministry of Health highlighted that clinical uncertainty (translated into financial uncertainty) actually increases the complexity of P&R decision-making [31]. Although the price of the ODs was not included in the analysis, we consider that it would be a key variable to explain reimbursement status; however, further studies would be needed to corroborate this. The prices for ODs may be higher, as it is difficult to recover the costs of innovation. Thus, ODs are unlikely to reach the standard cost-effectiveness thresholds [5, 32, 33]. However, many ODs are often reimbursed despite having incremental cost-effectiveness ratios (ICERs) that are much higher than the willingness to pay (WTP). This suggests that, in practice, alternative approaches might be considered for P&R decisions on ODs, such as the incorporation of new financing schemes reflected in the resolutions (e.g. expenditure caps, pharmacological protocols) [34].
Previous reports at national and international level have described similar findings on the identified ODs, their P&R status, and regulatory times to P&R decisions (e.g. aeLmhu report "Access to ODs in Spain", Spanish Ministry of Health report "The evolution of the financing and pricing of ODs in the NHS", and the "Waiting to Access Innovative Therapies (WAIT)" report performed by IQVIA) [29,30,31]. However, the main differentiating aspect of the present study is the assessment of the impact of TPR on P&R evaluation timelines and the assessment of clinical and regulatory variables that could be relevant in the P&R process of ODs in Spain.
Another finding to highlight from the regression analysis is that the absence of therapeutic alternatives does not seem to be associated with the P&R approval of an OD in Spain, despite being a P&R criterion established in article 92 of Royal Decree Law 1/2015 of 24 July. This could be due to the limited sample size: despite having collected all available published data, the sample remains too small to identify significant differences for some criteria in a multivariate analysis.
In addition, there are some variables, such as authorisation under exceptional circumstances by the EMA or inclusion in the Valtermed registry in Spain, which have not been included in the study because the sample size is too small. In Spain, there are only 9 ODs approved under exceptional circumstances and with a P&R resolution, and 11 ODs included in the Valtermed registry.
The study results have reflected the importance of the TPR prepared by the REvalMed NHS network in the reimbursement decision. However, despite what its name suggests, the final positioning of the drug is only established once the price has been negotiated with the DGCBF. Accordingly, the positioning of the drug is, among other factors, determined by its price. In addition, as stated in the above-mentioned report by the Spanish Ministry of Health, clinical benefit uncertainty and the price proposed by the MAH were highlighted as the main drivers for the CIPM to deny P&R [31]. Price would have been a determinant variable to include in our analysis, but such information is not publicly available because official listed prices in the available databases do not reflect the reimbursement price agreed between the Ministry of Health and the MAH. Confidential prices would be around 40% of the list price, but they do not always follow the same pattern [13]. In addition, the reimbursed price depends on other variables such as the requested price, the price of other similar treatment alternatives and the medicine price in other EU reference countries, which are also not public and, therefore, not controllable.
Among the methodological limitations of the study, several assumptions were made. For those ODs appraised before the introduction of the TPR, it was assumed that reimbursed ODs had a positive TPR opinion. In contrast, rejected ODs were assigned a questionable TPR opinion with respect to the EMA's efficacy and safety assessment. This allowed us to include the maximum number of observations from the sample in the analysis.
As the data cut-off point was December 2021, the number of observations to compare evaluation timelines before and after the introduction of REvalMed NHS in 2020 was not large enough (n = 9). Future analysis could assess the impact of the new procedure on evaluation timelines and reimbursement decisions.
As mentioned, economic criteria influencing the P&R decisions, such as the price of the OD and budget impact, have not been included in the statistical analysis, as the available official public prices do not reflect the reimbursement price. In addition, the study could have omitted alternative criteria considered by evaluators.
Other limitations come from the potential interaction between some of the explanatory variables. For instance, it could be assumed that ultra-rare diseases will present a limited arsenal of therapeutic alternatives. However, the objective of the model was to provide a construct of variables that could shed some light on P&R decisions in Spain. Considering the criteria established in Spain for the reimbursement of new drugs [11], it would be advisable to increase transparency regarding how these criteria are measured and assessed for decision-making [12] related to the value of a new drug.
Out of the 111 ODs authorised by the AEMPS, 51.4% received P&R approval in Spain by 2021, 27% were rejected and 21.6% were undergoing the P&R decision process. P&R approval would be associated with a positive TPR conclusion, non-conditional approval by the EMA and no obligation for a PASS. Therefore, the study highlighted the role that the TPR plays in the reimbursement process and showed that traditional evaluation tools, such as safety and efficacy, were the main drivers of P&R decisions for ODs. Although economic variables have not been included in the analysis, these are considered a decisive factor in the reimbursement process.
In 2017, Omakase Consulting S.L. developed an OD database to collect data related to medicinal products with OD designation, currently authorised in Europe and their P&R situation in Spain. The datasets used and analysed during the current study are available from the corresponding author on reasonable request.
AEMPS: Spanish Medicines Agency
ATC: Anatomical, therapeutic, chemical classification system
BDV: Binary dependent variable
CIMA: Medicine Online Information Centre
CIPM: Committee on Pricing of Medicines and Healthcare Products
COMP: Committee for Orphan Medicinal Products
DGCBF: Directorate-General for the Basic Portfolio of Services of the National Healthcare and Pharmacy System
EPAR: European Public Assessment Report
ICER: Incremental cost-effectiveness ratio
MAH: Marketing Authorisation Holder
NHS: National Healthcare System
ODs: Orphan Drugs
P&R: Price and reimbursement
PASS: Post-authorisation safety study
PRO: Patient-reported outcomes
RD: Rare disease
REvalMed: Drug evaluation network
TPR: Therapeutic Positioning Report
WTP: Willingness to pay
Official Journal of the European Union. Council recommendation on action in the field of rare diseases—2947th employment, social policy, health and consumer affairs—council meeting. 2009.
Richter T, Nestler-Parr S, Babela R, Khan ZM, Tesoro T, Molsen E, et al. Rare disease terminology and definitions-a systematic global review: report of the ISPOR rare disease special interest group. Value Health. 2015;18(6):906–14.
Official Journal of the European Communities. Regulation (EU) No 536/2014 of the European Parliament and of the Council of 16 April 2014 on clinical trials on medicinal products for human use, and repealing Directive 2001/20/EC. Vol. L. 2014.
Official Journal of the European Communities. Regulation (EC) No 141/2000 of the European Parliament and of the Council of 16 December 1999 on orphan medicinal products.
de Andrés-Nogales F, Cruz E, Calleja MÁ, Delgado O, Gorgas MQ, Espín J, et al. A multi-stakeholder multicriteria decision analysis for the reimbursement of orphan drugs (FinMHU-MCDA study). Orphanet J Rare Dis. 2021;16(1):1–12.
Nguengang Wakap S, Lambert DM, Olry A, Rodwell C, Gueydan C, Lanneau V, et al. Estimating cumulative point prevalence of rare diseases: analysis of the Orphanet database. Eur J Hum Genet. 2019;28(2):165–73.
Rare Diseases, Orphan Medicines: Getting the Facts Straight. European Medicines Agency (EMA).
Zamora B, Maignen F, O'Neill P, Mestre-Ferrandiz J, Garau M. Comparing access to orphan medicinal products in Europe. Orphanet J Rare Dis. 2019;14(1):1–2.
Szegedi M, Zelei T, Arickx F, Bucsics A, Cohn-Zanchetta E, Fürst J, et al. The European challenges of funding orphan medicinal products. Orphanet J Rare Dis. 2018;13(1):1–8.
Morel T, Arickx F, Befrits G, Siviero P, Van Der Meijden C, Xoxi E, et al. Reconciling uncertainty of costs and outcomes with the need for access to orphan medicinal products: a comparative study of managed entry agreements across seven European countries. Orphanet J Rare Dis. 2013;8(1):1–15.
BOE. Real Decreto Legislativo 1/2015, de 24 de julio, por el que se aprueba el texto refundido de la Ley de garantías y uso racional de los medicamentos y productos sanitarios.
Calleja MÁ, Badia X. Feasibility study to characterize price and reimbursement decision-making criteria for the inclusion of new drugs in the Spanish National Health System: the cefiderocol example. Int J Technol Assess Health Care. 2022;38(1).
Badia X, Vico T, Shepherd J, Gil A, Poveda-Andrés JL, Hernández C. Impact of the therapeutic positioning report in the P&R process in Spain: Analysis of orphan drugs approved by the European Commission and reimbursed in Spain from 2003 to 2019. Orphanet J Rare Dis. 2020;15(1):1–13.
Comisión Permanente de Farmacia del Consejo Interterritorial del SNS. Plan para la consolidación de los Informes de Posicionamiento Terapéutico de los medicamentos en el Sistema Nacional de Salud. Dirección General de Cartera Común de Servicios del SNS y Farmacia. 2020.
Union Register of medicinal products - Public health - European Commission.
Agencia Española de Medicamentos y Productos Sanitarios. Ministerio de Sanidad. Centro de Información de Medicamentos (CIMA).
Ministerio de Sanidad. BIFIMED - Buscador situación financiación medicamentos.
Official Journal of the European Communities. Regulation (EC) No 726/2004 of the European Parliament and of the Council of 31 March 2004.
Informes de posicionamiento terapéutico [Therapeutic positioning report].
Bertani A, Di Paola G, Russo E, Tuzzolino F. How to describe bivariate data. J Thorac Dis. 2018;10(2):1133–7.
Bewick V, Cheek L, Ball J. Statistics review 8: Qualitative data: tests of association. Crit Care. 2004;8(1):46–53.
Sperandei S. Understanding logistic regression analysis. Biochem Medica. 2014;24(1):12–8.
StataCorp. Stata Statistical Software: Release 16. College Station, TX: StataCorp LLC. 2019.
European Public Assessment report: Kymriah.
European Public Assessment report: Trepulmix.
European Public Assessment report: Revestive.
McFadden D. Conditional logit analysis of qualitative choice behavior. 1973.
Drummond MF, Wilson DA, Kanavos P, Ubel P, Rovira J. Assessing the economic challenges posed by orphan drugs. Int J Technol Assess Health Care. 2007;23(1):36–42.
aeLmhu. Informe de acceso 2021 de los medicamentos huérfanos en España.
Newton M, Scott K, Troein P. EFPIA Patients W.A.I.T. Indicator 2021 Survey. 2022.
Informe evolución de la financiación y fijación de precio de los medicamentos huérfanos en el sns (2016–2021).
Criterios de financiación y reembolso de los medicamentos huérfanos. 2021.
Lasalvia P, Prieto-Pinto L, Moreno M, Castrillón J, Romano G, Garzón-Orjuela N, et al. International experiences in multicriteria decision analysis (MCDA) for evaluating orphan drugs: a scoping review. Expert Rev Pharmacoeconomics Outcomes Res. 2019;19(4):409–20.
Paolucci F, Redekop K, Fouda A, Fiorentini G. Decision making and priority setting: the evolving path towards universal health coverage. Appl Health Econ Health Policy. 2017;15(6):697–706.
World Health Organization (WHO). Anatomical Therapeutic Chemical (ATC) Classification. 2011.
Powers JH, Patrick DL, Walton MK, Marquis P, Cano S, Hobart J, et al. Clinician-reported outcome assessments of treatment benefit: report of the ISPOR clinical outcome assessment emerging good practices task force. Value Health. 2017;20(1):2–14.
European Medicines Agency (EMA). Post-authorisation safety studies (PASS).
Conditional marketing authorisation. European Medicines Agency (EMA).
Observemhe.
No funding.
Hospital Universitario y Politécnico de La Fe, Valencia, Spain
José Luis Poveda
Omakase Consulting S.L., Barcelona, Spain
Claudia Gómez, Alicia Gil & Xavier Badia
Claudia Gómez
Alicia Gil
Xavier Badia
XB was the major contributor in designing the study, designing the protocol, interpreted the data, validated the analysis plan and revised the manuscript. CG updated the internal database used in the study, contributed to develop the analysis plan, analysed and interpreted the data and contributed to writing the manuscript. AG, JLP interpreted the data and were major contributors in revising the manuscript. All authors read and approved the final manuscript.
Correspondence to Xavier Badia.
Not applicable. The study did not involve human participants (patients or otherwise). In 2017, Omakase Consulting S.L. developed an OD database to collect data related to medicinal products with OD designation, currently authorised in Europe and their P&R situation in Spain. The study was conducted by analysing data from the mentioned database. The study does not require ethics approval or otherwise approval since it does not involve the participation of patients or the enquiry/analysis of medical records.
Not applicable. The study did not contain data from any individual person.
Poveda, J.L., Gómez, C., Gil, A. et al. ODs with a positive TPR conclusion, not subject to a conditional approval, and approved without requiring a PASS would be more likely to be reimbursed in Spain. Orphanet J Rare Dis 18, 4 (2023). https://doi.org/10.1186/s13023-022-02610-4
Regulatory Variables | CommonCrawl |
Search results for: D. Klein
Items from 1 to 20 out of 834 results
Search for a heavy pseudoscalar boson decaying to a Z and a Higgs boson at $$\sqrt{s}=13\,\text {Te}\text {V} $$ s=13Te
A. M. Sirunyan, A. Tumasyan, W. Adam, F. Ambrogi, more
The European Physical Journal C > 2019 > 79 > 7 > 1-27
A search is presented for a heavy pseudoscalar boson $$\text {A}$$ A decaying to a Z boson and a Higgs boson with mass of 125$$\,\text {GeV}$$ GeV . In the final state considered, the Higgs boson decays to a bottom quark and antiquark, and the Z boson decays either into a pair of electrons, muons, or neutrinos. The analysis is performed using a data sample corresponding to an integrated luminosity...
Search for supersymmetry in final states with photons and missing transverse momentum in proton-proton collisions at 13 TeV
The CMS collaboration, A. M. Sirunyan, A. Tumasyan, W. Adam, more
Journal of High Energy Physics > 2019 > 2019 > 6 > 1-34
Abstract Results are reported of a search for supersymmetry in final states with photons and missing transverse momentum in proton-proton collisions at the LHC. The data sample corresponds to an integrated luminosity of 35.9 fb−1 collected at a center-of-mass energy of 13 TeV using the CMS detector. The results are interpreted in the context of models of gauge-mediated supersymmetry breaking. Production...
Light Management in Organic Photovoltaics Processed in Ambient Conditions Using ZnO Nanowire and Antireflection Layer with Nanocone Array
Mohammad Mahdi Tavakoli, Hadi Tavakoli Dastjerdi, Jiayuan Zhao, Katherine E. Shulenberger, more
Small > 15 > 25 > n/a - n/a
Low carrier mobility and lifetime in semiconductor polymers are some of the main challenges facing the field of organic photovoltaics (OPV) in the quest for efficient devices with high current density. Finding novel strategies such as device structure engineering is a key pathway toward addressing this issue. In this work, the light absorption and carrier collection of OPV devices are improved by...
Search for the associated production of the Higgs boson and a vector boson in proton-proton collisions at √s = 13 TeV via Higgs boson decays to τ leptons
Abstract A search for the standard model Higgs boson produced in association with a W or a Z boson and decaying to a pair of τ leptons is performed. A data sample of proton-proton collisions collected at √s = 13 TeV by the CMS experiment at the CERN LHC is used, corresponding to an integrated luminosity of 35.9 fb−1. The signal strength is measured relative to the expectation...
Search for a low-mass τ−τ+ resonance in association with a bottom quark in proton-proton collisions at √s = 13 TeV
Abstract A general search is presented for a low-mass τ−τ+ resonance produced in association with a bottom quark. The search is based on proton-proton collision data at a center-of-mass energy of 13 TeV collected by the CMS experiment at the LHC, corresponding to an integrated luminosity of 35.9 fb−1. The data are consistent with the standard model expectation. Upper limits at 95% confidence level...
Search for supersymmetry in events with a photon, jets, b-jets, and missing transverse momentum in proton–proton collisions at 13 TeV
A search for supersymmetry is presented based on events with at least one photon, jets, and large missing transverse momentum produced in proton–proton collisions at a center-of-mass energy of 13 TeV. The data correspond to an integrated luminosity of 35.9 fb−1 and were recorded at the LHC with the CMS detector in 2016. The analysis characterizes signal-like...
Combined measurements of Higgs boson couplings in proton–proton collisions at √s = 13 TeV
Combined measurements of the production and decay rates of the Higgs boson, as well as its couplings to vector bosons and fermions, are presented. The analysis uses the LHC proton–proton collision data set recorded with the CMS detector in 2016 at √s = 13 TeV, corresponding to an integrated luminosity of 35.9 fb−1. The combination is based...
Combinations of single-top-quark production cross-section measurements and |fLVVtb| determinations at √s = 7 and 8 TeV with the ATLAS and CMS experiments
The ATLAS collaboration, M. Aaboud, G. Aad, B. Abbott, more
Abstract This paper presents the combinations of single-top-quark production cross-section measurements by the ATLAS and CMS Collaborations, using data from LHC proton-proton collisions at √s = 7 and 8 TeV corresponding to integrated luminosities of 1.17 to 5.1 fb−1 at √s = 7 TeV and 12.2 to 20.3 fb−1 at √s = 8 TeV. These combinations...
Measurement of inclusive very forward jet cross sections in proton-lead collisions at √s_NN = 5.02 TeV
Abstract Measurements of differential cross sections for inclusive very forward jet production in proton-lead collisions as a function of jet energy are presented. The data were collected with the CMS experiment at the LHC in the laboratory pseudorapidity range −6.6 < η < −5.2. Asymmetric beam energies of 4 TeV for protons and 1.58 TeV per nucleon for Pb nuclei were used, corresponding to a...
Measurement of the energy density as a function of pseudorapidity in proton–proton collisions at √s = 13 TeV
A measurement of the energy density in proton–proton collisions at a centre-of-mass energy of √s = 13 TeV is presented. The data have been recorded with the CMS experiment at the LHC during low luminosity operations in 2015. The energy density is studied as a function of pseudorapidity in the ranges −6.6 < η < −5.2 and 3.15 < |η...
Measurement of the tt̄ production cross section, the top quark mass, and the strong coupling constant using dilepton events in pp collisions at √s = 13 TeV
A measurement of the top quark–antiquark pair production cross section σ(tt̄) in proton–proton collisions at a centre-of-mass energy of 13 TeV is presented. The data correspond to an integrated luminosity of 35.9 fb−1, recorded by the CMS experiment at the CERN LHC in 2016. Dilepton events...
Search for vector-like quarks in events with two oppositely charged leptons and jets in proton–proton collisions at √s = 13 TeV
A search for the pair production of heavy vector-like partners T and B of the top and bottom quarks has been performed by the CMS experiment at the CERN LHC using proton–proton collisions at √s = 13 TeV. The data sample was collected in 2016 and corresponds to an integrated luminosity of 35.9 fb−1. Final states...
Measurements of the pp → WZ inclusive and differential production cross sections and constraints on charged anomalous triple gauge couplings at √s = 13 TeV
Abstract The WZ production cross section is measured in proton-proton collisions at a centre-of-mass energy √s = 13 TeV using data collected with the CMS detector, corresponding to an integrated luminosity of 35.9 fb−1. The inclusive cross section is measured to be σtot(pp → WZ) = 48.09 +1.00 −0.96 (stat) +0.44 −0.37 (theo) +2.39 −2.17 (syst) ± 1.39 (lum) pb, resulting in...
Search for nonresonant Higgs boson pair production in the bb̄bb̄ final state at √s = 13 TeV
Abstract Results of a search for nonresonant production of Higgs boson pairs, with each Higgs boson decaying to a bb̄ pair, are presented. This search uses data from proton-proton collisions at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 35.9 fb−1, collected by the CMS detector at the LHC. No signal is observed, and...
Search for contact interactions and large extra dimensions in the dilepton mass spectra from proton-proton collisions at √s = 13 TeV
Abstract A search for nonresonant excesses in the invariant mass spectra of electron and muon pairs is presented. The analysis is based on data from proton-proton collisions at a center-of-mass energy of 13 TeV recorded by the CMS experiment in 2016, corresponding to a total integrated luminosity of 36 fb−1. No significant deviation from the standard model is observed. Limits are set at 95% confidence...
Ready Player: the use of virtual reality in paediatric asthma education
Francis J Real, Matthew Zackoff, Andrew F Beck, Melissa D Klein
Medical Education > 53 > 5 > 519 - 520
Measurement of the top quark mass in the all-jets final state at √s = 13 TeV and combination with the lepton+jets channel
A top quark mass measurement is performed using 35.9 fb−1 of LHC proton–proton collision data collected with the CMS detector at √s = 13 TeV. The measurement uses the tt̄ all-jets final state. A kinematic fit is performed to reconstruct the decay of the tt̄ system...
Search for resonant production of second-generation sleptons with same-sign dimuon events in proton–proton collisions at √s = 13 TeV
A search is presented for resonant production of second-generation sleptons (μ̃L, ν̃μ) via the R-parity-violating coupling λ′211 to quarks, in events with two same-sign muons and at least two jets in the final state. The smuon (muon sneutrino) is expected to decay into a muon and a neutralino (chargino),...
Search for resonant tt̄ production in proton-proton collisions at √s = 13 TeV
Abstract A search for a heavy resonance decaying into a top quark and antiquark (tt̄) pair is performed using proton-proton collisions at √s = 13 TeV. The search uses the data set collected with the CMS detector in 2016, which corresponds to an integrated luminosity of 35.9 fb−1. The analysis considers three exclusive...
Search for excited leptons in ℓℓγ final states in proton-proton collisions at √s = 13 TeV
Abstract A search is presented for excited electrons and muons in ℓℓγ final states at the LHC. The search is based on a data sample corresponding to an integrated luminosity of 35.9 fb−1 of proton-proton collisions at a center-of-mass energy of 13 TeV, collected with the CMS detector in 2016. This is the first search for excited leptons at √s = 13 TeV. The observation is consistent...
How to calculate the prime number when quadratic residue is known?
by Sejal Gupta Last Updated July 11, 2019 19:20 PM
[(x^2 % prime_number) = a] — find the prime number when x and a are given, for many test cases.
Tags : prime-numbers quadratic-residues quadratic-reciprocity
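One possible way to attack this (a sketch, not from the original post): since x² ≡ a (mod p) means that p divides x² − a, any valid prime must be a prime factor of n = x² − a (assuming x² > a), so you can factor n and keep the prime factors p > a that reproduce the residue. A rough Python sketch using plain trial division:

def candidate_primes(x, a):
    # p must divide n = x^2 - a, so collect the prime factors of n
    n = x * x - a
    if n <= 1:
        return []
    factors = set()
    d = 2
    while d * d <= n:          # simple trial division; fine for small inputs
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    # a residue must satisfy a < p, and p must actually reproduce it
    return sorted(p for p in factors if p > a and pow(x, 2, p) == a)

print(candidate_primes(10, 2))   # 10^2 - 2 = 98 = 2 * 7^2, and 10^2 % 7 == 2, so [7]

For many test cases you would call this once per (x, a) pair; if several prime factors survive the check, the problem presumably has extra constraints to pick one.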
ellipsix informatics
Square wheels
Posted by David Zaslavsky on April 14, 2012 11:59 PM
It hasn't escaped my notice that Mythbusters is back with a new season! Actually, it's not really that new, since we're now three (well, now four) weeks in, but I missed the first two episodes since I was out of the country. But it works out because this (actually last) week's myth is full of interesting physics to analyze!
This past Sunday, Adam and Jamie tested the myth that if you're driving fast enough, square wheels can actually provide a surprisingly smooth ride. At first, the idea of square wheels working at all, much less actually being smooth, can seem a little wacky, but with a bit of physical intuition, it's not hard to convince yourself that it's actually pretty plausible. As they explained in the show, the reason a square wheel is expected to bounce you up and down is that the distance from the axle to the bottom of the wheel changes as it turns. If you're going slowly, every time the wheel tips over another corner, it's going to fall down until its side is resting against the ground, taking you with it. But if you speed up enough, the wheel won't have time to fall very far before it rotates through a quarter turn, and the next corner gets under it to hold it up.
Simple model: a slowly turning wheel
You can actually calculate about how fast you would need to go to do this. Let's consider just a single square wheel, and at first, suppose it's going really slowly. That way, the wheel is going to constantly stay in contact with the ground. There are basically two "phases" in the cycle of a slowly turning square wheel:
Starting from a position where the wheel is "perched" on its corner with the corner pointing down, it's going to first just pivot forward around that corner, and fall down on its side.
After that, it'll pivot up around the next corner, until that next corner is now pointing down.
Diagram of rotating square wheel
Suppose the wheel has a side length of \(2r\) and is rotating at angular speed \(\omega\), which I'm going to assume is constant for simplicity. Based on geometry, the height of the first corner relative to the wheel's center is
$$y_1(t) = -\sqrt{2}r\cos(\omega t)$$
and since the second corner trails by an angle of \(\frac{\pi}{2}\) (a quarter circle), its height relative to the center is going to be
$$y_2(t) = -\sqrt{2}r\cos\biggl(\omega t - \frac{\pi}{2}\biggr)$$
At any given time, the height of the lowest point on the wheel relative to its center will be the lesser of these two expressions: \(y_1\) for the first eighth turn, and \(y_2\) for the next eighth. But what we really want is the height of the center of the wheel above the ground, which will be the negative of that minimum:
$$y_\text{slow}(t) = \begin{cases}\sqrt{2}r\cos(\omega t) & 0 \le t < \frac{\pi}{4\omega} \\ \sqrt{2}r\cos\biggl(\omega t - \dfrac{\pi}{2}\biggr) & \frac{\pi}{4\omega} \le t < \frac{\pi}{2\omega}\end{cases}$$
This function tells us the height of the truck as a function of time as one quarter turn of the wheel elapses. It looks like this:
Height function for slow wheel
From this, we should be able to figure out how bumpy the ride on this slowly turning wheel would be. But that brings up another question: how exactly do you measure bumpiness?
Think back to the last time you were on a car driving on a road with a lot of potholes, or any other rough surface. What makes it unpleasant is that you get shaken up and down a lot. The larger the vibrations, the rougher the ride. So it makes sense to say that our measure of bumpiness should be related to the distance by which the car bounces up and down in a cycle — in other words, the maximum height minus the minimum height, which is often called peak-to-peak amplitude.
But if you think about it, the time scale over which these oscillations occur is also important. When you drive up and down a mountain, that's a huge bounce, but it doesn't feel like it because it's so slow. So the "bumpiness metric" should also be anti-correlated to the cycle time: quicker bumps at the same amplitude have more of an effect. Accordingly, I'm going to define a simple measure of bumpiness as the ratio of the peak-to-peak amplitude to the period for one up-and-down cycle of oscillation (which is actually a quarter cycle of the wheel). I'm sure there are more complicated (and more realistic) ways to define bumpiness, but this one should be good enough to make my point here.
For the model of a slowly rotating square wheel, we can find the peak-to-peak amplitude using the maximum value of \(y(t)\), which occurs at \(t = 0\) (and again at \(t = \pi/2\omega\)), and the minimum value, which occurs at \(t = \pi/4\omega\).
$$B = \frac{\sqrt{2}r\cos(0) - \sqrt{2}r\cos\bigl(\omega\times\frac{\pi}{4\omega}\bigr)}{\frac{\pi}{2\omega}} = \frac{2}{\pi}(\sqrt{2} - 1)r\omega$$
Speeding it up
With the simple slow wheel model out of the way, let's see what happens when you speed the wheel's rotation up. The most important change comes from the fact that there is nothing actually holding the wheel's surface to the ground. In the first part of the cycle, the only thing pulling the wheel down is gravity, and gravity can't accelerate it any faster than \(\SI{9.8}{m/s^2}\). So if our slow-wheel model says that the wheel should be moving downward at faster than \(g = \SI{9.8}{m/s^2}\), that is if \(y''(t) < -g\), then we've got a problem.
Of course, it's not hard to figure out when this actually does happen, using basic Newtonian mechanics. There are two relevant forces acting on the wheel, gravity and the normal force from the ground. Their relationship is given by Newton's second law, \(\sum F = ma\), or in this case:
$$-mg + F_N = my'' = -\sqrt{2}mr\omega^2\cos(\omega t)$$
For a slowly rotating wheel, \(\omega\) is small, and thus \(F_N\) will need to be positive to make this equation true. Once \(\omega\) gets large enough that \(-mg > -\sqrt{2}mr\omega^2\cos(\omega t)\), though, there will be no zero or positive value of \(F_N\) that can make the equation true. That's when the wheel is going to leave the ground. This will happen at \(t = 0\) as long as
$$\omega^2 > \frac{g}{\sqrt{2}r}$$
If the normal force is going to be zero, the wheel won't be touching the ground as it rotates. Instead, it's going to be in free fall for some amount of time. That's an easy situation to analyze; the height of an object in free fall is just \(y = y_0 + v_0 (t - t_0) - g(t - t_0)^2/2\), and since in this case the free-fall phase starts at time 0 with zero vertical velocity, that just reduces to
$$y_\text{free}(t) = \sqrt{2}r - \frac{gt^2}{2}$$
The wheel will remain in free fall as long as this height \(y_\text{free}\) is greater than the height difference between the center of the wheel and its lowest point. The latter quantity is something we've already calculated: it's \(y_\text{slow}\). So we need to identify the first nonzero time at which \(y_\text{free}(t) = y_\text{slow}(t)\), the solution to
$$\sqrt{2}r - \frac{gt^2}{2} = \begin{cases}\sqrt{2}r\cos(\omega t) & 0 \le t < \frac{\pi}{4\omega} \\ \sqrt{2}r\cos\biggl(\omega t - \dfrac{\pi}{2}\biggr) & \frac{\pi}{4\omega} \le t < \frac{\pi}{2\omega}\end{cases}$$
This is a little tricky for a couple of reasons: first, it's a transcendental equation, because it involves both a polynomial in \(t\) and a trigonometric function of \(t\). That means you can't write the solution as a symbolic function. You can still solve it numerically, though, and that's what I'm going to do shortly. The other issue is that it has a piecewise function on the right. That's not that hard to deal with, at least not if you have numbers for everything; you can just solve the first case, and see if the solution you get satisfies the condition for that case (\(0 \le t < \frac{\pi}{4\omega}\)); if not, then the solution comes from the second piece. It turns out that in our situation, because of the requirement \(\omega^2 > g/\sqrt{2}r\), the solution is almost always going to come from the second case; for most values of \(\omega\) the first corner of the wheel never touches the ground again after the very beginning of the cycle, and the small region of \(\omega\) where that's not the case is basically negligible. (Plus it would take a whole other post to do that analysis properly) So we can reduce this last equation to
$$\sqrt{2}r - \frac{gt^2}{2} = \sqrt{2}r\cos\biggl(\omega t - \dfrac{\pi}{2}\biggr)$$
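In case you want to follow along, here's roughly how that crossing time can be found numerically — a quick Python sketch using SciPy's brentq root finder, with the 50 cm wheel from later in this post and an arbitrary example rotation rate (these particular numbers aren't from the show):

import numpy as np
from scipy.optimize import brentq

g = 9.8       # m/s^2
r = 0.25      # m; the square's side is 2r = 50 cm
omega = 10.0  # rad/s, an example value above the critical rate sqrt(g/(sqrt(2)*r))

def gap(t):
    # free-fall height of the center minus the height the second corner would impose
    free_fall = np.sqrt(2)*r - 0.5*g*t**2
    corner = np.sqrt(2)*r*np.cos(omega*t - np.pi/2)
    return free_fall - corner

# gap(0) > 0 and gap(pi/(2*omega)) < 0, so the root is bracketed
t2 = brentq(gap, 0.0, np.pi/(2*omega))
print(t2)     # the time at which the second corner catches the wheel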
The solution to this tells us when the second corner of the wheel is going to hit the ground. Call that solution \(t_2\). Then the height as a function of time for a square wheel which does not have to be moving slowly is
$$y(t) = \begin{cases}\sqrt{2}r - \dfrac{gt^2}{2} & 0 \le t < t_2 \\ \sqrt{2}r\cos\biggl(\omega t - \dfrac{\pi}{2}\biggr) & t_2 \le t < \frac{\pi}{2\omega}\end{cases}$$
This function looks like this:
Height function for fast wheel
Notice the difference between this and the equivalent graph for a slow wheel (which is included in the background, for comparison). The peak-to-peak amplitude of this motion is considerably less. This will also be reflected in the formula for the bumpiness, which is
$$B = \frac{\sqrt{2}r\cos(0) - \sqrt{2}r\cos\bigl(\omega t_2 - \frac{\pi}{2}\bigr)}{\frac{\pi}{2\omega}} = \frac{2\sqrt{2}}{\pi}\bigl(1 - \sin(\omega t_2)\bigr)r\omega$$
To recap, our measure of bumpiness over all possible rotational frequencies is given by the following piecewise function:
$$B = \begin{cases}\frac{2}{\pi}(\sqrt{2} - 1)r\omega & \omega^2 \le \frac{g}{\sqrt{2}r} \\ \frac{2\sqrt{2}}{\pi}\bigl(1 - \sin(\omega t_2)\bigr)r\omega & \omega^2 > \frac{g}{\sqrt{2}r}\end{cases}$$
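If you'd like to evaluate this piecewise formula yourself, here's a minimal Python sketch of it (same 50 cm wheel; I've left out the conversion from rotation rate to forward speed, since that depends on how you model the rolling motion):

import numpy as np
from scipy.optimize import brentq

def bumpiness(omega, r=0.25, g=9.8):
    # peak-to-peak amplitude divided by the quarter-turn period, in m/s
    period = np.pi/(2*omega)
    if omega**2 <= g/(np.sqrt(2)*r):
        # slow regime: the wheel never leaves the ground
        amplitude = np.sqrt(2)*r*(1 - np.cos(np.pi/4))
    else:
        # fast regime: free fall until the second corner catches the wheel at t2
        t2 = brentq(lambda t: (np.sqrt(2)*r - 0.5*g*t**2)
                              - np.sqrt(2)*r*np.cos(omega*t - np.pi/2),
                    0.0, period)
        amplitude = np.sqrt(2)*r*(1 - np.sin(omega*t2))
    return amplitude/period

print(bumpiness(5.0), bumpiness(10.0), bumpiness(30.0))   # bumpiness at a few rotation rates

Evaluating it over a range of rotation rates (and mapping those to road speeds) is what produces the curve below.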
For a square wheel with a \(\SI{50}{cm}\) side length, a plot of this function looks like this:
Bumpiness plot
The horizontal axis shows speed in meters per second. There's a peak in the graph around \(\SI{6.2}{m/s}\), or \(\SI{14}{mph}\), and after that it starts going down — which means that for this size of wheel, once you pass 14 miles per hour, the ride actually should start getting smoother! And that's pretty close to what Jamie and Adam actually observed in the show.
Hemodynamics of cerebral bridging veins connecting the superior sagittal sinus based on numerical simulation
Youyu Zhu1,
Feng Wang1 &
Xuefei Deng1
The physiological and hemodynamic features of bridging veins involve the wall shear stress (WSS) of the cerebral venous system. Based on cadaver data and a computational fluid dynamics software package, hemodynamic physical models of bridging veins (BVs) connecting the superior sagittal sinus (SSS) were established.
A total of 137 BVs formed two clusters along the SSS: an anterior group and a posterior group. The diameters of the BVs in the posterior group were larger than those in the anterior group, and their entry angles were smaller. When the diameter of a BV was greater than 1.2 mm, the WSS decreased in the downstream wall of the SSS when the entry angle was less than 105°, and the WSS also decreased in the upstream wall of the BV when the entry angle was less than 65°. The minimum WSS in BVs was only 63% of that in the SSS. Compared with the BVs of the anterior group, the minimum WSS in the posterior group was smaller, and the distance from the location of the minimum WSS to the dural entrance was longer.
Cerebral venous thrombosis occurs more easily when the diameter of a BV is greater than 1.2 mm and the entry angle is less than 65°. The embolus may form earlier in the upstream wall of BVs in the posterior part of the SSS.
Compared with the cerebral arterial system, the cerebral venous system is usually asymmetric and far more variable, which makes it prone to venous thrombosis and a variety of neurological disorders. With the development of medical imaging, especially the rapid development of magnetic resonance technology [1,2,3], diseases of the cerebral venous system have become better recognized and more highly valued in clinical practice. This has prompted research into the hemodynamics of the cerebral venous system. Cerebral venous thrombosis is one of the most common cerebral venous diseases [4]. Without timely treatment, patients often develop intracranial hemorrhage, cerebral edema, venous infarction and even die [5]. Even among clinically cured cases, a considerable number of patients are left with varying degrees of sequelae [5, 6] and a significantly decreased quality of life. This is largely because the diagnosis is not made in time, so that the optimal window for treatment is missed.
The direct or indirect signs of thrombosis on radiographic images are an important basis for the diagnosis of cerebral venous thrombosis [7, 8]. The early clinical symptoms of most patients with thrombosis are atypical, with no obvious manifestation of obstructed venous return. The limitations of imaging technology and the difficulty of determining the location of a thrombus have made the early diagnosis of cerebral venous thrombosis difficult [7]. Therefore, improving early diagnosis has become an urgent problem in the study of cerebral venous thrombosis.
An international collaboration involving 21 countries (including Portugal, the Netherlands, France, and Mexico) showed that cerebral venous thrombosis occurs mainly in the superior sagittal sinus and the bridging veins that connect to it [8], as illustrated in Figs. 1 and 2. However, the reason for this predilection is not clear. In this study, we hypothesized that the cerebral bridging veins connecting to the superior sagittal sinus have specific morphological characteristics that make these parts of the bridging veins and superior sagittal sinus susceptible to pathogenic factors, leading to the formation of thrombus.
Anatomical picture of bridging veins (yellow arrow) entering the superior sagittal sinus (red line)
Bridging veins (yellow arrow) entering the superior sagittal sinus (red line) in lateral view (a) and anteroposterior view (b) of DSA, CTV (c) and MRV (d)
Changes in hemodynamics, such as in wall shear stress (WSS), are an important factor in the formation of thrombus [9,10,11]. The WSS acts on vascular endothelial cells and is parallel to the long axis of the vessel [12]. An adequate level of WSS helps generate anticoagulant factors, inhibit leukocyte adhesion and limit smooth muscle proliferation [13,14,15,16,17,18]. The reference value of WSS in the arterial system is 1–7 Pa, while that in the venous system is 0.1–0.6 Pa [19]. When the WSS is significantly lower than normal, the sharp reduction in anticoagulant substances, enhanced leukocyte adhesion and smooth muscle proliferation can lead to thrombosis, atherosclerosis and other diseases [19, 20]. There is also convincing evidence that, compared with a low but steady WSS, sharp changes in WSS are even more likely to lead to disease [21, 22].
At present, computational fluid dynamics (CFD) is widely used internationally to simulate the movement of blood and other fluids. In the medical field, CFD has been applied extensively to simulate the occurrence and development of atherosclerosis, aortic dissection, aneurysm and other arterial diseases [23,24,25,26,27,28,29]; however, hemodynamic simulation of the venous system has not been reported. Therefore, in this study, hemodynamic physical models were established from microanatomical observation data and CFD in order to determine the morphological features associated with thrombosis and to find the predilection sites of thrombus. On this basis, an explanation of the pathogenesis of cerebral venous thrombosis and guidance for its imaging diagnosis are provided.
Micro-dissection
Six formalin-fixed adult cadaver brains (12 sides), three male and three female, aged 42 ± 9 years (range 34–59 years), were provided by the Department of Anatomy of Anhui Medical University. After the calvaria was removed by conventional craniotomy, residual blood in the superior sagittal sinus and internal jugular veins was flushed out through intubation, and blue latex was then injected into the superior sagittal sinus and internal jugular veins.
After 48 h, the dura mater was cut within 25 mm of the superior sagittal sinus, the adhesions between the dura mater and arachnoid mater were carefully removed, and the bridging veins entering the superior sagittal sinus were carefully separated. The bridging veins were found to cluster in the anterior and posterior segments of the superior sagittal sinus. In accordance with previously described segmentation criteria [30], the bridging veins were divided into two groups: an anterior group and a posterior group. The diameter of each bridging vein and the angle at which it entered the superior sagittal sinus (entry angle) were measured.
Computational fluid analysis
Models of a single cerebral bridging vein entering the superior sagittal sinus were built from the anatomical data with the CFD software ANSYS Fluent. The inlet boundary condition was a prescribed entrance velocity. According to the measurements of Chen et al. in patients undergoing selective craniotomy [31], the inlet velocity of the superior sagittal sinus was set to 15 cm/s and that of the bridging veins to 10 cm/s. The outlet boundary condition was zero pressure. The wall was assumed to be smooth, and a no-slip condition was specified at the wall. The ambient pressure was the intracranial pressure of 1333 Pa, with a fluid density of 1050 kg/m3 and a viscosity of 4.24 × 10−3 Pa s (normal blood).
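As a rough order-of-magnitude cross-check of these settings (not part of the original analysis), the WSS expected in a long straight circular tube can be estimated from the Poiseuille relation τ = 4μv/R; the effective radius below is an assumed illustrative value, not a measurement from this study:

# Poiseuille-type sanity check of the expected WSS magnitude (Python)
mu = 4.24e-3   # Pa·s, blood viscosity used in the models
v_sss = 0.15   # m/s, SSS inlet velocity used in the models
R_sss = 4e-3   # m, assumed effective radius of the SSS (illustrative only)

tau_sss = 4 * mu * v_sss / R_sss
print(tau_sss)  # about 0.6 Pa, the same order as the simulated stable WSS values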
Statistical treatment
The data were processed with the statistical software SPSS and are expressed as \( \bar{x} \pm {\text{s}} \) (min–max). Differences between groups were compared by one-way ANOVA.
Diameter and entry angle of the bridging veins
A total of 137 bridging veins were observed; 62 entered the anterior segment of the superior sagittal sinus (anterior group), with diameters of 2.0 ± 0.9 mm and entry angles of 93 ± 34°, while 75 entered the posterior segment (posterior group), with diameters of 3.0 ± 1.1 mm and entry angles of 43 ± 25°. Compared with the anterior group, the bridging veins of the posterior group had larger diameters and markedly smaller entry angles (Figs. 1, 3, Table 1).
Establishment of the hemodynamic physical model. A–C Acquisition of the morphological data. The entry angle is > 90° in A, ≈ 90° in B and < 90° in C, respectively. D The grid after meshing and vessel boundary: SSS superior sagittal sinus; BV bridging vein; Black circle dural entrance, the point at which the BV enters the SSS; α entry angle, the angle at which the BV enters the SSS; I-SSS inlet of SSS; I-BV inlet of BV; O-SSS outlet of SSS; SSU upstream wall of SSS from the dural entrance; SSD downstream wall of SSS from the dural entrance; SSO opposite wall of SSS from the dural entrance; BVU upstream wall of BV from the dural entrance; BVD downstream wall of BV from the dural entrance
Table 1 Diameter and angle of bridging veins entering the superior sagittal sinus
Stable value of wall shear stress in different vascular wall
According to the microsurgical anatomy data, 137 models of cerebral bridging veins entering the superior sagittal sinus were built; the definition of the vessel walls of the superior sagittal sinus and bridging vein is shown in Fig. 3D. The mean WSS over a wall region D is then calculated as:
$$ \overline{WSS} = \frac{\iiint_{D} \tau_{w}(x,y,z)\,dx\,dy\,dz}{\Vert D \Vert}, $$
where \(\tau_{w}\) is the local WSS on the wall, \(x\), \(y\) and \(z\) are the spatial coordinates, \(D\) is the wall region over which the average is taken, and \(\Vert D \Vert\) is its volume.
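In a discretized model this integral is evaluated over the mesh cells covering the wall region; a minimal sketch of the corresponding discrete average (the array names are hypothetical, not ANSYS Fluent variables) is:

import numpy as np

def mean_wss(tau_w, cell_volumes):
    # volume-weighted average of the WSS samples over the region D
    tau_w = np.asarray(tau_w, dtype=float)
    cell_volumes = np.asarray(cell_volumes, dtype=float)
    return np.sum(tau_w * cell_volumes) / np.sum(cell_volumes)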
The WSS along all the vascular walls of the cerebral superficial venous system was relatively stable, except at the vessel inlets and near the dural entrance (Figs. 4, 5). Because the WSS on the opposite wall of the superior sagittal sinus from the dural entrance (SSO) differed significantly between the portions before and after the entrance (Fig. 5b), the SSO was divided into two segments: an upstream segment (SSO-U) and a downstream segment (SSO-D).
Line graphs of WSS in SSD and BVU under typical entry angle and diameter. a, b Typical entry angle of BV. c, d Typical diameter of BV. a, c WSS in downstream wall of SSS from the dural entrance (SSD). b, d WSS in upstream wall of BV from the dural entrance (BVU)
Line graphs of WSS in SSU, SSO and BVD. a WSS in upstream wall of SSS from the dural entrance (SSU). b WSS in opposite wall of SSS from the dural entrance (SSO). c WSS in downstream wall of BV from the dural entrance (BVD)
The stable value of the WSS in the whole cerebral superficial venous system was 0.544 ± 0.072 Pa. According to the statistical differences, the stable values fell into three groups: the stable value in the downstream wall of the superior sagittal sinus from the dural entrance (SSD) and in SSO-D was 0.563 ± 0.009 Pa; that in the upstream wall of the bridging vein from the dural entrance (BVU) and in the downstream wall of the bridging vein from the dural entrance (BVD) was 0.619 ± 0.015 Pa; and that in the upstream wall of the superior sagittal sinus from the dural entrance (SSU) and in SSO-U was 0.450 ± 0.007 Pa. The differences in WSS between groups were statistically significant, whereas the differences within each group were not (Fig. 6).
Stable value of WSS along the vessel wall in the cerebrovenous system. According to whether the WSS along different walls has statistical discrepancy, the walls of cerebrovenous system were divided into three groups: 1 SSD (downstream wall of SSS from the dural entrance) and SSO-D (opposite and downstream wall of SSS from the dural entrance), 2 BVU (upstream wall of BV from the dural entrance) and BVD (downstream wall of BV from the dural entrance), 3 SSU (upstream wall of SSS from the dural entrance) and SSO-U (opposite and upstream wall of SSS from the dural entrance)
Comparison of wall shear stress between models with different entry angles and diameters
As shown in Fig. 4, when the BV entry angle was small and the diameter was large, the local WSS in the SSD and BVU decreased significantly. In the other parts of the vessel wall, the differences in WSS among the models were not as pronounced (Fig. 5). The minimum values of the WSS in the SSD and BVU were sorted from low to high and are displayed graphically in Fig. 7a, b. It can be seen that at a minimum value of around 0.017 Pa there is a clear demarcation in the level of WSS.
Minimum WSS in SSD and BVU. a The minimum WSS arrayed from low to high in SSD. b The minimum WSS arrayed from low to high in BVU. c The 3-D scatterplot of minimum WSS in SSD with various diameter and angles. d The 3-D scatterplot of minimum WSS in BVU with various diameter and angles
The corresponding original data are shown as scatter plots in Fig. 7c, d. In the SSD, when the diameters of the bridging veins were less than or equal to 1.2 mm or the entry angles were greater than or equal to 105°, the WSS did not decrease significantly (the minimum value of the WSS remained above 0.017 Pa). In the BVU, when the diameters of the bridging veins were less than or equal to 1.2 mm or the entry angles were greater than or equal to 65°, the WSS likewise did not decrease significantly.
According to the minimum value of the WSS in the scatter plots and graphs, the bridging vein models were divided into three groups by entry angle: (10°, 65°), (65°, 105°) and (105°, 170°), as shown in Table 2. The data for bridging veins with diameters less than or equal to 1.2 mm were not included. It was observed that the WSS decreased significantly regardless of how the entry angles changed.
Table 2 The difference of minimum WSS in the models of BVs with various entry angles
The minimum WSS in the SSD in each group was 0.008 ± 0.001, 0.010 ± 0.001 and 0.338 ± 0.139 Pa, respectively; the minimum value in the (105°, 170°) group was higher than those in the other two groups (P < 0.01). The minimum WSS in the BVU in each group was 0.005 ± 0.002, 0.189 ± 0.126 and 0.728 ± 0.296 Pa, respectively, and the differences between the three groups were statistically significant (P < 0.01). In the (10°, 65°) group, the minimum WSS in the BVU was 63% of that in the SSD, and the difference was statistically significant (P < 0.01).
Comparison of the wall shear stress in the anterior and posterior segments of bridging vein models
The bridging vein models were divided into an anterior group and a posterior group. As displayed in Table 3, in the anterior group the minimum WSS in the SSD was 0.105 ± 0.164 Pa, at a distance of 5.6 ± 9.2 mm from the dural entrance, and the minimum WSS in the BVU was 0.440 ± 0.426 Pa, at a distance of 0.7 ± 1.9 mm from the dural entrance. In the posterior group, the minimum WSS in the SSD was 0.009 ± 0.001 Pa, at a distance of 9.0 ± 6.1 mm from the dural entrance, and the minimum WSS in the BVU was 0.043 ± 0.081 Pa, at a distance of 2.5 ± 2.6 mm from the dural entrance (detailed data are shown in Additional file 1). Compared with the anterior group, the minimum WSS in the posterior group was smaller and its average distance from the dural entrance was longer.
Table 3 The differences of minimum WSS in anterior and posterior groups models
The calculation process of CFD is divided into five steps: geometric modeling, meshing, setting the boundary conditions, solving and post-processing. The geometry of the BV physical models, the meshing method and the choice of boundary conditions may all influence the calculation results, and the geometry of the physical models is considered the most critical factor in determining whether the results are correct [32]. In this study, the geometry of the physical models was derived from microsurgical anatomy photographs and measurements; this conforms to reality and helps to obtain more accurate model analysis results.
The WSS arises from friction between the flowing blood and the fixed vascular wall. A WSS of adequate and stable magnitude helps generate anticoagulant factors, inhibit leukocyte adhesion and limit smooth muscle proliferation [20]. Owing to the lack of relevant literature, it is difficult to determine what level of WSS should be considered abnormal in the venous system. The results of this study show that the sorted minimum-WSS curves change most drastically at about 0.017 Pa. Therefore, a WSS of less than 0.017 Pa was taken as the reference index for judging abnormal WSS.
In this study, 137 models were established from the anatomical data, and the WSS decreased significantly in the downstream wall of the superior sagittal sinus from the dural entrance and in the upstream wall of the bridging vein from the dural entrance. As can be seen from the scatter plots of the minimum WSS (Fig. 7), when the diameters of the bridging veins were ≤ 1.2 mm the minimum value of the WSS remained above 0.017 Pa; that is, the WSS did not decrease significantly. Thus, when the diameter of a bridging vein is ≤ 1.2 mm, the hemodynamics of the superior sagittal sinus do not change significantly regardless of the entry angle, and cerebral venous thrombosis is unlikely to form.
This study found that, in the models with bridging vein diameters > 1.2 mm, the WSS decreased in the downstream wall of the superior sagittal sinus from the dural entrance when the entry angle was less than 105°, with a minimum WSS below 0.014 Pa. When 65° < entry angle < 105°, the minimum WSS was located 3.3 ± 1.8 mm from the dural entrance; when the entry angle was < 65°, it was located on average 9.4 ± 3.2 mm from the dural entrance. The latter distance is clearly greater; that is, the region of reduced WSS is larger. When the entry angle was smaller than 65°, the shear stress in the upstream wall of the bridging vein from the dural entrance also decreased significantly, with a minimum WSS of 0.005 ± 0.002 Pa. A reduction in WSS is an important factor in thrombus formation [20], and the larger the region of reduced WSS, the more prone that area is to thrombosis. Therefore, the harmful morphological characteristics of bridging veins were found to be an entry angle into the superior sagittal sinus smaller than 65° and a diameter greater than 1.2 mm.
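Purely as an illustration, these reported thresholds can be written as a simple decision rule (a sketch based on the values above, not a validated clinical criterion):

def low_wss_expected(diameter_mm, entry_angle_deg):
    # walls expected to show markedly reduced WSS (below ~0.017 Pa) at a BV-SSS junction
    if diameter_mm <= 1.2:
        return []                  # no marked WSS reduction expected
    walls = []
    if entry_angle_deg < 105:
        walls.append("SSD")        # downstream SSS wall
    if entry_angle_deg < 65:
        walls.append("BVU")        # upstream BV wall
    return walls

print(low_wss_expected(3.0, 43))   # a typical posterior BV -> ['SSD', 'BVU']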
Previous studies have indicated that cerebral venous thrombosis usually occurs in the dural sinuses and extends into the bridging veins, while isolated bridging vein thrombosis is rarely seen [8]. Niggemann et al. reported a case of isolated bridging vein thrombosis and considered that cerebral venous thrombosis is more likely to originate in the bridging veins [33]. The results of this study support this view. When the entry angle of a bridging vein into the superior sagittal sinus is smaller than 65° and its diameter is greater than 1.2 mm, the minimum WSS in the downstream superior sagittal sinus wall is 0.008 Pa, while that in the upstream bridging vein wall is 0.005 Pa. Compared with the superior sagittal sinus wall, the WSS in the bridging vein wall is reduced more markedly and the vessel wall is more easily damaged. Therefore, thrombosis is more likely to occur in the bridging veins than in the superior sagittal sinus.
The BV models were divided into two groups according to the segment of the superior sagittal sinus they entered. Compared with the anterior group, the bridging veins of the posterior group had larger diameters and smaller entry angles into the superior sagittal sinus. Bridging veins with a large diameter and a small entry angle may lead to a decrease in WSS; accordingly, compared with the anterior group, the minimum WSS in the posterior group was smaller and its distance from the dural entrance was greater. The distance from the point of minimum WSS to the dural entrance was 2.9 ± 2.5 (0.3–13.5) mm, and the lowest WSS lies at the center of the region of reduced WSS, so the extent of that region is about twice this distance, namely 5.7 ± 5.1 (0.6–27.0) mm. As a result, the predilection site of thrombosis is the upstream wall of the cerebral bridging vein from the dural entrance, within about 27 mm of the entrance.
The collateral circulation of the bridging veins is abundant [34]. Owing to the compensatory effect of adjacent veins, thrombotic occlusion of one or a few bridging veins usually does not cause obvious clinical symptoms. Superior sagittal sinus thrombosis, however, obstructs the outflow of all draining veins upstream of the lesion, with only varying degrees of compensation; this leads to complications such as cerebral hemorrhage, cerebral edema and venous infarction, which are relatively difficult to treat [35]. The results of this study indicate that thrombosis is more likely to begin in the bridging veins; as the disease process progresses, it can gradually extend into the superior sagittal sinus.
Our data suggest that cerebral venous thrombosis occurs more easily when the diameter of a BV is greater than 1.2 mm and the entry angle is less than 65°, and that the embolus forms earlier in the upstream wall of BVs in the posterior part of the SSS. Therefore, in the early stages of the disease, these predilection sites should be examined carefully on imaging to enable early detection of thrombus. Migration of the lesion to the superior sagittal sinus can then be avoided by active treatment, which is of great significance for the prognosis of the disease and for reducing the incidence of complications.
BV:
bridging vein
BVD:
downstream wall of BV from the dural entrance
BVU:
upstream wall of bridging vein from the dural entrance
CTV:
computed tomographic venography
DSA:
digital subtraction angiography
MRV:
magnetic resonance venography
SSS:
superior sagittal sinus
SSU:
upstream wall of SSS from the dural entrance
SSD:
downstream wall of SSS from the dural entrance
SSO:
opposite wall of SSS from the dural entrance
Schuchardt F, Schroeder L, Anastasopoulos C, Markl M, Bauerle J, Hennemuth A, Drexl J, Valdueza JM, Mader I, Harloff A. In vivo analysis of physiological 3D blood flow of cerebral veins. Eur Radiol. 2015;25(8):2371–80.
Seo H, Choi DS, Shin HS, Cho JM, Koh EH, Son S. Bone subtraction 3D CT venography for the evaluation of cerebral veins and venous sinuses: imaging techniques, normal variations, and pathologic findings. AJR Am J Roentgenol. 2014;202(2):W169–75.
Xia XB, Tan CL. A quantitative study of magnetic susceptibility-weighted imaging of deep cerebral veins. J Neuroradiol. 2013;40(5):355–9.
Bousser MG, Ferro JM. Cerebral venous thrombosis: an update. Lancet Neurol. 2007;6(2):162–70.
de Bruijn SF, de Haan RJ, Stam J. Clinical features and prognostic factors of cerebral venous sinus thrombosis in a prospective series of 59 patients. For the cerebral venous sinus thrombosis study group. J Neurol Neurosurg Psychiatry. 2001;70(1):105–8.
Stolz E, Rahimi A, Gerriets T, Kraus J, Kaps M. Cerebral venous thrombosis: an all or nothing disease? Prognostic factors and long-term outcome. Clin Neurol Neurosurg. 2005;107(2):99–107.
Wasay M, Azeemuddin M. Neuroimaging of cerebral venous thrombosis. J Neuroimaging. 2005;15(2):118–28.
Ferro JM, Canhao P, Stam J, Bousser MG, Barinagarrementeria F. Prognosis of cerebral vein and dural sinus thrombosis: results of the International Study on Cerebral Vein and Dural Sinus Thrombosis (ISCVT). Stroke. 2004;35(3):664–70.
Liu X, Peng C, Xia Y, Gao Z, Xu P, Wang X, Xian Z, Yin Y, Jiao L, Wang D, et al. Hemodynamics analysis of the serial stenotic coronary arteries. Biomed Eng Online. 2017;16(1):127.
Yang Y, Liu X, Xia Y, Liu X, Wu W, Xiong H, Zhang H, Xu L, Wong KKL, Ouyang H, Huang W. Impact of spatial characteristics in the left stenotic coronary artery on the hemodynamics and visualization of 3D replica models. Sci Rep. 2017;7(1):15452.
Xu P, Liu X, Song Q, Chen G, Wang D, Zhang H, Yan L, Liu D, Huang W. Patient-specific structural effects on hemodynamics in the ischemic lower limb artery. Sci Rep. 2016;6:39225.
Lee SJ, Choi W, Seo E, Yeom E. Association of early atherosclerosis with vascular wall shear stress in hypercholesterolemic zebrafish. PLoS ONE. 2015;10(11):e0142945.
Hsu S, Chu JS, Chen FF, Wang A, Li S. Effects of fluid shear stress on a distinct population of vascular smooth muscle cells. Cell Mol Bioeng. 2011;4(4):627–36.
Li J, Zhang K, Yang P, Liao Y, Wu L, Chen J, Zhao A, Li G, Huang N. Research of smooth muscle cells response to fluid flow shear stress by hyaluronic acid micro-pattern on a titanium surface. Exp Cell Res. 2013;319(17):2663–72.
Shav D, Gotlieb R, Zaretsky U, Elad D, Einav S. Wall shear stress effects on endothelial-endothelial and endothelial-smooth muscle cell interactions in tissue engineered models of the vascular wall. PLoS ONE. 2014;9(2):e88304.
Dunn J, Simmons R, Thabet S, Jo H. The role of epigenetics in the endothelial cell shear stress response and atherosclerosis. Int J Biochem Cell Biol. 2015;67:167–76.
Yamamoto K, Ando J. Vascular endothelial cell membranes differentiate between stretch and shear stress through transitions in their lipid phases. Am J Physiol Heart Circ Physiol. 2015;309(7):H1178–85.
Wu J, Liu G, Huang W, Ghista DN, Wong KKL. Transient blood flow in elastic coronary arteries with varying degrees of stenosis and dilatations: CFD modelling and parametric study. Comput Methods Biomech Biomed Eng. 2014;18(16):1835–45.
Malek AM, Alper SL, Izumo S. Hemodynamic shear stress and its role in atherosclerosis. JAMA. 1999;282(21):2035–42.
Cunningham KS, Gotlieb AI. The role of shear stress in the pathogenesis of atherosclerosis. Lab Invest. 2005;85(1):9–23.
Nagel T, Resnick N, Dewey CF Jr, Gimbrone MA Jr. Vascular endothelial cells respond to spatial gradients in fluid shear stress by enhanced activation of transcription factors. Arterioscler Thromb Vasc Biol. 1999;19(8):1825–34.
White CR, Haidekker M, Bao X, Frangos JA. Temporal gradients in shear, but not spatial gradients, stimulate endothelial cell proliferation. Circulation. 2001;103(20):2508–13.
Sarifuddin, Chakravarty S, Mandal PK, Layek GC. Numerical simulation of unsteady generalized Newtonian blood flow through differently shaped distensible arterial stenoses. J Med Eng Technol. 2008;32(5):385–99.
Su B, Huo Y, Kassab GS, Kabinejadian F, Kim S, Leo HL, Zhong L. Numerical investigation of blood flow in three-dimensional porcine left anterior descending artery with various stenoses. Comput Biol Med. 2014;47:130–8.
Altnji HE, Bou-Said B, Walter-Le Berre H. Morphological and stent design risk factors to prevent migration phenomena for a thoracic aneurysm: a numerical analysis. Med Eng Phys. 2015;37(1):23–33.
Cong Y, Wang L, Liu X. A numerical study of fluid-structure coupled effect of abdominal aortic aneurysm. Biomed Mater Eng. 2015;26(Suppl 1):S245–55.
Marrero VL, Tichy JA, Sahni O, Jansen KE. Numerical study of purely viscous non-Newtonian flow in an abdominal aortic aneurysm. J Biomech Eng. 2014;136(10):101001.
Qiao A, Zeng K. Numerical simulation of hemodynamics in intracranial saccular aneurysm treated with a novel stent. Neurol Res. 2013;35(7):701–8.
Liu X, Gao Z, Xiong H, Ghista D, Ren L, Zhang H, Wu W, Huang W, Hau WK. Three-dimensional hemodynamics analysis of the circle of Willis in the patient-specific nonintegral arterial structures. Biomech Model Mechanobiol. 2016;15(6):1439–56.
Han H, Tao W, Zhang M. The dural entrance of cerebral bridging veins into the superior sagittal sinus: an anatomical comparison between cadavers and digital subtraction angiography. Neuroradiology. 2007;49(2):169–75.
Chen Y, Zhang R, Lian J, Luo F, Han H, Deng X. Flow dynamics of cerebral bridging veins entering superior sagittal sinus by color-coded duplex sonography. J Med Imaging Health Inf. 2017;7(4):862–6.
Canstein C, Cachot P, Faust A, Stalder AF, Bock J, Frydrychowicz A, Kuffer J, Hennig J, Markl M. 3D MR flow analysis in realistic rapid-prototyping model systems of the thoracic aorta: comparison with in vivo data and computational fluid dynamics in identical vessel geometries. Magn Reson Med. 2008;59(3):535–46.
Niggemann P, Stracke CP, Krings T, Thron A. Bridging vein thrombosis with and without dural sinus involvement. Clin Neuroradiol. 2007;17(1):34–40.
Meder JF, Chiras J, Roland J, Guinet P, Bracard S, Bargy F. Venous territories of the brain. J Neuroradiol. 1994;21(2):118–33.
Renowden S. Cerebral venous sinus thrombosis. Eur Radiol. 2004;14(2):215–26.
XD contributed to the experimental design, analysis and interpretation of data. YZ and FW carried out the experiments and the statistical analysis. XD provided final approval of the version of submitted manuscript. All authors read and approved the final manuscript.
All data generated or analyzed during this study are included in this published article and its additional files.
This study was approved by the Ethics Committee of Anhui Medical University. The cadavers assigned to this project were from those used for research and educational purposes by the department of anatomy in our institution, and were used with permission given by their next-of-kin(s).
The project was funded by the National Natural Science Foundation of China (Reference No: 81200895).
Department of Anatomy, Anhui Medical University, 81 Meishan Road, Hefei, 230032, China
Youyu Zhu, Feng Wang & Xuefei Deng
Correspondence to Xuefei Deng.
Additional file 1. The detail data about the difference between the anterior and posterior segments of bridging vein models.
Zhu, Y., Wang, F. & Deng, X. Hemodynamics of cerebral bridging veins connecting the superior sagittal sinus based on numerical simulation. BioMed Eng OnLine 17, 35 (2018) doi:10.1186/s12938-018-0466-8
Cerebral bridging veins
Cerebral venous thrombosis
Wall shear stress | CommonCrawl |
June 2013, 6(2): 245-268. doi: 10.3934/krm.2013.6.245
Large deviations for the solution of a Kac-type kinetic equation
Federico Bassetti 1, and Lucia Ladelli 2,
Dipartimento di Matematica, Università degli Studi di Pavia, Via Ferrata 1, 27100, Pavia, Italy
Dipartimento di Matematica, Politecnico di Milano, P.zza Leonardo da Vinci 32, 20133, Milanod, Italy
Received October 2012 Revised November 2012 Published February 2013
The aim of this paper is to study large deviations for the self-similar solution of a Kac-type kinetic equation. Under the assumption that the initial condition belongs to the domain of normal attraction of a stable law of index $\alpha < 2$ and under suitable assumptions on the collisional kernel, precise asymptotic behavior of the large deviations probability is given.
Keywords: self-similar solutions, Kac-like equations, large deviations, stable laws.
Mathematics Subject Classification: Primary: 60F10; Secondary: 82C40, 60F0.
Citation: Federico Bassetti, Lucia Ladelli. Large deviations for the solution of a Kac-type kinetic equation. Kinetic & Related Models, 2013, 6 (2) : 245-268. doi: 10.3934/krm.2013.6.245
Log likelihood formula
The log-likelihood is the natural logarithm of the likelihood function, usually written with a lower-case l:

$$\ell(\theta)=\ln L(\theta)=\sum_{i=1}^{n}\ln f(x_i;\theta)$$

for an independent sample x1, …, xn with density or mass function f. Because the logarithm is a strictly increasing function, the maximum of the log-likelihood occurs at the same parameter value as the maximum of the likelihood, so maximizing the log-likelihood (or, equivalently, minimizing the negative log-likelihood) yields the same maximum likelihood estimates. Working on the log scale has two practical advantages: products of densities become sums, which are easier to differentiate, and the numerical underflow caused by multiplying many probabilities smaller than one is avoided. Multiplicative constants that do not involve the parameter, such as the binomial coefficient in a binomial likelihood, become additive constants and can be dropped without changing the location of the maximum.

For a normal sample with mean μ and variance σ^2 the log-likelihood is

$$\ell(\mu,\sigma^2)=-\frac{n}{2}\ln\left(2\pi\sigma^2\right)-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2,$$

which is maximized at the sample mean and at $\hat{\sigma}^2=\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2$. For a binomial observation of y successes in n trials, ℓ(p) = constant + y ln p + (n − y) ln(1 − p), giving the maximum likelihood estimate p̂ = y/n; for example, 265 survivals observed in 284 duck-weeks give p̂ = 265/284 ≈ 0.933. The vector of first partial derivatives of the log-likelihood, u(θ) = ∂ℓ/∂θ, is called the score function, and maximum likelihood estimates are found by setting the score to zero, either analytically or numerically (for example by Newton's method or gradient-based optimization when, as in logistic regression, no closed form exists).

The log-likelihood also underlies the standard model-comparison tools. Statistical packages often report −2 log L (SAS prints it as "-2 LOG L") because the likelihood-ratio statistic, −2 times the difference between the log-likelihoods of two nested models, is approximately chi-square distributed in large samples. The related goodness-of-fit statistic is G2 = 2 Σ O ln(O/E), summed over cells with observed counts O and expected counts E. In classification models, maximizing the log-likelihood is equivalent to minimizing the cross-entropy loss, and in phylogenetics the likelihood of a tree is the product of the per-site likelihoods, so the log-likelihoods of the individual sites are summed to give the log-likelihood of the whole tree.
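As a minimal illustration of these formulas, the sketch below evaluates the normal log-likelihood and its maximum likelihood estimates; the sample values are made up purely for demonstration.

```python
# Minimal sketch: normal log-likelihood and its MLEs (illustrative data only).
import numpy as np

def normal_loglik(x, mu, sigma2):
    """ln L(mu, sigma^2) for an i.i.d. normal sample x."""
    n = len(x)
    return -0.5 * n * np.log(2 * np.pi * sigma2) - np.sum((x - mu) ** 2) / (2 * sigma2)

x = np.array([4.2, 5.1, 3.8, 4.9, 5.4, 4.4])      # made-up sample
mu_hat = x.mean()                                  # MLE of mu
sigma2_hat = np.mean((x - mu_hat) ** 2)            # MLE of sigma^2 (divide by n, not n - 1)

ll = normal_loglik(x, mu_hat, sigma2_hat)
print("mu_hat =", mu_hat, " sigma2_hat =", sigma2_hat)
print("max log-likelihood =", ll, " -2 log L =", -2 * ll)
```

Any parameter values other than the closed-form estimates give a strictly smaller value of normal_loglik, which is easy to check by perturbing mu_hat or sigma2_hat.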
What is the reason behind the phenomenon of Joule-Thomson effect?
For an ideal gas there is no heating or cooling during an adiabatic expansion or contraction, but for real gases, an adiabatic expansion or contraction is generally accompanied by a heating or cooling effect. What is the reason behind such a phenomenon? Is it related to the property of real gases or is it something else?
physical-chemistry thermodynamics
Gaurang Tandon
J_B892
In a reversible adiabatic expansion or compression, the temperature of an ideal gas does change.
In a Joule-Thomson type of irreversible adiabatic expansion (e.g., in a closed container), the internal energy of the gas does not change. For an ideal gas, its internal energy depends only on its temperature. So, for an irreversible adiabatic expansion of an ideal gas in a closed container, its temperature does not change. But, the internal energy of a real gas depends not only on its temperature but also on its specific volume (which increases in an expansion). So, for a real gas, its temperature changes. The Joule-Thomson effect is one measure of the deviation of a gas from ideal gas behavior.
This addresses a comment from the OP regarding the effect of specific volume on the internal energy of a real gas.
Irrespective of the Joule-Thomson effect, one can show (using a combination of the first and second laws of thermodynamics) that, for a pure real gas, liquid, or solid (or one of constant chemical composition), the variation of specific internal energy with respect to temperature and specific volume is given by: $$dU=C_vdT-\left[P-T\left(\frac{\partial P}{\partial T}\right)_V\right]dV$$ The first term describes the variation with respect to temperature and the second term describes the variation with respect to specific volume. For an ideal gas, the second term is equal to zero. However, for a real gas, the second term is not equal to zero, and that means that, at constant internal energy (as in the Joule-Thomson effect), the temperature will change when the specific volume changes. This is a direct result of the deviation from ideal gas behavior.
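As a concrete illustration, suppose the gas obeys a van der Waals equation of state, $$P=\frac{RT}{V-b}-\frac{a}{V^2}$$ (this is an assumed model, chosen only to make the bracketed term explicit). Then
$$\left(\frac{\partial P}{\partial T}\right)_V=\frac{R}{V-b},\qquad P-T\left(\frac{\partial P}{\partial T}\right)_V=-\frac{a}{V^2},$$
so the general relation above reduces to
$$dU=C_vdT+\frac{a}{V^2}dV.$$
At constant internal energy an expansion ($dV>0$) therefore forces $dT<0$, i.e. the gas cools, and the effect disappears in the ideal-gas limit $a\to 0$.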
Chet Miller
Could you elaborate on the internal energy dependency of real gases in the Joule-Thomson effect? – J_B892 Mar 21 '18 at 8:18
See my Addendum. – Chet Miller Mar 21 '18 at 12:15
It's my understanding that, in a Joule-Thomson expansion, the internal energy can change, and what stays constant is the enthalpy, i.e., U + PV. – theorist Jan 10 '19 at 22:23
@theorist There are actually two versions of JT. One is the version you referred to involving steady flow through a porous plug or valve. The other version is a closed system containing two chambers separated by a partition. The initial pressures in the two chambers are unequal, and the partition is either totally removed or punctured. In this case, the total internal energy is constant. – Chet Miller Jan 10 '19 at 23:35
I believe what you were initially describing is typically referred to as a Joule expansion, as distinguished from a Joule-Thomson expansion. At least that's how I've always seen the two distinguished (e.g., www-thphys.physics.ox.ac.uk/people/AlexanderSchekochihin/A1/…) (though that author really shouldn't be putting deltas in front of W or Q). – theorist Jan 11 '19 at 1:27
Monitoring of nutrient limitation in growing E. coli: a mathematical model of a ppGpp-based biosensor
Alexandra Pokhilko ORCID: orcid.org/0000-0001-6565-65511
E. coli can be used as bacterial cell factories for production of biofuels and other useful compounds. The efficient production of the desired products requires careful monitoring of growth conditions and the optimization of metabolic fluxes. To avoid nutrient depletion and maximize product yields we suggest using a natural mechanism for sensing nutrient limitation, related to biosynthesis of an intracellular messenger - guanosine tetraphosphate (ppGpp).
We propose a design for a biosensor, which monitors changes in the intracellular concentration of ppGpp by coupling it to a fluorescent output. We used mathematical modelling to analyse the intracellular dynamics of ppGpp, its fluorescent reporter, and cell growth in normal and fatty acid-producing E. coli lines. The model integrates existing mechanisms of ppGpp regulation and predicts the biosensor response to changes in nutrient state. In particular, the model predicts that excessive stimulation of fatty acid production depletes fatty acid intermediates, downregulates growth and increases the levels of ppGpp-related fluorescence.
Our analysis demonstrates that the ppGpp sensor can be used for early detection of nutrient limitation during cell growth and for testing productivity of engineered lines.
The efficient production of biofuels, recombinant proteins and other useful compounds in E. coli cells requires the optimization of metabolic fluxes and growth conditions [1,2,3]. The uncontrolled consumption of essential nutrients might cause early cessation of growth and reduction of product yields [1, 2]. Interestingly, cells possess a natural mechanism of sensing of nutrient limitation related to the production of the second messenger guanosine tetraphosphate, ppGpp (an "alarmone"). This signalling pathway might be useful for the control of biotechnological processes. ppGpp is an early sensor of nutrient limitation, which directly controls bacterial growth by binding to RNA polymerase bound to ribosomal RNA (rRNA) gene promoters P1 and P2 [4,5,6,7,8,9,10]. Binding of ppGpp decreases the life time of short lived open complexes that RNA polymerase forms with P1 and P2 promoters. By inhibiting RNA polymerase activity ppGpp adjusts rRNA biosynthesis to available nutrient levels [4,5,6, 11, 12]. Regulation of the P1 and P2 promoters by ppGpp is fast (minutes) and covers a wide dynamic range of P1, P2 activities [6, 11]. This suggests the possibility of transmitting changes in ppGpp concentrations into a P1, P2-coupled fluorescent output, which would allow continuous monitoring of transcriptional activity of ppGpp inside the cells. This approach is different from previously developed chemosensors, which were used for end time-point measurements of ppGpp concentration in bacterial lysates [13,14,15]. The practical application of these chemosensors was limited due to small spectral shift in their fluorescence upon ppGpp binding and the requirement to synthesise these complicated organic compounds [13, 16, 17]. Using the natural mechanism of ppGpp sensing via modulation of P1, P2 activity should overcome the limitations of these previously designed ppGpp chemosensors. Here we use mathematical modelling to analyse the intracellular kinetics of ppGpp and to design a ppGpp-based biosensor that reports ppGpp concentration and thus serves to indicate poor intracellular nutritional status. Next we explore the capacity of the biosensor to respond to dynamic changes in intracellular nutrient state during batch growth of fatty acid-producing and non-producing E. coli cells.
Fatty acids (FAs) are potential biofuels, which can be synthesised by engineered E. coli lines overexpressing a thioesterase (Tes) enzyme. Tes hydrolyses the thioester bond in Acyl-ACP molecule [2, 18, 19]. Acyl-ACP (long-chain FA linked to activated acyl carrier protein, hereafter simply called ACP) is the primary product of fatty acid synthesis (FAS) [10, 20, 21], which is naturally used by cells for phospholipid (PL) production (Fig. 1). Additionally to FA synthesis, Acyl-ACP can be diverted for the production of other types of biofuels, such as long chain alkyl esters and alkanes in engineered lines [22]. Alkanes represent the most desirable biofuel, with the highest energy density; however, attempts to engineer alkane-producing organisms have been hampered by low yields and high contamination with fatty alcohols [23, 24]. It was proposed that it might be more practical to use chemical production of alkanes from FA, because of the much higher yields of FA in cells, and economy of cellular resources which would otherwise be required for expression of alkane-synthesising pathways [2]. Therefore the production of FAs in Tes-overexpressing (Tes-ox) lines of E. coli represents an important technological step in the biosynthesis of alkanes [23]. However, lines with high Tes levels have decreased FA yields and growth, which might be related to depletion of the Acyl-ACP pool required for membrane biosynthesis [2]. It was suggested that the consumption of Acyl-ACP for the production of Acyl-ACP-derived products should be carefully monitored in order to achieve high yields [19, 22]. A natural mechanism of sensing Acyl-ACP depletion is mediated by the accumulation of ppGpp due to decreased activity of the ppGpp hydrolase SpoT [5, 10]. This suggests that a ppGpp biosensor might be used for diagnostics of the productivity of FA-producing lines.
The scheme for ppGpp signalling, FA production and ppGpp sensor included in the model. The left colour box illustrates the relationships between ppGpp and growth. ppGpp accumulates at the end of exp. growth phase [6, 8], marked by asterisk. This is described in the model through a depletion of exp. phase-limiting nutrient lim (e.g., main carbohydrate). Increase of ppGpp inhibits the rRNA biosynthesis from P1/P2 promoter [6, 25], which slows down the growth [12]. The termination of growth in stationary (stat) phase was described in the model by the depletion of growth-supporting nutrient nutr (e.g., secondary carbohydrate) (double asterisk). The decrease of ppGpp concentrations in stat phase [6, 8] was described in the model through downregulation of ribosome-mediated synthesis of ppGpp [5]. The right colour box illustrates the relationship between ppGpp and fatty acids. In normal cells Acyl-ACP product of the fatty acid synthesis (FAS) is consumed for membrane PL synthesis (PLS). But in FA-producing cells Acyl-ACP is diverted for the synthesis of FA by thioesterase (Tes) enzyme (orange) [22]. Excessive production of FA leads to Acyl-ACP depletion, which stimulates accumulation of ppGpp [10]. In addition to inhibiting growth, ppGpp inhibits PLS flux (through inhibition of the key enzyme PlsB, [26]), causing transient accumulation of Acyl-ACP, which downregulates FAS flux through a feedback inhibition of key FAS enzymes by Acyl-ACP [20, 26]. FAS and PLS fluxes are also inhibited in stat phase due to decrease in protein synthesis at low rib [22]. The growth is additionally inhibited by PLS decrease [27]. The ppGpp-mediated regulations are shown in blue. The bottom colour box shows the proposed ppGpp-based biosensor. It includes the expression of transcriptional inhibitor I from the P1/P2 promoter and repression of GFP expression by I
Nutrient depletion increases ppGpp levels through various mechanisms [5]. Thus, amino acid starvation upregulates ppGpp synthase RelA, while carbohydrate and fatty acid starvation downregulates ppGpp hydrolase SpoT [5]. Therefore, nutrient (in particular, carbohydrate) limitation at the end of fast exponential growth phase (hereafter called exp) leads to an increase of ppGpp concentration [6, 8]. This in turn slows down the growth. For example, SpoT mutants with high ppGpp levels have reduced P1, P2 activities and slower growth [12]. Additionally to growth phase-specific accumulation of ppGpp, it is expected to be elevated in high FA-producing lines due to Acyl-ACP depletion [5, 10]. Here we model the changes in the levels of ppGpp and ppGpp-based fluorescence (a biosensor) in normal and FA-producing E. coli cells. We explore the potential effects of biosensor parameters on its performance and demonstrate that the sensor might be used for early detection of nutrient limitation during E. coli growth.
Model construction
Here we model changes in the concentrations of ppGpp and its fluorescent reporter GFP (the biosensor) in E. coli cells. We consider 2 types of nutrient limitation, which might increase intracellular ppGpp levels: growth phase-specific or non-specific (Fig. 1). The first type is related to a depletion of growth-limiting nutrient (e.g., carbohydrates) in the growth medium at the end of exp. growth phase. The second type is related to a depletion of essential molecules inside a cell due to their excessive diversion into the production of biotechnological products (e.g., depletion of Acyl-ACP during fatty acid (FA) production; [2], Fig. 1). We use the model to describe and explain the existing data on ppGpp and growth in normal and FA-producing cells ([2, 6]; Results) and to explore the applicability of a ppGpp biosensor for the control of growth conditions and optimization of FA production. The parameters of the model were estimated based on available data as described in Additional file 1. The analysis of parameter sensitivity demonstrated that the model is robust to parameter variation (Additional file 1: Fig. S1). The model consists of 9 ordinary differential equations presented below.
The processes included in the model are summarised on Fig. 1 and described below. Briefly, we model the mutual relationship between ppGpp and growth phases. The increase of ppGpp concentration [6, 8] at the end of exp. growth was described by the depletion of exp. phase-limiting nutrient lim (e.g., main carbohydrate, see Results). The decrease in the growth rate during exp. to stat phase transition is described through ppGpp-mediated decrease of ribosomal synthesis [6, 25]. The termination of growth in stat phase is described by the depletion of a second, growth-supporting nutrient nutr (e.g., secondary carbohydrate, which is released into the medium during exp. phase; Results).
The model also describes ppGpp accumulation due to a depletion of Acyl-ACP product of FAS during FA production (Fig. 1). The accumulation of ppGpp inhibits membrane phospholipids synthesis (PLS), FAS, and growth through several parallel mechanisms in our model (Fig. 1). This includes inhibition of the key PLS enzyme PlsB by ppGpp [26], causing transient accumulation of Acyl-ACP and inhibition of FAS flux by Acyl-ACP [20, 26]; downregulation of FAS and PLS fluxes and growth by low ribosomal activity rib (relative number of active ribosomes; Fig. 1); and inhibition of growth by decreased PLS flux in high FA-production lines [27].
In addition to ppGpp, FA and growth, the model analyses the kinetics of GFP reporter of ppGpp in FA-producing and non-producing lines. The ppGpp reporter is designed using the natural property of ppGpp to inhibit transcription from P1 and P2 promoters, responsible for rRNA biosynthesis (Fig. 1; [6, 11, 25]). P1 and P2 promoters are regulated in a similar way in E. coli cells [6, 11], so we considered a tandem of the ribosomal P1 and P2 promoters as a single entity (hereafter called the P1/P2 promoter) in our model. To transmit the increase of ppGpp levels to increase of a fluorescent signal, an artificial inhibitor I (e.g., tet-repressor TetR or lac repressor LacI) is expressed from the P1/P2 promoter. Expression of GFP protein is from an I-repressible promoter (e.g., TetR- or LacI-inhibited promoter; Fig. 1). The use of rapidly degraded variants of I and GFP proteins ensures that the biosensor reports dynamic changes in the metabolic state of the cells [28, 29], as we discuss in Results. The levels of intracellular compounds are determined in our model by their synthesis and degradation. Their dilution due to cell division was ignored due to its slow rate (less than 0.01 min−1, [8]).
The model equations are presented below.
Acyl-ACP and FA production
The intracellular kinetics of the key intermediate of fatty acid metabolism Acyl-ACP is described in our model as:
$$ \frac{dAcyl\hbox{-} ACP}{dt}={V}_{FAS}-{V}_{PLS}-{k}_{fa}\cdot {V}_{FA} $$
(1)
Where V FAS and V PLS are the intracellular rates of FAS and PLS (in μM/min); V FA is the rate of FA production in a cell culture, normalized to cell number N (in mg/l/min/OD); and k fa is a volume coefficient for re-calculation of the V FA rate into intracellular units of μM/min (Additional file 1).
The rate of FAS is described by a simplified lumped equation, which includes the Michaelis-Menten dependence of the FAS rate on the concentrations of its substrates (AcCoA and ACP protein) and feedback inhibition of the FAS rate by the Acyl-ACP product [26]:
$$ {V}_{FAS}={V}_{m- FAS}\cdot rib\cdot \frac{ACP}{ACP+{K}_{m- ACP}}\cdot \frac{AcCoA}{AcCoA+{K}_{m- AcCoA}}\cdot \frac{1}{1+ Acyl\hbox{-} ACP/{K}_{i- Acyl- ACP}} $$
(1')
Where AcCoA and ACP are concentrations of acetyl-CoA and free active ACP protein; Acyl-ACP is the total concentration of long-chain acyl product of FAS.
We assumed that FAS and PLS fluxes and growth rate are proportional to the ribosomal activity rib due to the dependence of the protein synthesis on rib.
PLS rate is assumed to be determined by the rate of the first committed enzyme, PlsB, with Michaelis-Menten dependence on Acyl-ACP substrate and inhibition by ppGpp [19, 26]:
$$ {V}_{PLS}={V}_{m- PLS}\cdot rib\cdot \frac{Acyl\hbox{-} ACP}{Acyl\hbox{-} ACP+{K}_{m- Acyl- ACP}}\cdot \frac{1}{1+{\left( ppGpp/{K}_{i- ppGpp}\right)}^n} $$
(1'')
The use of the Hill function for the inhibition of PLS by ppGpp is motivated by existing data on multiple levels of negative regulation of PlsB and related enzymes by ppGpp [22].
The rate of FA production is described by Michaelis-Menten kinetics of thioesterase Tes, which hydrolyses thioester bond in the molecule of Acyl-ACP and releases free FA:
$$ {V}_{FA}={V}_{tes}\cdot \frac{Acyl\hbox{-} ACP}{Acyl\hbox{-} ACP+{K}_{m- tes}} $$
(1''')
The activity of Tes (V Tes ) depends on Tes concentration. In simulated Tes-ox and normal lines V Tes values were estimated from the measured FA yields ([2]; see Results).
The rate of the total FA production by a cell culture is described as:
$$ \frac{dFA}{dt}=N\cdot {V}_{FA} $$
(2)
where N is a cell number.
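To make the rate expressions above concrete, a small Python sketch of eqs. (1')–(1''') is given below; all kinetic constants are illustrative placeholders, since the fitted values are given in Additional file 1 rather than here.

```python
# Sketch of the lumped rate laws V_FAS (eq. 1'), V_PLS (eq. 1'') and V_FA (eq. 1''').
# All V_m, K_m and K_i values below are illustrative placeholders, not the fitted values.
def v_fas(acyl_acp, acp, ac_coa, rib,
          vm=1.0, km_acp=1.0, km_accoa=50.0, ki_acyl=5.0):
    """FAS flux: Michaelis-Menten in ACP and AcCoA, feedback inhibition by Acyl-ACP."""
    return (vm * rib * acp / (acp + km_acp)
            * ac_coa / (ac_coa + km_accoa)
            / (1.0 + acyl_acp / ki_acyl))

def v_pls(acyl_acp, ppgpp, rib, vm=1.0, km_acyl=1.0, ki_ppgpp=1.0, n=2):
    """PLS (PlsB) flux: Michaelis-Menten in Acyl-ACP, Hill-type inhibition by ppGpp."""
    return (vm * rib * acyl_acp / (acyl_acp + km_acyl)
            / (1.0 + (ppgpp / ki_ppgpp) ** n))

def v_fa(acyl_acp, v_tes=0.5, km_tes=1.0):
    """Tes-catalysed release of free FA from Acyl-ACP."""
    return v_tes * acyl_acp / (acyl_acp + km_tes)

# Example evaluation at arbitrary concentrations (uM) and ribosomal activity rib = 0.8:
print(v_fas(2.0, 5.0, 100.0, 0.8), v_pls(2.0, 0.5, 0.8), v_fa(2.0))
```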
Our model describes the kinetics of cell growth (cell number, N) [2, 6], which affects nutrient levels and FA yields. The accumulation of ppGpp at the end of exp. phase (Fig. 1) was described by the depletion of a limiting nutrient (variable lim, eqs. 5, 6, see below). Increase of ppGpp in turn leads to decrease of the ribosomal activity (eqs. 7, 7′) and inhibition of growth (eq. 3, Fig. 1) during exp. to stationary (stat) phase transition. The depletion of a second, growth-supporting nutrient (variable nutr; eq. 4) determines the cessation of growth and entrance to stat phase. Additionally, we assumed that a certain minimal rate of phospholipid synthesis (V 0 ) is required to sustain growth [27]. In addition to limited production of total membrane PL, the membrane composition might be unbalanced in high Tes-ox lines due to a higher proportion of unsaturated FA, which was suggested to be a key factor of FA toxicity and growth limitation in high Tes-ox lines [30]. In our model, the growth limitations in high Tes-ox lines [2] were collectively accounted for by restricting the growth rate at low rates of PLS (V PLS ) (eq. 3′ below). The cell growth is described as:
$$ \frac{dN}{dt}={v}_g\cdot N $$
(3)
$$ {v}_g={K}_{gr}\cdot nutr\cdot rib\cdot {V}_{PLS}/\left({V}_{PLS}+{V}_0\right) $$
(3')
Where v g is the growth rate (in min−1) and N is a cell number, expressed in units of OD measured at a wavelength of 600 nm.
The kinetics of nutr and lim depletion was assumed to be proportional to cell number N:
$$ \frac{dnutr}{dt}=-{k}_{nutr}\cdot N\cdot \frac{nutr}{nutr+0.001} $$
(4)
$$ \frac{d\mathit{\lim}}{dt}=-{k}_{lim}\cdot N\cdot \frac{\mathit{\lim}}{\mathit{\lim}+0.001} $$
(5)
where nutr and lim are relative amounts of the growth-supporting and exp. phase-limiting nutrients respectively, initially both set to 1. To avoid negative values of nutr and lim, their depletion is restricted when their concentrations reach levels of 0.001.
ppGpp kinetics
The kinetics of ppGpp is determined by the balance between its synthesis and hydrolysis [5]:
$$ \frac{dppGpp}{dt}={k}_{+ ppGpp}\cdot rib- ppGpp\cdot \left({k}_{- ppGpp}\cdot \mathit{\lim}+k{0}_{- ppGpp}\right)\cdot \frac{Acyl\hbox{-} ACP}{Acyl\hbox{-} ACP+{K}_{m- Ac\_ pp}} $$
(6)
Here ppGpp, rib and lim are the levels of ppGpp, ribosome activity and limiting nutrient lim, respectively. Several molecular mechanisms are integrated during ppGpp synthesis and degradation. ppGpp is synthesised on ribosomes [5, 6], therefore in our model we assumed that the synthesis rate of ppGpp is proportional to the number of active ribosomes rib [5, 6], with the rate constant k +ppGpp . We next assumed that depletion of the lim nutrient at the end of the exp. phase slows down ppGpp hydrolysis (rate constant k -ppGpp ), presumably via the inhibition of ppGpp hydrolase SpoT [31, 32]. This results in the increase of ppGpp concentration at the end of exp. phase in our model ([6, 8], Fig. 1). The background hydrolysis of ppGpp in the absence of lim is accounted for by the rate constant k0 -ppGpp . Finally, ppGpp levels are upregulated in our model by the depletion of Acyl-ACP, due to the inactivation of SpoT hydrolase activity [5, 10, 21, 33].
Ribosomal and P1/P2 promoter activities
The equation for the ribosomal activity (relative number of active ribosomes) rib is:
$$ \frac{drib}{dt}={k}_{+ rib}\cdot P1P2-{k}_{- rib}\cdot rib $$
(7)
where P1P2 and rib are the relative activities of the P1/P2 promoter and the ribosomes. The rib synthesis is determined by the rate of rRNA transcription from the P1/P2 promoter [6, 11, 34]; therefore, we assume a linear dependence of rib synthesis rate on P1/P2 activity.
Based on the existing data we assumed that during cell growth transcription from P1/P2 promoter is regulated by ppGpp inhibition [6, 11, 25]. P1 and P2 show similar responses to changes in nutrient levels, but the tandem of P1 and P2 promoters (P1/P2) shows a stronger response compared to single P1 and P2 promoters [11]. This was accounted for in the model by using a Hill coefficient m = 2 in the equation for P1/P2. The data [6, 11] suggest that P1/P2 activity quickly (in minutes) responds to changes in ppGpp concentration. Therefore transcriptional activity of P1/P2 was expressed via ppGpp by the algebraic equation:
$$ P1P2=\frac{1}{1+{\left( ppGpp/{k}_{i-P1P2- ppGpp}\right)}^m} $$
ppGpp sensor
ppGpp sensing was implemented through the inhibition of GFP expression by the inhibitor I, which is expressed from the P1/P2 promoter (Fig. 1). Since the abundance of most proteins changes much more slowly than the abundance of their mRNAs [28], we assumed that the amount of I mRNA is simply proportional to the transcriptional activity of P1/P2 promoter, so that the kinetics of protein I is described by the following equation:
$$ \frac{dI}{dt}={k}_I\cdot \left(P1P2-I\right) $$
where I is the relative amount of the inhibitor I (changing from 0 to 1) and k I is a rate constant of protein I degradation. The rate constant of protein I synthesis was assumed to be equal to k I to achieve the maximal level of I = 1.
The amount of GFP mRNA is determined by the amount of the inhibitor I. The equation for the relative amount of fluorescent GFP protein is:
$$ \frac{dGFP}{dt}={k}_{GFP}\cdot \left(\frac{1}{1+{\left(I/{Ki}_I\right)}^l}- GFP\right) $$
where GFP is the relative amount of GFP fluorescence (changing from 0 to 1) and k GFP is the rate constant of GFP protein degradation. The rate constant of GFP synthesis was assumed to be equal to k GFP to achieve the maximal level of GFP = 1. A Hill coefficient l = 4 accounts for the formation of tetrameric inhibitor complex (e.g., lacR) on a double-stranded DNA [35].
The constant Ki I for inhibition of GFP expression by the relative amount of the inhibitor I integrates two unknown parameters of the system: the absolute expression level of I and its inhibition strength. Ki I was varied as discussed in Results, with the optimal value of Ki I = 0.1. The system of ODEs was solved using MATLAB, integrated with the stiff solver ode15s (The MathWorks UK, Cambridge). The MATLAB code of the model is provided in Additional file 1.
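For readers who want to experiment with the model structure, the sketch below reproduces the equations above in Python/SciPy rather than MATLAB. It is a minimal illustration only: the rate constants are hypothetical placeholders (not the fitted values from Additional file 1), and V PLS and Acyl-ACP are held constant instead of being computed from the metabolic part of the model.

from scipy.integrate import solve_ivp

# Hypothetical parameter values, for illustration only
p = dict(K_gr=0.02, V_PLS=1.0, V0=0.1, k_nutr=0.005, k_lim=0.01,
         k_plus_ppGpp=0.05, k_minus_ppGpp=5.0, k0_minus_ppGpp=0.05,
         Km_Ac=0.1, AcylACP=1.0, k_rib=0.05, Ki_P1P2=0.3, m=2,
         k_I=0.05, k_GFP=0.1, Ki_I=0.1, l=4)

def rhs(t, y):
    N, nutr, lim, ppGpp, rib, I, GFP = y
    P1P2 = 1.0 / (1.0 + (ppGpp / p['Ki_P1P2']) ** p['m'])               # P1/P2 activity
    v_g = p['K_gr'] * nutr * rib * p['V_PLS'] / (p['V_PLS'] + p['V0'])  # growth rate
    dN = v_g * N
    dnutr = -p['k_nutr'] * N * nutr / (nutr + 0.001)
    dlim = -p['k_lim'] * N * lim / (lim + 0.001)
    dppGpp = (p['k_plus_ppGpp'] * rib
              - ppGpp * (p['k_minus_ppGpp'] * lim + p['k0_minus_ppGpp'])
              * p['AcylACP'] / (p['AcylACP'] + p['Km_Ac']))
    drib = p['k_rib'] * P1P2 - p['k_rib'] * rib
    dI = p['k_I'] * (P1P2 - I)
    dGFP = p['k_GFP'] * (1.0 / (1.0 + (I / p['Ki_I']) ** p['l']) - GFP)
    return [dN, dnutr, dlim, dppGpp, drib, dI, dGFP]

y0 = [0.01, 1.0, 1.0, 0.05, 1.0, 1.0, 0.0]   # N (OD), nutr, lim, ppGpp, rib, I, GFP
sol = solve_ivp(rhs, (0.0, 1500.0), y0, method='LSODA', rtol=1e-8)   # stiff-capable solver
print(sol.y[0, -1], sol.y[3, -1], sol.y[6, -1])   # final OD, ppGpp and GFP levels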
E. coli cells are commonly used as cell factories for the production of useful products, such as FA [1, 2, 18]. However, the redirection of nutrients from essential metabolic pathways often slows down cell growth and limits product yields [1,2,3]. To control the growth conditions during biotechnological applications of E. coli cells, we propose to use a biosensor, which couples changes in ppGpp concentrations with a fluorescent GFP-based output (Fig. 1). An increase in GFP levels would allow detection of nutrient limitation, which could be used for controlling and optimizing growth conditions. To simulate the intracellular dynamics of ppGpp during batch cultivation of E. coli cells, we built a mathematical model, which integrates literature data on ppGpp regulation. The model describes the interrelationship between ppGpp, growth and FA production, and simulates the kinetics of the GFP reporter of ppGpp, as summarised in the legend of Fig. 1 and discussed below. We used the model to describe and explain the existing data on ppGpp and growth in normal and FA-producing lines [2, 6] and analysed the applicability of the sensor for monitoring ppGpp concentration under various conditions.
ppGpp kinetics in normal (non-FA-producing) E. coli cells
Cell growth in batch cultures is widely used in biotechnology, and growth can be described by three main phases [36, 37]. The first phase, exp. growth, is characterized by an exponential increase of the cell number (Fig. 2a). During the second phase, growth gradually slows down as cells move from exp. to stat phase (Fig. 2a). The third, stat phase is characterized by the absence of growth (Fig. 2a). The existing data suggest that the end of exp. phase coincides with an increase of ppGpp concentration ([6, 8], Fig. 2a). Therefore, the slowing down of growth in phase 2 appears to be a consequence of the ppGpp increase, presumably related to nutrient depletion during exp. phase. However, the cell number keeps increasing during the second phase (Fig. 2a), and both the first and second phases are characterized by high accumulation of biotechnological products, such as FA in engineered lines [2, 3, 18]. This suggests that despite the depletion of an exp. phase-limiting nutrient (called lim in our model), the medium has a sufficient amount of other nutrients to sustain growth. To describe the observed kinetics of growth and ppGpp, we implemented a two-factor depletion scheme in our model (Fig. 1). We assumed that sequential depletion of two nutrient factors lim and nutr determines the growth phases. The observed increase of ppGpp concentration at the end of phase 1 was explained in our model by the inhibition of ppGpp hydrolysis upon depletion of lim [31, 32] (Fig. 2a). The lim factor might be represented by a carbohydrate nutrient (e.g. glycerol for [2, 6] conditions) or some other components of the medium, whose depletion limits exp. growth. The decline of growth rate during phase 2 is explained in our model through ppGpp-mediated inhibition of rRNA transcription from the P1/P2 promoter (Fig. 1), in agreement with the P1/P2 data (Fig. 2a; [6]). The termination of phase 2 is described in our model through the depletion of a second nutrient nutr, causing the cessation of growth and entrance to stat phase (Fig. 2a). nutr might represent a secondary carbohydrate source, such as acetate, which is released into the medium during phase 1. Alternatively, depletion of other nutrients might limit cellular growth during the two phases. The nature of the limiting factor might vary between different experiments [1, 36, 37]. However, our two-factor phenomenology provides a good fit to growth data [2] (see below). Notably, after reaching its maximum, ppGpp concentration decreases during phase 2 (Fig. 2a; [6, 8]). The model explains this by a decrease in ribosome-mediated synthesis of ppGpp during phase 2 (Figs. 1 and 2a; [6, 38,39,40]).
The kinetics of model variables during batch growth of normal, non-FA producing lines of E. coli cells. a. The blue line and symbols show a growth curve (OD units) in the model and published data [6]. Three phases of growth are indicated by black vertical lines: 1-exp. phase, 2-exp.-to-stat transition phase, 3-stat phase. ppGpp concentrations in the model and published data [6] are shown by the red line and symbols respectively. The end of phase 1 coincides with an increase of ppGpp concentration and the depletion of limiting nutrient lim (green dotted line). The end of phase 2 coincides with the depletion of the growth-supporting nutrient nutr (green solid line). The transcriptional activity of P1/P2 promoter (normalized to maximum) is shown by black solid line and black symbols for the model and the data [6] respectively. The relative ribosomes number is shown by the orange line. The data on ppGpp kinetics were quantified based on available measurements of ppGpp concentrations [8]. b. The kinetics of Acyl-ACP, ppGpp and FAS/PLS flux are shown by black, red and blue colours respectively
The model also predicts the kinetics of Acyl-ACP accumulation during cell growth. Acyl-ACP is produced by FAS and consumed for PL and FA synthesis (Fig. 1). In normal lines the production of free FA is negligible [18]; therefore FAS and PLS fluxes are equal. In our model, FAS and PLS fluxes are downregulated by ppGpp at the beginning of phase 2. Thus, PLS is inhibited by ppGpp directly, which leads to transient accumulation of Acyl-ACP and resulting inhibition of FAS by Acyl-ACP (Figs. 1 and 2b; [19, 20, 26, 41]). In addition, FAS and PLS fluxes are downregulated by low ribosome activity rib, which affects protein synthesis and thus metabolic fluxes in our model (Fig. 1). Therefore, the model predicts a transient accumulation of Acyl-ACP due to the inhibition of PLS by ppGpp at the beginning of phase 2, followed by a decrease of Acyl-ACP levels due to a drop of the FAS rate (Fig. 2b).
Sensing of ppGpp levels in normal E. coli cells. Response time of GFP fluorescent output
The synthesis of ppGpp, followed by the fast (~minutes) ppGpp-mediated inhibition of the transcription from P1/P2 promoter provides an early sensing of nutrient limitation in bacterial cells (Fig. 1; [6, 11]). Based on this natural mechanism, in our biosensor we express the inhibitor I (e.g., TetR or LacI) of GFP expression under the control of the P1/P2 promoter (Fig. 1). An increase of ppGpp downregulates the transcription from P1/P2, leading to a decrease of I levels and expression of GFP protein. The response of the sensor to dynamic changes in ppGpp concentrations can be accelerated by using rapidly degraded versions of I and GFP proteins [28, 29], as discussed below.
The model demonstrates that the double negative regulation of GFP expression by ppGpp (Fig. 1) results in a strong correlation between steady state levels of ppGpp and GFP (Fig. 3a). This suggests that GFP can be used as a reporter of ppGpp concentration over a wide range of observable ppGpp concentrations (0.05–1 mM; [6, 8]). Since ppGpp inhibits growth, GFP fluorescence indicates the degree of growth inhibition. The ppGpp sensor might be used for monitoring of intracellular metabolic state in various biotechnological applications. In the next section we applied it for diagnostics of FA-producing lines. The sensor can also be used for exploring the nature of the growth-limiting nutrient lim. Thus, the model predicts that if a nutrient limits growth, its restoration at the end of phase 1 would result in a quick decrease of ppGpp-related GFP fluorescence (Fig. 3b). The response time of GFP fluorescence, T0.5,GFP (half-life of GFP after nutrient upshift) depends on the model parameters. The model predicts that accelerated degradation of GFP protein can substantially shorten the GFP response time (Fig. 3c). Therefore, the use of rapidly degraded fluorescent reporters is desirable. Similarly, an increase of the inhibition strength of GFP expression by inhibitor I can also shorten the T0.5,GFP (Fig. 3c). Therefore manipulation of the expression level of I seems to be the most practical approach for optimizing biosensor performance, as further discussed below.
Modelling of the ppGpp-based biosensor in normal, non-FA-producing E. coli lines. a. Dependence of steady state GFP fluorescence on ppGpp concentrations. Each data point was calculated at different ppGpp concentration. b. Timecourses of ppGpp, GFP and lim in the model during nutrient upshift (shown by arrow) at the end of phase 1. c. Dependence of the response time of GFP fluorescence (T 0.5,GFP ) on the fold increase of the rate constant k GFP of GFP protein degradation (black) or fold decrease of the rate constant Ki I of the inhibition of GFP expression by the inhibitor I (blue). T 0.5,GFP was calculated as the time required for 2-fold decrease of GFP fluorescence after the nutrient upshift. Red circle indicates the value of T 0.5,GFP corresponding to k GFP = 0.1 min−1 and Ki I = 0.3 used in a, b
ppGpp kinetics in FA-producing lines of E. coli
We next used the model to analyse the efficiency of FA production in Tes-ox lines of E. coli with different expression levels of Tes [2]. Tes-ox lines were simulated by fitting Tes activities (V Tes ) to the measured FA yields of growing cell cultures (Fig. 4a; [2]). Figure 4a,b shows simulated kinetics of FA accumulation and cell growth in the control line 0 (only endogenous Tes is present) and Tes-ox lines 1, 2, 3 with low, medium and high levels of over-expressed Tes. The model predicts that increased expression of Tes should initially increase FA yields, but only until a certain critical level of Tes, above which a decrease of FA yields is observed (Fig. 4a,c). For example, high-Tes line 3 has 3-fold lower FA yield compared to line 1 (Fig. 4a), which corresponds to experimental observations on lower FA yields in high Tes lines [2]. The model explains the reduction of FA yields in high Tes lines by depletion of Acyl-ACP levels, which is predicted to increase the accumulation of ppGpp (Figs. 1 and 4d). High ppGpp in turn should reduce growth rates, in agreement with the data (Fig. 4b). Additionally, a strong pull of Acyl-ACP towards FA production reduces synthesis of membrane lipids, which further downregulates growth in high Tes lines in our model (Fig. 1).
Modelled kinetics of FA production during batch growth of E. coli cells. Kinetics of FA yields (a, in mg FA per 1 l of cell culture) and growth (b, in OD units) in cell cultures, expressing different amounts of thioesterase Tes. a, b. Solid, dashed, dotted and dashed-dotted lines correspond to the simulated control line 0 and lines 1, 2, 3 (indicated by numbers) with Tes activity of 0.08, 1.6, 19 and 110 mg/l/OD/min respectively. Data points are redrawn from [2] and correspond to lines 0 (red) and 1 (blue) (a) and lines 0, 2, 3 (b). c. Dependence of modelled FA yield of cell cultures after 24 h of growth on the levels of Tes activity. d. Intracellular kinetics of ppGpp (red) and Acyl-ACP (black) in the control line 0 and high Tes line 3, shown by solid and dashed-dotted lines respectively
Using the ppGpp-based biosensor for diagnostics of the efficiency of FA-producing lines
The model predicts that depletion of Acyl-ACP in high-Tes lines should lead to increased ppGpp levels (Figs. 1 and 4d), which downregulate growth and FA yields. In addition, GFP levels are predicted to increase in parallel to ppGpp in control and Tes-ox lines (Fig. 5a). This suggests that elevated GFP levels might serve as an indicator of low efficiency of FA production. Indeed, the peak levels of GFP inversely correlate with FA yields at high Tes levels (Fig. 5b). Therefore, laborious methods of FA quantification in testing the efficiency of Tes-ox lines might be replaced by measurements of GFP fluorescence. The model analysis further suggests that the parameters of the ppGpp sensor affect the steepness of the dependence of GFP fluorescence on Tes levels. In particular, strengthening of the inhibition of GFP expression by I (e.g., by increasing I expression) increases the steepness (Fig. 5b). Therefore, the increased expression of I improves the sensitivity of the sensor to the variations in Tes levels, in parallel to the improvement of GFP response time (Figs. 3c and 5b). However, increased inhibition of GFP expression by I also leads to a reduction of the total level of GFP fluorescence (Fig. 5c). Therefore, there is a trade-off between the rate and the amplitude of the GFP response. This trade-off should be kept in mind during optimisation of the sensor responsiveness: the total GFP signal might become rather low, but it should remain within the detection limit. The level of GFP fluorescence might be further increased by using specially designed bright mutants of GFP with 20–35-fold higher intensity compared to standard GFP [42].
Using the ppGpp sensor for diagnostics of low efficiency of FA-producing lines. a. Simulated GFP fluorescence (green) follows ppGpp kinetics (red) during batch cultivation of E. coli cells; Ki I = 0.1. Solid and dashed-dotted lines correspond to the control line 0 and high Tes line 3 (indicated by numbers) with Tes activity of 0.08 and 100 mg/l/OD/min respectively. b. Dependence of the maximal levels of GFP fluorescence during batch cultivation on the Tes activity. Green solid and dashed green lines correspond to Ki I = 0.1 and Ki I = 0.02 respectively. The black line shows FA yields after 24 h of cell growth, normalized to maximum. c. Kinetics of GFP fluorescence (green) and ppGpp concentrations (red) under increased strength of GFP inhibition (Ki I = 0.02). Line designations are the same as in a
In addition to testing FA productivity, the biosensor might be used for optimization of FA production. To achieve this, a specific version of the sensor might be developed, which includes a feedback downregulating Tes expression at high ppGpp levels, potentially by using ppGpp-sensitive transcription factors.
Existing methods of genetic manipulation should be sufficient to perform the proposed engineering of the biosensor. In particular, recombination with serine integrases might be especially useful, because it allows insertion and removal of DNA fragments, if required [43]. This property of serine integrases is related to their ability to reverse directionality in the presence of recombination directionality factors (RDFs). Therefore, combinations of specific genes, promoters and ribosome-binding sites might be tested to optimize biosensor performance.
Based on existing data we built a mathematical model of intracellular kinetics of ppGpp during batch growth of E. coli cells. To monitor nutrient limitation, we designed a ppGpp biosensor, which couples changes in ppGpp concentrations and P1/P2-mediated transcription to a fluorescent output. The model demonstrates that the biosensor can sense a wide range of intracellular ppGpp concentrations and dynamically respond to perturbations of bacterial metabolism, such as nutrient upshifts and log to stat phase transition. The model predicts that both types of nutrient limitations, either related to nutrient depletion due to the increase of cell number (growth phase-specific) or to high consumption of Acyl-ACP for synthesis of FA in engineered lines (non-growth phase-specific) can be easily sensed by the ppGpp sensor. The use of quickly-degraded variants of GFP protein would increase the responsiveness of ppGpp sensing, which can be additionally adjusted by changing the expression level of the inhibitor of GFP expression. We further demonstrate that the level of ppGpp-dependent fluorescence inversely correlates with FA yields, suggesting that the sensor might be a useful instrument in testing FA productivity of engineered lines of E. coli.
ACP:
Activated acyl carrier protein
Acyl-ACP:
Acyl-FA bound to ACP
exp. phase:
exponential growth phase
FA:
Fatty acid
FAS:
Fatty acid synthesis
GFP:
Green fluorescent protein
P1/P2:
ribosomal RNA gene promoter
PLS:
Phospholipids synthesis
PlsB:
glycerol-3-phosphate acyltransferase
ppGpp:
guanosine tetraphosphate
RelA:
ppGpp synthase
rib :
ribosomal activity (relative number of active ribosomes)
rRNA:
ribosomal RNA
SpoT:
ppGpp hydrolase
stat phase:
stationary growth phase
Tes:
Thioesterase
Tes-ox:
Tes-overexpressing line
Rosano GL, Ceccarelli EA. Recombinant protein expression in Escherichia Coli: advances and challenges. Front Microbiol. 2014;5:172.
Lennen RM, Braden DJ, West RA, Dumesic JA, Pfleger BF. A process for microbial hydrocarbon synthesis: overproduction of fatty acids in Escherichia Coli and catalytic conversion to alkanes. Biotechnol Bioeng. 2010;106(2):193–202.
Zhang X, Li M, Agrawal A, San KY. Efficient free fatty acid production in Escherichia Coli using plant acyl-ACP thioesterases. Metab Eng. 2011;13(6):713–22.
Hauryliuk V, Atkinson GC, Murakami KS, Tenson T, Gerdes K. Recent functional insights into the role of (p) ppGpp in bacterial physiology. Nat Rev Microbiol. 2015;13(5):298–309.
Potrykus K, Cashel M. (p)ppGpp: still magical. Annu Rev Microbiol. 2008;62:35–51.
Murray HD, Schneider DA, Gourse RL. Control of rRNA expression by small molecules is dynamic and nonredundant. Mol Cell. 2003;12(1):125–34.
Murray KD, Bremer H. Control of spoT-dependent ppGpp synthesis and degradation in Escherichia Coli. J Mol Biol. 1996;259(1):41–57.
Buckstein MH, He J, Rubin H. Characterization of nucleotide pools as a function of physiological state in Escherichia Coli. J Bacteriol. 2008;190(2):718–26.
Ochi K. Occurrence of the stringent response in Streptomyces sp. and its significance for the initiation of morphological and physiological differentiation. J Gen Microbiol. 1986;132(9):2621–31.
Battesti A, Bouveret E. Acyl carrier protein/SpoT interaction, the switch linking SpoT-dependent stress response to fatty acid metabolism. Mol Microbiol. 2006;62(4):1048–63.
Murray HD, Gourse RL. Unique roles of the rrn P2 rRNA promoters in Escherichia Coli. Mol Microbiol. 2004;52(5):1375–87.
Sarubbi E, Rudd KE, Cashel M. Basal ppGpp level adjustment shown by new spoT mutants affect steady state growth rates and rrnA ribosomal promoter regulation in Escherichia Coli. Mol Gen Genet. 1988;213(2–3):214–22.
Rhee HW, Lee CR, Cho SH, Song MR, Cashel M, Choy HE, Seok YJ, Hong JI. Selective fluorescent chemosensor for the bacterial alarmone (p)ppGpp. J Am Chem Soc. 2008;130(3):784–5.
Trigui H, Dudyk P, Oh J, Hong JI, Faucher SP. A regulatory feedback loop between RpoS and SpoT supports the survival of legionella pneumophila in water. Appl Environ Microbiol. 2015;81(3):918–28.
Ancona V, Lee JH, Chatnaparat T, Oh J, Hong JI, Zhao Y. The bacterial alarmone (p)ppGpp activates the type III secretion system in Erwinia amylovora. J Bacteriol. 2015;197(8):1433–43.
Zhang P, Wang Y, Chang Y, Xiong ZH, Huang CZ. Highly selective detection of bacterial alarmone ppGpp with an off-on fluorescent probe of copper-mediated silver nanoclusters. Biosens Bioelectron. 2013;49:433–7.
Wang J, Chen W, Liu X, Wesdemiotis C, Pang Y. A mononuclear zinc complex for selective detection of diphosphate via fluorescence ESIPT turn-on. J Mater Chem B, Mater Biology Med. 2014;2(21):3349–54.
Lu X, Vora H, Khosla C. Overproduction of free fatty acids in E. Coli: implications for biodiesel production. Metab Eng. 2008;10(6):333–9.
Handke P, Lynch SA, Gill RT. Application and engineering of fatty acid biosynthesis in Escherichia Coli for advanced fuels and chemicals. Metab Eng. 2011;13(1):28–37.
DiRusso CC, Nystrom T. The fats of Escherichia Coli during infancy and old age: regulation by global regulators, alarmones and lipid intermediates. Mol Microbiol. 1998;27(1):1–8.
Seyfzadeh M, Keener J, Nomura M. spoT-dependent accumulation of guanosine tetraphosphate in response to fatty acid starvation in Escherichia Coli. Proc Natl Acad Sci U S A. 1993;90(23):11004–8.
Janssen HJ, Steinbuchel A. Fatty acid synthesis in Escherichia Coli and its applications towards the production of fatty acid based biofuels. Biotechnol Biofuels. 2014;7(1):7.
Fu WJ, Chi Z, Ma ZC, Zhou HX, Liu GL, Lee CF, Chi ZM. Hydrocarbons, the advanced biofuels produced by different organisms, the evidence that alkanes in petroleum can be renewable. Appl Microbiol Biotechnol. 2015;99(18):7481–94.
Choi YJ, Lee SY. Microbial production of short-chain alkanes. Nature. 2013;502(7472):571–4.
Barker MM, Gaal T, Josaitis CA, Gourse RL. Mechanism of regulation of transcription initiation by ppGpp. I. Effects of ppGpp on transcription initiation in vivo and in vitro. J Mol Biol. 2001;305(4):673–88.
Heath RJ, Jackowski S, Rock CO. Guanosine tetraphosphate inhibition of fatty acid and phospholipid synthesis in Escherichia Coli is relieved by overexpression of glycerol-3-phosphate acyltransferase (plsB). J Biol Chem. 1994;269(42):26584–90.
Cho H, Cronan JE Jr. Defective export of a periplasmic enzyme disrupts regulation of fatty acid synthesis. J Biol Chem. 1995;270(9):4216–9.
Elowitz MB, Leibler S. A synthetic oscillatory network of transcriptional regulators. Nature. 2000;403(6767):335–8.
Houser JR, Ford E, Chatterjea SM, Maleri S, Elston TC, Errede B. An improved short-lived fluorescent protein transcriptional reporter for Saccharomyces Cerevisiae. Yeast. 2012;29(12):519–30.
Rahman Z, Rashid N, Nawab J, Ilyas M, Sung BH, Kim SC. Escherichia Coli as a fatty acid and biodiesel factory: current challenges and future directions. Environ Sci Pollut Res Int. 2016;23(12):12007–18.
Jiang M, Sullivan SM, Wout PK, Maddock JR. G-protein control of the ribosome-associated stress response protein SpoT. J Bacteriol. 2007;189(17):6140–7.
Raskin DM, Judson N, Mekalanos JJ. Regulation of the stringent response is the essential function of the conserved bacterial G protein CgtA in vibrio cholerae. Proc Natl Acad Sci U S A. 2007;104(11):4636–41.
Angelini S, My L, Bouveret E. Disrupting the acyl carrier protein/SpoT interaction in vivo: identification of ACP residues involved in the interaction and consequence on growth. PLoS One. 2012;7(4):e36111.
Nomura M, Gourse R, Baughman G. Regulation of the synthesis of ribosomes and ribosomal components. Annu Rev Biochem. 1984;53:75–117.
Swint-Kruse L, Matthews KS. Allostery in the LacI/GalR family: variations on a theme. Curr Opin Microbiol. 2009;12(2):129–37.
Nystrom T. Stationary-phase physiology. Annu Rev Microbiol. 2004;58:161–81.
Monod J. The growth of bacterial cultures. Annu Rev Microbiol. 1949;3:371–94.
Kaplan R, Apirion D. The fate of ribosomes in Escherichia Coli cells starved for a carbon source. J Biol Chem. 1975;250(5):1854–63.
Deutscher MP. Degradation of stable RNA in bacteria. J Biol Chem. 2003;278(46):45041–4.
Zundel MA, Basturea GN, Deutscher MP. Initiation of ribosome degradation during starvation in Escherichia Coli. RNA. 2009;15(5):977–83.
Heath RJ, Rock CO. Inhibition of beta-ketoacyl-acyl carrier protein synthase III (FabH) by acyl-acyl carrier protein in Escherichia Coli. J Biol Chem. 1996;271(18):10996–1000.
Cormack BP, Valdivia RH, Falkow S. FACS-optimized mutants of the green fluorescent protein (GFP). Gene. 1996;173(1 Spec No):33–38.
Colloms SD, Merrick CA, Olorunniji FJ, Stark WM, Smith MC, Osbourn A, Keasling JD, Rosser SJ. Rapid metabolic pathway assembly and modification using serine integrase site-specific recombination. Nucleic Acids Res. 2014;42(4):e23.
I thank Martin Boocock, Sean Colloms and Marshall Stark for helpful discussions and comments on the manuscript.
This work was supported by Biotechnology and Biosciences Research Council (grant number BB/K003356/1). Funding for open access charge: The University of Glasgow.
The Matlab code of the model is included in additional file 1.
Institute of Molecular, Cell and Systems Biology, University of Glasgow, Glasgow, Scotland, UK
Correspondence to Alexandra Pokhilko.
The author declares that there are no competing interests.
Additional file 1: Figure S1.
Model equations with estimation of the model parameters. Table with parameter values. Analysis of the parameter sensitivity. Model code in Matlab. (DOC 348 kb)
Pokhilko, A. Monitoring of nutrient limitation in growing E. coli: a mathematical model of a ppGpp-based biosensor. BMC Syst Biol 11, 106 (2017). https://doi.org/10.1186/s12918-017-0490-5
ppGpp
Biosensor
Learning to Simplify: Thevenin and Norton Equivalent Circuits
October 01, 2015 by Robert Keim
This article reviews the basics of finding Thevenin and Norton equivalents and discusses how to apply Thevenin's theorem to a practical circuit.
Thevenin and Norton equivalent circuits are fundamental approaches to analyzing both AC and DC circuits. It is important to understand the steps involved in converting a circuit to its Thevenin or Norton equivalent, but more important still is understanding how these techniques can help you to analyze and design actual electronic devices.
Thevenin's theorem states that any circuit composed of linear elements can be simplified to a single voltage source and a single series resistance (or series impedance for AC analysis). Norton's theorem is the same except that the voltage source and series resistance are replaced by a current source and parallel resistance. In this article we will focus on Thevenin's theorem because the voltage-plus-series-resistance model is more intuitive and more applicable to real-life circuit design. Furthermore, it is easy to convert a Thevenin equivalent to a Norton equivalent and vice versa:
To Thevenin from Norton:
$$R_{Th}=R_N$$
$$V_{Th}=(I_N) (R_N)$$
To Norton from Thevenin:
$$R_N=R_{Th}$$
$$I_N=\frac{V_{Th}}{R_{Th}}$$
The basic procedure for finding a Thevenin equivalent circuit is the following: First, determine which nodes in your original circuit will correspond to the Thevenin circuit's two output terminals. Second, modify the original circuit so that there is no load connection between these two nodes (for example, by removing a resistor that now corresponds to a load resistor considered external to the circuit). Then, determine VTh by calculating the voltage across the output terminals. Finally, determine RTh by calculating the equivalent resistance assuming all independent sources are removed (this means that voltage sources are replaced by a short circuit and current sources are replaced by an open circuit). For detailed information on how to calculate a Thevenin or Norton equivalent circuit, see Thevenin's Theorem or Norton's Theorem in the textbook section.
Why Thevenize?
The process described above seems simple enough, but calculating the Thevenin equivalent circuit can become quite complicated when the circuit includes numerous components or dependent sources. Also, it is important to keep in mind that the Thevenin equivalent circuit is only an accurate representation of the circuit from the perspective of the load connected to the two output terminals; it doesn't tell you anything about the internal functionality of the circuit. Even so, there are good reasons for making the effort to figure out a circuit's Thevenin equivalent.
The biggest reason for doing this is that circuits are easier to deal with when they are divided into digestible portions. No one would ever dream of designing a microprocessor by starting with a billion transistors and wiring them together one by one; likewise, even a relatively simple mixed-signal design is best analyzed as a collection of interconnected blocks. This is the essence of Thevenin's theorem: reduce a circuit to the simplest representation that allows you to determine how that circuit block will interact with another circuit block. Consider the following example:
If RL is removed,
$$V_{out}=(12V)(\frac{R_2}{R_1+R_2})=7.2V$$
If RL is included,
$$V_{out}=(12V)(\frac{R_2||R_L}{R_1+R_2||R_L})=6.36V$$
This simple circuit indicates something important related to Thevenin's theorem: the load affects the circuit. If you remove the load resistor and simply calculate the voltage at Vout, you get 7.2 V. So is it correct to say that the network composed of V1, R1, and R2 is a circuit that supplies 7.2 V to a load resistor? No, because the supplied voltage changes according to the resistance of the load. As shown above, Vout drops to 6.36 V when we insert a 1 kΩ load resistor. With a 100 Ω load resistor, Vout is only 3.1 V. So if you are using the original circuit and you want to know the output voltage for a certain load, you have to repeat the second calculation shown above. If the circuit were more complicated, this task would only get more tiresome. Now we can see the fundamental value of the Thevenin equivalent circuit: it is a simple model that tells you how the original circuit interacts with the load, because the combination of Thevenin voltage and Thevenin resistance ensures that the correct output voltage is provided for all values of load resistance.
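As a quick numeric check of this point, here is a short Python sketch that uses only the Thevenin values quoted above (VTh = 7.2 V, RTh = 132 Ω) and reproduces the loaded output voltages that were calculated from the full divider network:

# Thevenin model of the divider: values quoted in the text above
V_th = 7.2      # volts, open-circuit (Thevenin) voltage
R_th = 132.0    # ohms, Thevenin resistance

def v_out(R_load):
    # Voltage across the load when it is driven by the Thevenin source
    return V_th * R_load / (R_load + R_th)

for R_load in (1000.0, 100.0):
    print(f"RL = {R_load:6.0f} ohm -> Vout = {v_out(R_load):.2f} V")
# Prints roughly 6.36 V and 3.10 V, matching the values obtained from the
# original V1/R1/R2 circuit with those load resistors.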
Experimental vs. Analytical
One major obstacle to applying the concepts of Thevenin equivalence is the difficulty of determining the Thevenin equivalent voltage and resistance for a complicated circuit. This is especially problematic considering that interesting circuits are generally rather complicated. Fortunately, you can find the Thevenin equivalent experimentally. The first step is to remove the load and measure the open circuit voltage at the output terminals; this is the Thevenin voltage. Next, you need to test the circuit while varying the load resistance until you find the load resistance at which the output voltage is half of the open circuit voltage (you could use a potentiometer here). This load resistance is equal to the Thevenin resistance.
For the simple circuit shown above, here is a plot of output voltage vs. load resistance varied from 1 Ω to 1 kΩ:
The cursor shows that at 3.6 V (which is half the open circuit voltage), load resistance equals 132 Ω. This agrees with the theoretical Thevenin resistance, R1 ∥ R2 ≈ 132 Ω.
Thevenin's Theorem Applied
Let's say that we need to design a high-precision circuit that digitizes signals from a sensor whose output voltage varies between 0 and 1 mV. We will use an op-amp to amplify the signal by a factor of 10, then the output from the op-amp circuit will be sent to the analog-to-digital converter. We are considering different ADCs with different input characteristics, and we want to assess how changes in input impedance will affect the output of the op-amp. To do this, we can find the Thevenin equivalent for the op-amp circuit and treat the ADC input impedance as the load resistance. It would be impractical to accomplish this with an actual circuit because op-amps have very low output resistance, but we can get good results with a simulation.
The load resistor is varied from 10 mΩ to 0.5 Ω in steps of 10 mΩ. The gain of this op-amp circuit is 10, so we know that the open circuit output voltage will be 10 mV. Thus, we are looking for the load resistance that gives us an output voltage of 5 mV.
As indicated by the cursor, the output voltage equals half the open circuit voltage at a resistance of 155 mΩ. Notice also that the output voltage levels off at 10 mV once the load resistance is sufficiently large; this confirms our expected value for the open circuit voltage. Now that we know the Thevenin voltage and Thevenin resistance for this op-amp circuit, we can analyze it using a Thevenin equivalent:
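To make the loading effect concrete, here is a minimal sketch that drives hypothetical ADC input impedances from the Thevenin model found above (VTh = 10 mV, RTh = 155 mΩ); the ADC impedance values are illustrative assumptions, not taken from any particular converter:

# Thevenin model of the op-amp stage, values found above
V_th = 10e-3     # volts (10 mV open-circuit output)
R_th = 0.155     # ohms (155 milliohms)

# Hypothetical ADC input impedances, for illustration only
for R_adc in (1.0, 10.0, 1000.0):
    v_in = V_th * R_adc / (R_adc + R_th)       # voltage actually seen by the ADC
    droop_pct = 100.0 * (V_th - v_in) / V_th   # error caused by loading
    print(f"R_adc = {R_adc:7.1f} ohm -> {v_in * 1e3:.3f} mV ({droop_pct:.2f} % droop)")
# Even a 1-ohm input impedance only drops the signal by about 13 percent here,
# because the op-amp stage's Thevenin resistance is so small.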
12.2: Two-State System
[ "article:topic", "authorname:rfitzpatrick", "showtoc:no" ]
https://phys.libretexts.org/@app/auth/3/login?returnto=https%3A%2F%2Fphys.libretexts.org%2FBookshelves%2FQuantum_Mechanics%2FBook%253A_Introductory_Quantum_Mechanics_(Fitzpatrick)%2F12%253A_Time-Dependent_Perturbation_Theory%2F12.02%253A_Two-State_System
Consider a system in which the time-independent Hamiltonian possesses two eigenstates, denoted \[\begin{aligned} H_0\,\psi_1 &= E_1\,\psi_1,\\[0.5ex] H_0\,\psi_2&=E_2\,\psi_2.\end{aligned}\] Suppose, for the sake of simplicity, that the diagonal elements of the interaction Hamiltonian, \(H_1\), are zero: that is, \[\langle 1|H_1|1\rangle = \langle 2|H_1|2\rangle = 0.\] The off-diagonal elements are assumed to oscillate sinusoidally at some frequency \(\omega\): that is, \[\langle 1|H_1|2\rangle = \langle 2|H_1|1\rangle^\ast = \gamma\,\hbar\,\exp(\,{\rm i}\,\omega\,t),\] where \(\gamma\) and \(\omega\) are real. Note that it is only the off-diagonal matrix elements which give rise to the effect which we are interested in: namely, transitions between states 1 and 2.
For a two-state system, Equation ([e13.12]) reduces to
\[\begin{aligned} {\rm i}\,\frac{dc_1}{dt} &= \gamma\,\exp\left[+{\rm i}\,(\omega-\omega_{21})\,t\right]c_2,\\[0.5ex] {\rm i}\,\frac{dc_2}{dt} &=\gamma\,\exp\left[-{\rm i}\,(\omega-\omega_{21})\,t\right]c_1,\label{e13.20}\end{aligned}\] where \(\omega_{21}=(E_2-E_1)/\hbar\). The previous two equations can be combined to give a second-order differential equation for the time-variation of the amplitude \(c_2\): that is,
\[\label{e13.21} \frac{d^{\,2} c_2}{dt^{\,2}} + {\rm i}\,(\omega-\omega_{21})\,\frac{dc_2}{dt}+\gamma^{\,2}\,c_2=0.\] Once we have solved for \(c_2\), we can use Equation ([e13.20]) to obtain the amplitude \(c_1\). Let us search for a solution in which the system is certain to be in state 1 (and, thus, has no chance of being in state 2) at time \(t=0\). Thus, our initial conditions are \(c_1(0)=1\) and \(c_2(0)=0\). It is easily demonstrated that the appropriate solutions to ([e13.21]) and ([e13.20]) are
\begin{equation}c_{2}(t)=\left(\frac{-\mathrm{i} \gamma}{\Omega}\right) \exp \left[\frac{-\mathrm{i}\left(\omega-\omega_{21}\right) t}{2}\right] \sin (\Omega t)\end{equation}
\begin{equation}\begin{aligned}
c_{1}(t)=& \exp \left[\frac{\mathrm{i}\left(\omega-\omega_{21}\right) t}{2}\right] \cos (\Omega t) \\
&-\left[\frac{\mathrm{i}\left(\omega-\omega_{21}\right)}{2 \Omega}\right] \exp \left[\frac{\mathrm{i}\left(\omega-\omega_{21}\right) t}{2}\right] \sin (\Omega t)
\end{aligned}\end{equation}
where \begin{equation}\Omega=\sqrt{\gamma^{2}+\left(\omega-\omega_{21}\right)^{2} / 4}\end{equation}
Now, the probability of finding the system in state 1 at time \(t\) is simply \(P_1(t)=|c_1(t)|^{\,2}\). Likewise, the probability of finding the system in state 2 at time \(t\) is \(P_2(t)= |c_2(t)|^{\,2}\). It follows that \[\begin{aligned} P_1(t)&=1-P_2(t),\\[0.5ex] P_2(t)&= \left[\frac{\gamma^{\,2}}{\gamma^{\,2} + (\omega-\omega_{21})^{\,2}/4}\right] \sin^2({\mit\Omega}\,t).\label{e13.25}\end{aligned}\] This result is known as Rabi's formula.
Equation ([e13.25]) exhibits all the features of a classic resonance. At resonance, when the oscillation frequency of the perturbation, \(\omega\), matches the frequency \(\omega_{21}\), we find that
\[\begin{aligned} P_1(t)&=\cos^2(\gamma\,t),\\[0.5ex] P_2(t)&=\sin^2(\gamma\,t).\label{e13.28}\end{aligned}\] According to the previous result, the system starts off in state 1 at \(t=0\). After a time interval \(\pi/(2\,\gamma)\) it is certain to be in state 2. After a further time interval \(\pi/(2\,\gamma)\) it is certain to be in state 1 again, and so on. Thus, the system periodically flip-flops between states 1 and 2 under the influence of the time-dependent perturbation. This implies that the system alternately absorbs and emits energy from the source of the perturbation.
The absorption-emission cycle also takes place away from the resonance, when \(\omega\neq \omega_{21}\). However, the amplitude of the oscillation in the coefficient \(c_2\) is reduced. This means that the maximum value of \(P_2(t)\) is no longer unity, nor is the minimum of \(P_1(t)\) zero. In fact, if we plot the maximum value of \(P_2(t)\) as a function of the applied frequency, \(\omega\), then we obtain a resonance curve whose maximum (unity) lies at the resonance, and whose full-width half-maximum (in frequency) is \(4\,\gamma\). Thus, if the applied frequency differs from the resonant frequency by substantially more than \(2\,\gamma\) then the probability of the system jumping from state 1 to state 2 is always very small. In other words, the time-dependent perturbation is only effective at causing transitions between states 1 and 2 if its frequency of oscillation lies in the approximate range \(\omega_{21}\pm 2\,\gamma\). Clearly, the weaker the perturbation (i.e., the smaller \(\gamma\) becomes), the narrower the resonance.
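As a minimal numeric illustration of these statements (not part of the original text), the following Python sketch evaluates Rabi's formula at a few detunings:

import numpy as np

gamma = 1.0                                         # coupling strength, arbitrary units
for detuning in (0.0, 2.0 * gamma, 4.0 * gamma):    # detuning = omega - omega_21
    Omega = np.sqrt(gamma**2 + detuning**2 / 4.0)   # Rabi frequency
    P2_max = gamma**2 / (gamma**2 + detuning**2 / 4.0)  # peak value of P2(t)
    t_peak = np.pi / (2.0 * Omega)                  # time of the first peak
    print(f"detuning = {detuning:.1f}: max P2 = {P2_max:.3f}, first peak at t = {t_peak:.3f}")
# At zero detuning the maximum is 1 (complete flip-flopping with period pi/gamma);
# at a detuning of 2*gamma it falls to 1/2, consistent with the quoted
# full-width half-maximum of 4*gamma.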
Richard Fitzpatrick (Professor of Physics, The University of Texas at Austin)
Synthesis, Crystal Structure, Thermal, Magnetic Properties and DFT Computations of a Ytterbium(III) Complex Derived from Pyridine-2,6-Dicarboxylic Acid
Shahzad Sharif, Maham Saeed, Necmi Dege, Rehana Bano, Mazhar Amjad Gilani, and 3 more
posted 02 Nov, 2021
A new ytterbium(III) complex, (DMAH2)3[Yb(Pydc)3].4H2O (1) {Pydc = Pyridine-2,6-dicarboxylate anion, DMAH = Dimethylamine}, has been prepared under mild solvothermal conditions and characterized by elemental analysis, IR spectroscopy, thermal analysis and single crystal X-ray diffraction. The DMAH molecules in 1, generated in situ from hydrolysis of N,N-dimethylformamide, are responsible for assembling 2D coordination polymers through N-H∙∙∙O and O-H∙∙∙O hydrogen bonding. Magnetic susceptibility measurements indicate that the complex (1) obeys the Curie–Weiss law and the overall magnetic behavior is typical for the presence of weak antiferromagnetic exchange coupling interactions. Theoretical data for the geometrical parameters of complex 1 agree well with the experimental data. The large HOMO-LUMO energy gap of 4.33 eV provides kinetic stability to the complex 1. NBO analysis reflects that intramolecular charge transfer occurred between ligand and metal orbitals with the highest stabilization energy of 1024.04 kcal/mol. The negative electrostatic potential at the nitrogen and dianionic pyridine-2,6-dicarboxylate regions confirms that these are favourable locations for Yb(III) binding.
Ytterbium(III)
Pyridine-2
6-dicarboxylic acid
X-ray structure
magnetic properties
DFT computation
Due to its rigidity and high symmetry, pyridine-2,6-dicarboxylic acid (H2Pydc) is a typical ligand for constructing lanthanide coordination polymers [1–12]. It can adopt varied coordination modes such as monodentate, chelating bidentate, bridging bidentate and multidentate. The carboxylate groups on the pyridine ring can be fully or partially deprotonated for coordination to metals. The carboxylic O atoms are also beneficial to form hydrogen bonds, while the rigid pyridyl ring is a general origin of π-π stacking interactions. The diverse coordination patterns of N or O atoms can result in formation of discrete or infinite structures as evidenced by the crystal structures of several H2Pydc-based lanthanide complexes [1–12].
Lanthanide compounds based on Pyridine-2,6-dicarboxylic acid (H2Pydc) have attracted extensive attention in florescence materials due to their diverse applications, such as sensing materials, lighting and bio-medical imaging devices [1]. Lanthanide compounds with visible light emission mostly concentrate on Sm(III), Eu(III), Tb(III) and Dy(III) [5, 12–14], while the compounds containing Er(III) exhibit NIR luminescence [15]. Zhao and co-workers have reported a variety of lanthanide compounds with H2Pydc, which include 4f, 4f-2p, 4f-3d and 4f-4d systems, and reveal the relationship between the structures and luminescent properties [1, 16–19]. The luminescent investigations indicate that H2pydc ligand can act as an antenna to stimulate the photosensitization of lanthanides.
Considering the possible role of lanthanide complexes in material applications, we have earlier reported the crystal structures of several lanthanide complexes with the pyridine-2,6-dicarboxylic acid (H2Pydc) ligand [9, 20–23]. In these complexes the lanthanide atoms exhibit a nine-coordinate environment, while H2Pydc coordinates in neutral, monoanionic and dianionic forms. To further explore the coordination chemistry of lanthanide-Pydc complexes and their ability to generate supramolecular networks through H-bonding, we report here the synthesis, structural characterization, thermal and magnetic properties as well as comprehensive DFT computations of a new anionic ytterbium(III) complex stabilized by dimethylammonium ions.
2.1 Materials and Measurements
All reagents were purchased commercially and were used without further purification. Elemental analysis (C, H, N) was performed on a Vario Micro Cube elemental analyzer (Elementar, Germany). IR spectra were recorded on a Perkin-Elmer FT-IR 180 spectrophotometer using KBr pellets over the range of 4000–400 cm−1. Thermal analysis (25-1000°C) was performed under continuous nitrogen flow, with a ramp rate of 10 °C min−1, using an SDT Q600 instrument (TA Instruments, USA). Temperature-dependent magnetic susceptibility measurements were made in the temperature range of 5-300 K using a Superconducting Quantum Interference Device (SQUID-MPMS-5, USA) at an applied magnetic field of 1000 Oe. The results are shown as plots of χm, χm−1 and χmT versus T (χm = molar magnetic susceptibility) using Origin software [24]. The effective magnetic moment, µeff, was calculated by applying the relation µeff = 2.828 (χmT)1/2 in Bohr magnetons (µB) [25, 26].
2.2 Synthesis of (DMAH2)3[Yb(Pydc)3].4H2O (1)
A mixture of H2Pydc (84 mg, 0.50 mmol), 6 mL of water and 3 mL of DMF was stirred at 90°C for half an hour, followed by drop-wise addition of 5 mL of an aqueous solution of ytterbium chloride hexahydrate (YbCl3.6H2O; 136 mg, 0.35 mmol). The mixture was refluxed for four hours with stirring to obtain a clear solution. Off-white block-like crystals slowly grew in the filtered solution as the solvent evaporated over one month; they were recovered by filtration, rinsed with a mixture of water and DMF, and dried at room temperature. Analysis (%) Calc. for C27H41N6O16Yb (878.70): N, 9.56; C, 36.87; H, 4.67; Found: N, 9.42; C, 37.07; H, 4.71. Yield ca. 50 %. IR (cm−1): 3419–3253 ν(O–H, N–H) (broad, strong); 1607 νasym(C=O); 1447 νsym(C=O); 1560 νasym(–NH2).
2.3 X-ray Structure Determination
Suitable crystals of complex 1 were selected for data collection, which was performed on a Bruker KAPPA APEX II CCD diffractometer equipped with graphite-monochromated Mo-Kα radiation at 296 K. The structure was solved by direct methods using SHELXS-97 and CRYSTALS [27, 28], and refined by full-matrix least-squares methods on F2 using SHELXL-97 and CRYSTALS [27, 28] within the WINGX [29] suite of software. All non-hydrogen atoms were refined with anisotropic parameters. Water H atoms were located in a difference map and refined subject to a DFIX restraint of O-H = 0.83(2) Å. All other H atoms were located from difference maps and then treated as riding atoms with C-H distances of 0.93-0.96 Å and N-H distances of 0.90 Å. Molecular diagrams were created using MERCURY [30]. Supramolecular analyses were made and the diagrams were prepared with the aid of PLATON and CrystalMaker® [31, 32]. Details of data collection and crystal structure determination are given in Table 1.
Crystal data and structure refinement parameters for 1.
Empirical formula
C27H41N6O16Yb
Crystal system
Monoclinic
P21/c
a (Å)
b (Å)
c (Å)
β (º)
V (Å3)
Dc (g cm−3)
µ (mm−1)
θ range (º)
Measuredrefls.
Independent refls.
Rint
R1/wR2
Δρmax/Δρmin (eÅ−3)
0.67/−0.93
Computational studies
All DFT calculations of the synthesized complex 1 were performed using the Gaussian 16 program package [33] and visualized using GaussView 6.1.1 [34]. Geometry optimization of complex 1 was carried out at the B3LYP/SDD and M06-2X/SDD levels of theory to compare the results of both methods with the experimental data. Frequency calculations were also performed at the respective levels of theory to ensure local minima. The B3LYP/SDD level of theory is reliable and has been widely used for ytterbium complexes [35]. B3LYP/SDD employs the three-parameter Becke (B3) exchange functional [36], the Lee-Yang-Parr (LYP) nonlocal correlation functional [37], and the Stuttgart SDD basis set [38]. Frontier molecular orbital (FMO) analysis, reactivity parameters, natural bond orbital (NBO) analysis and molecular electrostatic potential (MEP) analysis of complex 1 were also performed at the same level of theory. Visualization of FMOs and the MEP surface was carried out using the Multiwfn [39] and VMD [40] software packages. Furthermore, Hirshfeld surface analysis of complex 1 was performed using the CrystalExplorer software [41].
3.1 IR and Thermal Studies
The reaction of YbCl3.6H2O with H2Pydc in a 1:1 molar ratio in a mixture of water and DMF under solvothermal conditions resulted in the formation of a crystalline complex, (DMAH2)3[Yb(Pydc)3].4H2O (1). The DMAH molecules in 1 were generated in situ from hydrolysis of N,N-dimethylformamide [21]. In the IR spectrum of complex 1, a broad peak in the region of 3419–3253 cm−1 was observed, which was assigned to the O–H and N–H stretching vibrations of water molecules and dimethylammonium ions respectively. The band at 1560 cm−1 is related to the N-H bending vibration of dimethylammonium ions. The characteristic absorptions for the asymmetric and symmetric stretches of the carboxylate group are found at 1607 and 1447 cm−1 respectively, while in the IR spectrum of the free ligand, these bands are observed at 1688 and 1375 cm−1 respectively. A significant shift of the asymmetric mode, νas(COO), towards lower wavenumber upon coordination and that of the symmetric mode, νs(COO), towards higher wavenumber indicates the binding of the Pydc ligand to the metal through the carboxylate oxygen atoms [20].
The thermal decomposition of complex 1 is illustrated in Figure 1. In the first stage, four non-coordinated water molecules are released at 90°C, corresponding to a weight loss of 9.3 % (calculated 8.2 %). The loss of water is associated with an endothermic transition in DSC. The next weight loss of 39 % between 210°C and 350°C corresponds to the removal of two Pydc ligands (calculated 37.6 %). The difference indicates the removal of some other volatile component such as ammonia or dimethylamine. The DSC curve exhibits an endothermic transition at 305 °C. The elimination of the third Pydc ligand takes place after 500°C (19.5 % wt loss against the theoretical value of 18.8 %) and is marked by an exothermic transition in DSC at 520°C. Beyond this point the remaining organic moieties are lost up to 950°C, leaving behind a residue of 24 %, which is attributed to Yb2O3 (calcd. 22.5%). The thermal data agree well with the results obtained from the elemental analysis.
3.2 Crystal structure of Complex 1
The molecular structure of complex 1, (DMAH2)3[Yb(Pydc)3].4H2O with the atom labeling is shown in Figure 2. The selected bond lengths and angles are given in Table S1. The complex 1 exists as a monomeric ionic species consisting of an ionic complex [Yb(Pydc)3]3−, three dimethylammonium counter ions and four non-coordinated water molecules (Figure 2). The DMAH molecules are generated in situ from hydrolysis of N,N-dimethylformamide [21]. The Yb(III) ion in 1 is coordinated by three dianionic pyridine-2,6-dicarboxylate ligands (Pydc2−) and attains a distorted tricapped trigonal prismatic YbN3O6 coordination geometry with the N atoms serving as the caps protruding through the prismatic side-faces (Figure S1). The N-Yb-N bond angles are closer to 120° (116.28(8)°-122.72(9)°), while the O-Yb-O bond angles ranged between 74.94(8)°-146.01(8)°. The distortion in tricapped trigonal prismatic geometry is attributed to the upper and lower distorted triangular faces with mean deviations of -1.802º and 6.426º from regular triangular faces respectively. Of the 14 triangular faces of YbN3O6 polyhedron, the dihedral angles between O3-O9-N3 and O9-O7-N2 faces are 59.902º and 63.507º, while between the relatively distorted triangular faces O5-O11-N3 and O5-O1-N2, they are 52.435º and 54.713º. The dihedral angles between N3-N2-N1, N2-N1-N3 and N1-N3-N2 triangular faces are 61.161º, 60.468º and 58.372º respectively. The Yb-N and Yb-O bond lengths fall in the ranges, 2.361(2)Å-2.386(2) Å and 2.440(2)-2.461(3) Å, respectively. These data are in agreement with the corresponding values of the similar reported Yb(III) complexes [42–45]. The pyridine ring mean planes are approximately planar, with the maximum deviations of 0.0102(22) Å for C(5) atom (C(8) and C(16) atoms are deviated by 0.0066(25) Å and 0.0052(26) Å respectively). The structural features of 1 are closely related to the other [Ln(Pydc)3]3− type complexes [6, 11, 21, 23].
We have earlier reported the crystal structures of a similar series of the Pydc-based coordination polymers with the formula [Ln(Pydc)3](DMAH2).H(DMAH)2 (Ln = Ce, Ho, Nd, Sm) [21]. Their structural analysis reveals that their metal coordination sphere is quite identical to that of compound 1. Pydc ligand in these complexes as well as in 1 adopts only one kind of coordination modes, where the oxygen atoms of the carboxylate groups and the nitrogen atom of a pyridine ring form a chelate with the metal atom.
Within the structure of 1, extensive hydrogen-bonding interactions take place between the carboxylate groups, water molecules and ammonium ions. The molecules of 1 are linked to each other by a combination of N-H∙∙∙O, O-H∙∙∙O and C-H∙∙∙O hydrogen bonds (Table S2). The amnonium nitrogen atom N(5) in the molecule at (x, y, z) acts as hydrogen-bond donor, via atoms H(5A) and H(5B), to carboxylate oxygen atoms O(6)ii and O(7), forming a C22(8) chain running parallel to the [100] direction. The water oxygen atom O(14) in the reference molecule at (x, y, z) acts as hydrogen-bond donor, via atoms H(14A) and H(14B), to atoms O(4)v and O(12)vi, forming a C22(10) chain running parallel to the [100] direction. Similarly, the carboxylate atoms O(6) and O(8)vii accept hydrogen bonds from H(16A) and H(16B) of water atom O(16) yielding a C22(10) chain running parallel to the [100] direction. Finally, dimethylamine and water molecules link neighboring polymeric chains via N-H∙∙∙O and O-H∙∙∙O hydrogen bonds into a two-dimensional framework parallel to the ac plane (Figure S2).
3.3 Magnetic Measurement
The magnetic behavior of complex 1 is represented in the form of χm, χm−1 and χmT vs. T plots (χm = molar magnetic susceptibility) shown in Figure S3. The susceptibility can be well described by the Curie–Weiss law above 40 K with a Curie constant C = 3.11 ± 0.0022 cm3K mol−1 and a Weiss constant θ = – 35.52 K. At the high-temperature end (300 K), χmT = 2.69 cm3K mol−1 provides an effective magnetic moment µeff of 4.64 µB, which is slightly larger than the 2F7/2 multiplet value of 4.50 µB expected per formula unit for one uncoupled (gJ = 1.1) Yb(III) ion [46]. The product χmT was found to decrease with decreasing temperature to reach a final value of 0.97 cm3K mol−1 at 5 K, with an effective magnetic moment of 2.79 µB. The overall behavior of χmT with temperature and the negative value of θ are typical for the presence of weak antiferromagnetic exchange coupling interactions.
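As a simple consistency check (not part of the original analysis), the µeff values quoted above can be reproduced directly from the χmT products using the relation given in the experimental section:

import math

# chi_m * T values quoted above, in cm^3 K mol^-1
for T, chimT in ((300, 2.69), (5, 0.97)):
    mu_eff = 2.828 * math.sqrt(chimT)   # effective moment in Bohr magnetons
    print(f"T = {T:3d} K: chi_m*T = {chimT:.2f} -> mu_eff = {mu_eff:.2f} Bohr magnetons")
# Gives about 4.64 at 300 K and 2.79 at 5 K, matching the reported values.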
3.4. DFT computations
3.4.1 Geometrical parameters
Molecular geometry of complex 1 is optimized by taking its X-ray crystallographic CIF file. The results of geometrical parameters obtained at two different methods B3LYP and M06-2X are compared with experimental data and are listed in Table 2.
Experimental and theoretical bond lengths (Å) and bond angles (degree) of Complex 1
Complex 1 Bond Lengths
B3LYP
M06-2X
N1-Yb1
O7-Yb1
O11-Yb1
Complex 1 Bond Angles
Bond Angle
O5-Yb1-O9
N3-Yb1-N2
O11-Yb1-O3
O5-Yb1-N3
O11-Yb1-N1
The results clearly show that the bond lengths and bond angles computed with the B3LYP method are in better agreement with the experimental data than those from the M06-2X method. The theoretical bond lengths at B3LYP/SDD for N1-Yb1, O3-Yb1, N2-Yb1 and O11-Yb1 match well with the experimental data and are 2.471 Å (Exp. 2.453 Å), 2.406 Å (Exp. 2.386 Å), 2.471 Å (Exp. 2.461 Å) and 2.348 Å (Exp. 2.362 Å). However, there are minor deviations of less than 0.1 Å in some bond lengths between the theoretical and experimental X-ray crystallographic data. The highest deviation (0.045 Å) of a theoretical bond length from the experimental value is observed for the O1-Yb1 bond. Similarly, the theoretical bond angles that correlate better with the experimental data are O5-Yb1-O9, O7-Yb1-O1, N3-Yb1-N2 and O5-Yb1-N3, with values of 88.12° (Exp. 88.52°), 90.45° (Exp. 88.24°), 120.58° (Exp. 120.99°) and 70.89° (Exp. 73.63°). Some theoretical bond angles are larger than the experimental ones, such as O11-Yb1-O3 (92.35°) and N1-Yb1-N2 (122.08°). The reason is that the DFT calculations are performed in the gas phase while the experimental data are obtained in the solid phase. The optimized structure of complex 1 is displayed in Figure 3.
3.4.2 Frontier molecular orbital analysis
Electronic characteristics and reactivity of any complex can be estimated via frontier molecular orbital analysis. Molecular orbital energies are computed at the B3LYP method along with SDD basis set and pictorial representation of HOMO (highest occupied molecular orbital) and LUMO (lowest unoccupied molecular orbital) electron densities along their energy gap is provided in Figure 4. The electron density of HOMO is localized at the carboxylate group of dianionic pyridine-2,6-dicarboxylate ligand (Pydc2−) while the LUMO orbital electron density is localized at the pyridinic moiety of complex 1. Energies of HOMO and LUMO are -6.12 eV and -1.78 eV, respectively. Complex 1 has a 4.33 eV HOMO-LUMO energy gap (Eg) indicating that it has excellent kinetic stability and low chemical reactivity.
Some reactivity parameters including ionization potential (I), electron affinity (A), chemical potential (µ), chemical hardness (η), chemical softness (S) and electrophilicity index (ω) were also calculated for complex 1 and are listed in Table 3. The chemical potential of -3.95 eV shows that complex 1 is thermodynamically stable; it is an essential parameter for the calculation of the other electronic parameters. A chemical hardness (2.16 eV) larger than the chemical softness (1.08 eV) again reflects that complex 1 is thermodynamically stable and relatively unreactive. The low electrophilicity index (ω) of complex 1 supports its nucleophilic character.
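These descriptors can be reproduced from the frontier orbital energies alone. A minimal sketch assuming the usual Koopmans-type definitions (I = -E_HOMO, A = -E_LUMO, µ = -(I+A)/2, η = (I-A)/2, ω = µ²/2η); note that the softness value quoted above may follow a different convention than the 1/(2η) form sometimes used:

def reactivity_descriptors(e_homo, e_lumo):
    """Global reactivity descriptors (all in eV) from frontier orbital energies (eV),
    using Koopmans-type approximations."""
    I = -e_homo                 # ionization potential
    A = -e_lumo                 # electron affinity
    gap = e_lumo - e_homo       # HOMO-LUMO energy gap Eg
    mu = -(I + A) / 2           # chemical potential
    eta = (I - A) / 2           # chemical hardness
    omega = mu**2 / (2 * eta)   # electrophilicity index
    return {"I": I, "A": A, "Eg": gap, "mu": mu, "eta": eta, "omega": omega}

# HOMO/LUMO energies reported for complex 1
print(reactivity_descriptors(-6.12, -1.78))   # mu ~ -3.95 eV, eta ~ 2.17 eV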
Table 3. HOMO and LUMO energies (in a.u. and eV), energy gap Eg (eV) and the derived quantum reactivity parameters of complex 1.
Other thermodynamic parameters of complex 1 were also computed to further assess its chemical stability. The total energy, heat capacity at constant volume, entropy and zero-point vibrational energy of complex 1 are 483.38 kcal/mol, 206.43 cal/(mol·K), 332.109 cal/(mol·K) and 446.67 kcal/mol, respectively. Similarly, the rotational constants computed for complex 1 are listed in Table S3.
3.4.3 Natural bond orbital (NBO) analysis
NBO analysis of complex 1 was performed at the B3LYP/SDD level of theory using the NBO program (version 3.1) built into Gaussian. Natural bond orbital analysis provides a valuable understanding of the intermolecular and intramolecular interactions, hydrogen bonding and charge transfer between atoms of any molecular structure [47, 48]. It also describes electron charge displacement and conjugative interactions. The loss of occupancy from a localized (donor) NBO of the Lewis structure to an empty non-Lewis (acceptor) NBO gives rise to donor-acceptor interactions. A second-order perturbation theory analysis of the Fock matrix was carried out to identify the donor-acceptor NBO transitions of complex 1. The stabilization energy for donor (i) to acceptor (j) delocalization can be calculated as follows:
$${E}^{(2)}={q}_{i}\,\frac{F^{2}(i,j)}{{\epsilon}_{j}-{\epsilon}_{i}} \qquad (1)$$
where qi is the occupancy of the donor orbital, εj and εi are the diagonal elements (orbital energies) and F(i,j) is the off-diagonal Fock matrix element. A larger stabilization energy E(2) between electron donor and acceptor orbitals corresponds to a stronger interaction and hence greater stabilization of the synthesized complex. Some major donor-to-acceptor NBO transitions, listed starting from the highest E(2), are given in Table S4.
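A minimal sketch of Eq. (1) as a calculation; the occupancy, Fock matrix element and orbital energies below are hypothetical placeholders for illustration, not values taken from the NBO output of complex 1:

def stabilization_energy(q_i, F_ij, eps_i, eps_j):
    """Second-order perturbative stabilization energy E(2) = q_i * F(i,j)^2 / (eps_j - eps_i)."""
    return q_i * F_ij**2 / (eps_j - eps_i)

# Hypothetical donor/acceptor pair (illustrative numbers only; F and eps in a.u.)
print(stabilization_energy(q_i=1.98, F_ij=0.15, eps_i=-0.65, eps_j=0.25))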
These results reflect that many NBO transitions have occurred between different energy levels of complex 1. The highest stabilization energy value of 1024.04 kcal/mol is observed from O89-H91 donor to antibonding O80-H82 acceptor orbital. The second largest E(2) of 606.13 kcal/mol is obtained by the charge transfer from lone pair of O89 to antibonding O80-H82 orbital and so on. These stronger intramolecular interactions with larger stabilization energies might be responsible for the stability of complex 1.
3.4.4 Molecular electrostatic potential (MEP) analysis
Molecular electrostatic potential analysis is the best tool to analyze the charge distribution of a molecular structure [49]. The electrophilic and nucleophilic sites in a molecule are described by MEP which is associated with electron density. In complex 1, more prominent red color patches at dianionic pyridine-2,6-dicarboxylate portion reflect the nucleophilic region with more negative potential. The blue color patches at dimethylammonium ions show the electrophilic nature. The molecular electrostatic potential surface of complex 1 is shown in Figure 5.
3.4.5 Hirshfeld surface analysis
Hirshfeld surface analysis is a useful tool for describing the space occupied by a molecule in a crystal by partitioning the crystal electron density into molecular fragments [50]. It plays a key role in defining the surface properties of molecules and provides further information about the intermolecular interactions in molecular crystals. The Hirshfeld surface investigation of complex 1 was performed with the Crystal Explorer program starting from the X-ray crystallographic CIF file. The intermolecular interactions of complex 1 are best quantified using Hirshfeld surfaces and the corresponding two-dimensional fingerprint plots. The dnorm-mapped surface of complex 1 is shown in Figure 6.
The red patches in the surface map of complex 1 correspond to the O-H…O and C-H…O hydrogen-bonding contacts. The white regions indicate weak van der Waals interactions, and the blue patches covering a large area correspond to contacts longer than the van der Waals separations. Furthermore, the 2D fingerprint plots showing the percentage contributions of the individual atom…atom contacts in complex 1 are given in Figure S4. The results show that the highest intermolecular contribution for complex 1 comes from H…H contacts, at 45.5%. The contributions from O…H and H…O contacts are 20.7% and 18%, respectively; similarly, the C…H and H…C contributions are 6.4% and 5.6%, and so on.
In this paper, a new zero-dimensional ytterbium(III) complex with the Pydc ligand has been synthesized under mild solvothermal conditions. Extensive N-H···O and O-H···O hydrogen bonding involving the dimethylamine and water molecules is responsible for the two-dimensional framework. Solvent molecules and the in situ generation of molecules such as dimethylamine can be tuned to obtain preferred structural topologies and cross-linking into 2D and 3D coordination networks. Density functional theory calculations have been carried out to compute the geometrical parameters of complex 1, and the results are compared with the experimental data to assess their reliability. The large HOMO-LUMO energy gap (4.33 eV) and chemical hardness (2.16 eV) indicate that the complex is stable and relatively unreactive. NBO analysis has provided information about the nature of the interactions and the intermolecular charge transfer transitions from donor to acceptor orbitals. MEP analysis of complex 1 reveals that the higher electron density, with negative electrostatic potential, at the nitrogen and dianionic pyridine-2,6-dicarboxylate regions is responsible for the strong bonding of the ligand to the Yb(III) metal. Furthermore, the dnorm Hirshfeld surface map indicates the nature of the intermolecular interactions in the molecular crystal. Its 2D fingerprint plots show the highest percentage contributions to the molecular surface from H…H (45.5%) followed by O…H (20.7%) contacts.
We gratefully acknowledge financial assistance from the Higher Education Commission of Pakistan, HEC project no. 01-ISULL/09/GCUL/ACAD/HEC/2017/166 and Office of Research Innovation & Commercialization GC University, Lahore, ORIC project no. 108/ORIC/19.
Xiang, D.-X. Bao, J. Wang, J., Y.-C. Li, X.-Q. Zhao, J. Lumin. 186, 273-282 (2017)
-L. Gao, L. Yi, B. Zhao, X.-Q. Zhao, P. Cheng, D.-Z. Liao, S.-P. Yan, Inorg. Chem. 45, 5980–5988 (2006)
K. Prasad, M.V. Rajasekharan, Inorg. Chem. 48, 11543–11550 (2009)
Ay, E. Yildiz, I. Kani, Polyhedron. 142, 1-8 (2018)
Zhu, K. Ikarashi, T. Ishigaki, K. Uematsu, K. Toda, H. Okawa, M. Sato, Inorg. Chim. Acta 362, 3407-3414 (2009)
A. Brayshaw, A.K. Hall, W.T.A. Harrison, J.M. Harrowfield, D. Pearce, T.M. Shand, B.W. Skelton, C.R. Whitaker, A.H. White, Eur. J. Inorg. Chem. 1127-1141 (2005)
Zhao, L. Yi, Y. Dai, X.Y. Chen, P. Cheng, D.Z. Liao, S.P. Yan, Z.H. Jiang, Inorg. Chem. 44, 911-920 (2005)
Chuasaard, K. Panyarat, P. Rodlamul, K. Chainok, S. Yimklan and A. Rujiwatra, Cryst. Growth Des. 17, 1045-1054 (2017)
Sharif, I.U. Khan, O. Şahin, S. Ahmad, O. Buyukgungor, S. Ali, J. Inorg. Organomet. Polym. Mater. 22, 1165-1173 (2012)
-Z. Du, Y.-Y. Wang, Y.-Y. Xie, H.-T. Li, T.-F. Liu, J. Mol. Struct. 1108, 96 (2016)
Hojnik, M. Kristl, A. Golobic, Z. Jaglicic, M. Drofenik, J. Mol. Struct. 1079, 54-60 (2015)
G. Huang, D.Q. Yuan, Y.Q. Gong, F.L. Jiang, M.C. Hong, J. Mol. Struct. 872, 99–104 (2008)
M. Chen, Q. Gao, D. Gao, D. Wang, Y. Li, W. Liu, W. Li, J. Coord. Chem. 66, 3829–3838 (2013)
Chen, W. Chen, Z. Ju, Q. Gao, T. Lei, W. Liu, Y.H. Li, D.D. Gao, W. Li, Dalton Trans. 42, 10495–10502 (2013)
Feng, J.-S. Zhao, L.-Y. Wang, X.-G. Shi, Inorg. Chem. Commun. 12, 388–391 (2009)
Zhao, X.-Q. Zhao, Z. Chen, W. Shi, P. Cheng, S.-P. Yan, D.-Z. Liao, CrystEngComm. 10, 1144–1146 (2008)
-Q. Zhao, Y. Zuo, D.-L. Gao, B. Zhao, W. Shi, P. Cheng, Cryst. Growth Des. 9, 3948–3957 (2009)
-Q. Zhao, B. Zhao, W. Shi, P. Cheng, CrystEngComm. 11, 1261–1269 (2009)
-Q. Zhao, P. Cui, B. Zhao, W. Shi, P. Cheng, Dalton Trans. 40, 805–819 (2011)
Sharif, O. Şahin, B. Khan, I.U. Khan, J. Coord. Chem. 68, 2725-2738 (2015)
Sharif, B. Khan, O. Şahin, I.U. Khan, Russ. J. Coord. Chem. 42, 56-65 (2016)
Sharif, I.U. Khan, O. Sahin, N. Jabeen, S. Ahmad, B. Khan, Z. Naturforsch. B. 74, 255-260 (2019)
Sharif, S. Zaheer, M.W. Mumtaz, O. Sahin, S. Ahmad, S. Zulfiqar, A. Adnan, I.U. Khan, Russ. J. Coord. Chem. 47, 95-104 (2021)
OriginPro 8 SRO, V 8.0724 (B724), Northampton MA 01060, USA. (originLab.com).
Blundell, D. Thouless, Magnetism in condensed matter, Oxford University Press: U.K., 29 (2001)
Want, F. Ahmad, P.N. Kotru. J. Alloys Comp. 448, L5-L6 (2008)
M. Sheldrick, Acta Cryst. A64, 112-122 (2008)
W. Betteridge, J.R. Carruthers, R.I. Cooper, K. Prout, D.J. Watkin. J. Appl. Cryst. 36, 1487 (2003)
J. Farrugia, J. Appl. Cryst. 32, 837-838 (1999)
F. Macrae, I.J. Bruno, J.A. Chisholm, P.R. Edgington, P. McCabe, E. Pidcock, P.A. Wood, J. Appl. Cryst. 41, 466-470 (2008)
L. Spek, J. Appl. Crystallogr. 26, 1-7 (2003)
Crystal and molecular structures program for Mac and Windows. CrystalMaker® software, Ltd, Oxford, England (crystalmaker.com).
J. Frisch, G.W. Trucks, H.B. Schlegel, G.E. Scuseria, M.A. Robb, J.R. Cheeseman, G. Scalmani, V. Barone, G.A. Petersson, H. Nakatsuji, X. Li, M. Caricato, A.V. Marenich, J. Bloino, B.G. Janesko, R. Gomperts, B. Mennucci, H.P. Hratchian, J.V. Ortiz, A.F. Izmaylov, J.L. Sonnenberg, D. Williams-Young, F. Ding, F. Lipparini, F. Egidi, J. Goings, B. Peng, A. Petrone, T. Henderson, D. Ranasinghe, V.G. Zakrzewski, J. Gao, N. Rega, G. Zheng, W. Liang, M. Hada, M. Ehara, K. Toyota, R. Fukuda, J. Hasegawa, M. Ishida, T. Nakajima, Y. Honda, O. Kitao, H. Nakai, T. Vreven, K. Throssell, J.A. Montgomery, Jr, J.E. Peralta, F. Ogliaro, M.J. Bearpark, J.J. Heyd, E.N. Brothers, K.N. Kudin, V.N. Staroverov, T.A. Keith, R. Kobayashi, J. Normand, K. Raghavachari, A.P. Rendell, J.C. Burant, S.S. Iyengar, J. Tomasi, M. Cossi, J.M. Millam, M. Klene, C. Adamo, R. Cammi, J.W. Ochterski, R.L. Martin, K. Morokuma, O. Farkas, J.B. Foresman, and D.J. Fox, Gaussian 16, revision B. 01, Gaussian, Inc.,Wallingford CT, (2016)
GaussView, Version 6.1.1, Roy Dennington, Todd Keith, and John Millam, Semichem Inc., Shawnee Mission, KS, 2019.
Paswan, A. Anjum, N. Yadav, N. Jaiswal, R.K.P. Singh, J. Coord. Chem. 73, 686-701 (2020)
D. Becke, Int. J. Quantum Chem. 36, 599-609. (1989)
Lee, W. Yang, R.G. Parr, Phys. Rev. B. 37, 785 (1988)
Andrae, U. Haeussermann, M. Dolg, H. Stoll, H. Preuss, Theor. Chim. Acta 77, 123-141 (1990)
Lu, F. Chen, J. Comput. Chem. 33, 580-592 (2012)
Humphrey, A. Dalke, K. Schulten, J. Mol. Graph. 14, 33-38 (1996)
R. Spackman, M.J. Turner, J.J. McKinnon, S.K. Wolff, D.J. Grimwood, D. Jayatilaka, M.A. Spackman, J. App. Crystallogr. 54 (2021)
Hojnik, M. Kristl, A. Golobic, Z. Jaglicic, M. Drofenik, Cent. Eur. J. Chem. 12, 220-226 (2014)
Hussain, I.U. Khan, M. Akkurt, S. Ahmad, M.N. Tahir, Russ. J. Coord. Chem. 40, 686-694 (2014)
-B. Bo, G.-X. Sun, D.-L. Geng, Inorg. Chem. 49, 561–571 (2009)
G. Reddy, N. Mamidi, C.P. Pradeep, CrystEngComm. 18, 4272-4276 (2016)
D. L. Carlin, Magnetochemistry, Springer Verlag, Berlin, (1986)
E. Reed, F. Weinhold, J. Chem. Phys. 78, 4066-4073 (1983)
S. Murray, P. Politzer, Wiley Interdiscip. Rev. Comput. Mol. Sci. 1, 153-163 (2011)
M.A. Spackman, D. Jayatilaka, CrystEngComm, 11, 19-32 (2009)
Channel Estimation Techniques for Diffusion-Based Molecular Communications
Vahid Jamali†, Arman Ahmadzadeh†, Christophe Jardin‡, Heinrich Sticht‡, and Robert Schober†
†Institute for Digital Communications, ‡Institute for Biochemistry
Friedrich-Alexander University (FAU), Erlangen, Germany
This paper has been submitted for presentation at the IEEE International Conference on Communications (ICC) 2016.
In molecular communication (MC) systems, the expected number of molecules observed at the receiver over time after the instantaneous release of molecules by the transmitter is referred to as the channel impulse response (CIR). Knowledge of the CIR is needed for the design of detection and equalization schemes. In this paper, we present a training-based CIR estimation framework for MC systems which aims at estimating the CIR based on the observed number of molecules at the receiver due to emission of a sequence of known numbers of molecules by the transmitter. In particular, we derive maximum likelihood (ML) and least sum of square errors (LSSE) estimators. We also study the Cramer Rao (CR) lower bound and training sequence design for the considered system. Simulation results confirm the analysis and compare the performance of the proposed estimation techniques with the CR lower bound.
Recent advances in biology, nanotechnology, and medicine have enabled the possibility of communication in nano/micrometer scale environments [1]. Thereby, employing molecules as information carriers, molecular communication (MC) has quickly emerged as a bio-inspired approach for man-made communication systems in such environments. In fact, calcium signaling among neighboring cells, the use of neurotransmitters for communication across the synaptic cleft of neurons, and the exchange of autoinducers as signaling molecules in bacteria for quorum sensing are among the many examples of MC in nature [1].
I-A Motivation
The design of any communication system crucially depends on the characteristics of the channel under consideration. In MC systems, the impact of the channel on the number of observed molecules can be captured by the channel impulse response (CIR) which is defined as the expected number of molecules counted at the receiver at time t after the instantaneous release of a known number of molecules by the transmitter at time t=0. The CIR, denoted by ¯c(t), can be used as the basis for the design of equalization and detection schemes for MC systems [2, 3, 4] . For diffusion-based MC, the released molecules move randomly according to Brownian motion which is caused by thermal vibration and collisions with other molecules in the fluid environment. Thereby, the average concentration of the molecules at a given coordinate a=[ax,ay,az] and at time t after release by the transmitter, denoted by ¯C(a,t), is governed by Fick's second law of diffusion [3] . Finding ¯C(a,t) analytically involves solving partial differential equations and depends on initial and boundary conditions. Therefore, one possible approach for determining the CIR, which is widely employed in the literature [4] , is to first derive a sufficiently accurate analytical expression for ¯C(a,t) for the considered MC channel from Fick's second law, and to subsequently integrate it over the receiver volume, Vrec, i.e.,
¯c(t) = ∭_{a ∈ V_rec} ¯C(a,t) da_x da_y da_z.   (1)
However, this approach may not be applicable in many practical scenarios as discussed in the following.
The CIR can be obtained based on (1) only for the special case of a fully transparent receiver where it is assumed that the molecules move through the receiver as if it was not present in the environment. The assumption of a fully transparent receiver is a valid approximation only for some particular scenarios where the interaction of the receiver with the molecules can be neglected. However, for general receivers, the relationship between the concentration ¯C(a,t) and the number of observed molecules ¯c(t) may not be as straightforward.
Solving the differential equation associated with Fick's second law is possible only for simple and idealistic environments. For example, assuming a point source located at the origin of an unbounded environment and impulsive molecule release, ¯C(a,t) is obtained as [4]
¯C(a,t) = N_TX / (4πDt)^(3/2) · exp(−|a|² / (4Dt))   [molecules/m³],   (2)
where NTX is the number of molecules released by the transmitter at t=0 and D is the diffusion coefficient of the signaling molecule. However, ¯C(a,t) cannot be obtained in closed form for most practical MC environments which may involve difficult boundary conditions, non-instantaneous molecule release, flow, etc. Additionally, as has been shown in [5] , the classical Fick's diffusion equation might even not be applicable in complex MC environments as physicochemical interactions with other objects in the channel, such as other molecules, cells, and microvessels, are not accounted for.
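For the idealized case in (2), a common further simplification for a small, fully transparent spherical receiver is to treat the concentration as uniform over the receiver volume, so that the integral in (1) reduces to ¯c(t) ≈ V_rec·¯C(a_rec, t). The sketch below illustrates this approximation; the numerical values (receiver radius, distance, diffusion coefficient, number of released molecules) are illustrative only, and Python is used just for demonstration:

import math

def concentration(d, t, N_tx, D):
    """Expected concentration (molecules/m^3) at distance d (m) and time t (s) after
    an impulsive release of N_tx molecules by a point source in an unbounded medium, Eq. (2)."""
    return N_tx / (4.0 * math.pi * D * t) ** 1.5 * math.exp(-d**2 / (4.0 * D * t))

def cir_transparent_receiver(d, t, N_tx, D, r_rec):
    """Approximate CIR c_bar(t) for a small transparent spherical receiver, obtained by
    treating the concentration as uniform over the receiver volume (cf. Eq. (1))."""
    V_rec = 4.0 / 3.0 * math.pi * r_rec**3
    return V_rec * concentration(d, t, N_tx, D)

# Illustrative numbers only
print(cir_transparent_receiver(d=500e-9, t=1e-3, N_tx=1e5, D=4.365e-10, r_rec=45e-9))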
Even if an expression for ¯C(a,t) can be obtained for a particular MC system, e.g. (2), it will be a function of several channel parameters such as the distance between the transmitter and the receiver and the diffusion coefficient. However, in practice, these parameters may not be known a priori and also have to be estimated [6, 7] . This complicates finding the CIR based on ¯C(a,t).
Fortunately, for most communication problems, including equalization and detection, only the expected number of molecules that the receiver observes at the sampling times is needed [3, 4]. Therefore, knowledge of how the average concentration is related to the channel parameters is not required, and hence, the difficulties associated with deriving ¯C(a,t) can be avoided by directly estimating the CIR. Analytical expressions for the CIR for specific assumptions for the transmitter, channel, and receiver are available in the literature. For example, the CIR for an unbounded environment and a fully absorbing receiver is given in [8]. However, for general channel environments and receivers, a simple closed-form expression for the expected number of observed molecules ¯c(t) may not exist. Even if such an expression can be derived, it is only valid for a particular MC environment and is still a function of several unknown parameters. Motivated by the above discussion, our goal in this paper is to develop a general CIR estimation framework for MC systems which is not limited to a particular MC channel model or a specific receiver type and does not require knowledge of the channel parameters.
I-B Related Work
In most existing works on MC, the CIR is assumed to be perfectly known for receiver design [2, 3, 4, 9] . In the following, we review the relevant MC literature that focused on channel characterization. Estimation of the distance between a transmitter and a receiver was studied in [6, 7] for diffusive MC. In [10] , an end-to-end mathematical model, including transmitter, channel, and receiver, was presented, and in [11] , a stochastic channel model was proposed for flow-based and diffusion-based MC. For active transport MC, a Markov chain channel model was derived in [12] . Additionally, a unifying model including the effects of external noise sources and inter-symbol interference (ISI) was proposed for diffusive MC in [13] . In [14] , the authors analyzed a microfluidic MC channel, propagation noise, and channel memory. However, the focus of [6, 7, 10, 11, 12, 13, 14] is either channel modeling or the estimation of channel parameters, i.e., the obtained results are not directly applicable to CIR acquisition.
In contrast to MC, for conventional wireless communication, there is a rich literature on channel estimation, mainly for linear channel models and impairment by additive white Gaussian noise (AWGN), see [15, 16] , and the references therein. Channel estimation was also studied for non-linear and/or non-AWGN channels especially in optical communication. For example, for a photon-counting receiver, a linear time-invariant channel model with Poisson noise was considered in [17] and a non-linear channel model with Poisson noise was investigated in [18] . However, the MC channel model considered in this paper is neither linear nor impaired by AWGN and is also different from that in [18] . Therefore, the results known from conventional wireless communication are not directly applicable to MC.
I-C Contributions
In contrast to [6, 9, 8, 7, 10, 11, 12, 13, 14] , in this paper, we directly estimate the CIR based on the channel output, i.e., the number of molecules observed at the receiver. To the best of the authors' knowledge, this problem has not been studied in the MC literature, yet. In particular, we present a training-based CIR estimation framework which aims at estimating the CIR based on the detected number of molecules at the receiver due to the emission of a sequence of known numbers of molecules by the transmitter. To this end, we first derive the optimal maximum likelihood (ML) CIR estimator. Subsequently, we obtain the suboptimal least sum of square errors (LSSE) CIR estimator which entails a lower computational complexity than the ML estimator. Additionally, we derive the Cramer Rao (CR) bound which constitutes a lower bound on the estimation error variance of any unbiased estimator. We also study training sequence design for the considered MC system. Simulation results confirm the analysis and evaluate the performance of the proposed estimation techniques with respect to the CR lower bound.
Notations: We use the following notations throughout this paper: E_x{⋅} denotes expectation with respect to random variable (RV) x and [x]⁺ = max{0, x}. Bold capital and small letters are used to denote matrices and vectors, respectively. 1 and 0 are vectors whose elements are all ones and zeros, respectively, A^T denotes the transpose of A, ∥a∥ represents the norm of vector a, [A]_mn denotes the element in the m-th row and n-th column of matrix A, tr{A} is the trace of matrix A, diag{a} denotes a diagonal matrix with the elements of vector a on its main diagonal, vdiag{A} is a vector which contains the diagonal entries of matrix A, eig{A} is the set of eigenvalues of matrix A, A ⪰ 0 denotes a positive semidefinite matrix A, and a ≥ 0 means that all the elements of vector a are non-negative. Additionally, Poiss(λ) denotes a Poisson RV with mean λ, and Bin(n,p) denotes a binomial RV for n trials and success probability p.
II Problem Formulation
In this section, we first present the considered MC channel model, and subsequently, formulate the CIR estimation problem.
II-A System Model
We consider an MC system consisting of a transmitter, a channel, and a receiver. At the beginning of each symbol interval, the transmitter releases either NTX or zero molecules, i.e., ON-OFF keying is performed. In this paper, we assume that the transmitter emits only one type of molecule. The released molecules propagate through the medium between the transmitter and the receiver. We assume that the movements of individual molecules are independent from each other. The receiver counts the number of observed molecules in each symbol interval. We note that this is a rather general model for the MC receiver which includes well-known receivers such as the transparent receiver [4] and the absorbing receiver [8] .
Due to the memory of the MC channel, inter-symbol interference (ISI) occurs [13, 14] . In particular, ISI-free communication is only possible if the symbol intervals are chosen sufficiently large such that the CIR fully decays to zero within one symbol interval which severely limits the transmission rate and results in an inefficient MC system design. Therefore, taking into account the effect of ISI, we assume the following input-output relation for the MC system
r[k] = ∑_{l=1}^{L} c_l[k] + c_n[k],   (3)
where r[k] is the number of molecules detected at the receiver in symbol interval k, L is the number of memory taps of the channel, and c_l[k] is the number of molecules observed at the receiver in symbol interval k due to the release of s[k−l+1]·N_Tx molecules by the transmitter in symbol interval k−l+1, where s[k] ∈ {0,1} holds. Thereby, c_l[k] can be well approximated by a Poisson RV with mean ¯c_l s[k−l+1], i.e., c_l[k] ∼ Poiss(¯c_l s[k−l+1]), see [2, 3]. Moreover, c_n[k] is the number of external noise molecules detected by the receiver in symbol interval k but not released by the transmitter. Noise molecules may originate from interfering sources which employ the same type of molecule as the considered MC system. Hence, c_n[k] can also be modeled as a Poisson RV, i.e., c_n[k] ∼ Poiss(¯c_n), where ¯c_n = E{c_n[k]}.
From a probabilistic point of view, we can assume that each molecule released by the transmitter in symbol interval k−l+1 is observed at the receiver in symbol interval k with a certain probability, denoted by p_l. Thereby, the probability that n molecules are observed at the receiver in symbol interval k due to the emission of N_Tx molecules in symbol interval k−l+1 follows a binomial distribution, i.e., n ∼ Bin(N_Tx, p_l). Moreover, assuming N_Tx → ∞ while N_Tx·p_l ≜ ¯c_l is fixed, the binomial distribution Bin(N_Tx, p_l) converges to the Poisson distribution Poiss(¯c_l) [19]. This is a valid assumption in MC since the number of released molecules is often very large to ensure that a sufficient number of molecules reaches the receiver. The same reasoning applies to the noise molecules.
Unlike the conventional linear input-output model for channels with memory in wireless communication systems [15, 16], the channel model in (3) is not linear since s[k−l+1] does not affect the observation r[k] directly but via the Poisson RV c_l[k]. However, the expectation of the received signal is linearly dependent on the transmitted signal, i.e.,
¯r[k] = E{r[k]} = ∑_{l=1}^{L} ¯c_l s[k−l+1] + ¯c_n.   (4)
We note that for a given s[k], in general, the actual number of molecules observed at the receiver, r[k], will differ from the expected number of observed molecules, ¯r[k], due to the intrinsic noisiness of diffusion.
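A minimal simulation sketch of the observation model in (3)-(4); the tap values, noise mean and training sequence below are illustrative placeholders, and the Poisson draws use numpy purely for demonstration:

import numpy as np

def simulate_observations(s, c_bar, c_n_bar, rng=None):
    """Simulate received counts r[k] according to (3): a sum of independent Poisson terms
    with means c_bar[l]*s[k-l+1] (1-based notation) plus external noise with mean c_n_bar."""
    rng = np.random.default_rng() if rng is None else rng
    s = np.asarray(s, dtype=float)
    L, K = len(c_bar), len(s)
    r = np.zeros(K, dtype=int)
    for k in range(K):          # 0-based index k corresponds to symbol interval k+1
        for l in range(L):
            if k - l >= 0:
                r[k] += rng.poisson(c_bar[l] * s[k - l])
        r[k] += rng.poisson(c_n_bar)
    return r

s = [1, 1, 0, 0, 1, 0, 0, 1, 0, 1]   # example ON-OFF training sequence
c_bar = [8.0, 4.0, 1.5]              # illustrative channel taps (L = 3)
print(simulate_observations(s, c_bar, c_n_bar=0.5))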
II-B CIR Estimation Problem
Let s = [s[1], s[2], …, s[K]]^T be a training sequence of length K. Here, we assume continuous transmission. Therefore, in order to ensure that the received signal is only affected by the training sequence s and not by the transmissions in previous symbol intervals, we only employ r[k], k ≥ L, for CIR estimation. Thereby, the K−L+1 samples used for CIR estimation are given by
r[L]   = Poiss(¯c_1 s[L])   + Poiss(¯c_2 s[L−1]) + ⋯ + Poiss(¯c_L s[1])     + Poiss(¯c_n)   (5)
r[L+1] = Poiss(¯c_1 s[L+1]) + Poiss(¯c_2 s[L])   + ⋯ + Poiss(¯c_L s[2])     + Poiss(¯c_n)   (6)
  ⋮
r[K]   = Poiss(¯c_1 s[K])   + Poiss(¯c_2 s[K−1]) + ⋯ + Poiss(¯c_L s[K−L+1]) + Poiss(¯c_n).  (7)
For convenience of notation, we define r = [r[L], r[L+1], …, r[K]]^T and ¯c = [¯c_1, ¯c_2, …, ¯c_L, ¯c_n]^T, and f_r(r|¯c,s) is the probability density function (PDF) of observation r conditioned on a given channel ¯c and a given training sequence s. We assume that the CIR ¯c (with a slight abuse of notation, in the following, we refer to vector ¯c as the CIR although ¯c also contains the mean of the noise ¯c_n) remains unchanged for a sufficiently large block of symbol intervals during which CIR estimation and data transmission are performed. However, the CIR may change from one block to the next due to, e.g., a change in the distance between transmitter and receiver. To summarize, in each block, the stochastic model in (3) is characterized by ¯c and our goal in this paper is to estimate ¯c based on the vector of random observations r.
III CIR Estimation
In this section, we derive the ML and LSSE estimators as well as the CR lower bound for CIR estimation in MC.
III-A ML CIR Estimation
The ML CIR estimator chooses the CIR which maximizes the likelihood of observation vector r [19] . In particular, the ML estimator is given by
^¯c_ML = argmax_{¯c ≥ 0} f_r(r|¯c,s).   (8)
We assume that the observations in different symbol intervals are independent, i.e., r[k] is independent of r[k′] for k ≠ k′. This assumption is valid in practice if the time interval between two consecutive samples is sufficiently large, see [3] for a detailed discussion. Moreover, from (3), we observe that r[k] is a sum of Poisson RVs. Hence, r[k] is also a Poisson RV with its mean equal to the sum of the means of the summands, i.e., r[k] ∼ Poiss(¯r[k]) with ¯r[k] = ¯c^T s_k and s_k = [s[k], s[k−1], …, s[k−L+1], 1]^T. Therefore, f_r(r|¯c,s) is given by
f_r(r|¯c,s) = ∏_{k=L}^{K} (¯c^T s_k)^{r[k]} exp(−¯c^T s_k) / r[k]!.   (9)
Maximizing fr(r|¯c,s) is equivalent to maximizing ln(fr(r|¯c,s)) since ln(⋅) is a monotonically increasing function. Hence, the ML estimate can be rewritten as
^¯c_ML = argmax_{¯c ≥ 0} g(¯c),  where   (10)
g(¯c) ≜ ∑_{k=L}^{K} [ −¯c^T s_k + r[k] ln(¯c^T s_k) ].
To present the solution of the above optimization problem rigorously, we first define some auxiliary variables. Let A = {A_1, A_2, …, A_N} denote a set which contains all possible N = 2^(L+1) − 1 subsets of set F = {1, 2, ⋯, L, n} except the empty set. Here, A_n, n = 1, 2, …, N, denotes the n-th subset of A. Moreover, let ¯c^(A_n) and s_k^(A_n) denote reduced-dimension versions of ¯c and s_k, respectively, which contain only the elements of ¯c and s_k whose indices are in set A_n.
Lemma 1
The ML estimator of the CIR for the considered MC channel is given by Algorithm 1, where the following non-linear system of equations is solved for different A_n (the system of nonlinear equations in (11) can be solved using standard mathematical software packages such as Mathematica):
∑_{k=L}^{K} [ r[k] / ((¯c^(A_n))^T s_k^(A_n)) − 1 ] s_k^(A_n) = 0.   (11)
Proof:
The problem in (10) is a convex optimization problem in variable ¯c because g(¯c) is a concave function in ¯c and the feasible set ¯c≥0 is linear in ¯c. In particular, ln(¯cTsk) is concave since ¯cTsk is affine and the log-function is concave [20, Chapter 3] . Therefore, g(¯c) is a sum of weighted concave terms r[k]ln(¯cTsk) and affine terms ¯cTsk which in turn yields a concave function [20, Chapter 3] . For the constrained convex problem in (10), the optimal solution falls into one of the following two categories:
Stationary Point: In this case, the optimal solution is found by taking the derivative of g(¯c) with respect to ¯c and setting it to zero; with ¯c^F = ¯c and s_k^F = s_k, this leads to (11) for A_n = F. Note that this stationary point is the global optimal solution of the unconstrained version of the problem in (10), i.e., when constraint ¯c ≥ 0 is dropped. Therefore, if ¯c^F is in the feasible set, i.e., ¯c^F ≥ 0 holds, it is also the optimal solution of the constrained problem in (10) and hence, we obtain ^¯c_ML = ¯c^F.
Boundary Point: In this case, for the optimal solution, some of the elements of ¯c are zero. Since it is not a priori known which elements are zero, we have to consider all possible cases. To do so, we use auxiliary variables ¯cAn and sAnk where set An specifies the indices of the non-zero elements of ¯c. For a given An, we formulate a new problem by substituting ¯cAn and sAnk for ¯c and sk in (10), respectively. The solution of the new problem is now a stationary point not a boundary point since a boundary point implies that some of the elements of ¯cAn are zero which yields a contradiction because we assumed that ¯cAn includes the non-zero elements of ¯c. The stationary point of the new problem can be found by taking the derivative of g(¯cAn) with respect to ¯cAn which leads to (11). Here, if ¯cAn≥0 does not hold, we discard ¯cAn, otherwise, it is a candidate for the optimal solution. Therefore, we construct the candidate ML CIR estimate, denoted by ^¯cCAN, such that the elements of ^¯cCAN whose indices are in An are equal to the values of the corresponding elements in ¯cAn and the remaining elements are equal to zero. The resulting ^¯cCAN is saved in the candidate set C. Finally, the ML estimate, ^¯cML, is given by that ^¯cCAN in set C which maximizes g(¯c).
The above results are concisely summarized in Algorithm 1, which concludes the proof. ∎
Algorithm 1  ML/LSSE CIR Estimate ^¯c_ML / ^¯c_LSSE
  Initialize A_n = F and solve (11)/(14) to find ¯c^F
  if ¯c^F ≥ 0 then
    Set ^¯c_ML = ¯c^F / ^¯c_LSSE = ¯c^F
  else
    for all A_n ≠ F do
      Solve (11)/(14) to find ¯c^(A_n)
      if ¯c^(A_n) ≥ 0 holds then
        Set the values of the elements of ^¯c_CAN, whose indices are in A_n, equal to the values of the corresponding elements in ¯c^(A_n) and the remaining elements equal to zero;
        Save ^¯c_CAN in the candidate set C
      else
        Discard ¯c^(A_n)
      end if
    end for
    Choose ^¯c_ML / ^¯c_LSSE equal to that ^¯c_CAN in the candidate set C which maximizes g(¯c) / minimizes ∥ϵ∥²
  end if
Let us assume a priori that all L taps and the noise mean are non-zero, i.e., ¯c > 0. Thereby, the consistency property of ML estimation [19, Chapter 4] implies that under some regularity conditions, notably that the likelihood is a continuous function of ¯c and that ¯c is not on the boundary of the parameter set ¯c ≥ 0, we obtain E{^¯c_ML} → ¯c as K → ∞. In other words, the ML estimator is asymptotically unbiased. Therefore, for large values of K, the ML estimator becomes sufficiently accurate such that none of the elements of ^¯c_ML is zero. In this case, Algorithm 1 reduces to directly solving (11) for A_n = F.
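As a rough illustration of the ML estimator, the sketch below maximizes g(¯c) from (10) under the non-negativity constraint using bounded quasi-Newton optimization rather than the exact subset enumeration of Algorithm 1; when all entries of the solution are strictly positive, this corresponds to the stationary point case discussed above. The design matrix, observations and starting point are illustrative assumptions:

import numpy as np
from scipy.optimize import minimize

def ml_estimate(r, S, eps=1e-12):
    """Approximate ML CIR estimate: maximize g(c) = sum_k [-c^T s_k + r[k]*ln(c^T s_k)]
    subject to c >= 0 (cf. Eq. (10)), via bounded numerical optimization."""
    r = np.asarray(r, dtype=float)
    S = np.asarray(S, dtype=float)      # rows are s_k = [s[k], ..., s[k-L+1], 1]
    def neg_g(c):
        m = S @ c + eps                 # Poisson means c^T s_k, kept strictly positive
        return np.sum(m - r * np.log(m))
    c0 = np.full(S.shape[1], 1.0)       # simple positive starting point
    res = minimize(neg_g, c0, bounds=[(0.0, None)] * S.shape[1], method="L-BFGS-B")
    return res.x

# Tiny example: 5 observations, one channel tap plus the noise mean (last column of S is all ones)
S = np.array([[1, 1], [0, 1], [1, 1], [1, 1], [0, 1]], dtype=float)
r = np.array([9, 1, 10, 8, 2], dtype=float)
print(ml_estimate(r, S))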
III-B LSSE CIR Estimation
The LSSE CIR estimator chooses that ¯c which minimizes the sum of the square errors for the observation vector r. Thereby, the error vector is defined as ϵ = r − E{r} = r − S¯c, where S = [s_L, s_{L+1}, …, s_K]^T. In particular, the LSSE CIR estimate can be written as
^¯c_LSSE = argmin_{¯c ≥ 0} ∥ϵ∥² = argmin_{¯c ≥ 0} ∥r − S¯c∥².   (12)
The square of the norm of the error vector is obtained as
∥ϵ∥² = tr{ϵ ϵ^T} = tr{(r − S¯c)(r − S¯c)^T}   (13)
     = tr{S^T S ¯c ¯c^T} − 2 tr{r^T S ¯c} + tr{r r^T},
where we used the following properties of the trace: tr{A} = tr{A^T} and tr{AB} = tr{BA} [21]. The LSSE estimate is given in the following lemma, where we use the auxiliary matrix S^(A_n) = [s_L^(A_n), s_{L+1}^(A_n), …, s_K^(A_n)]^T.
The LSSE estimator of the CIR for the considered MC channel is given by Algorithm 1, where for a given set A_n, ¯c^(A_n) is obtained as
¯c^(A_n) = ((S^(A_n))^T S^(A_n))^(−1) (S^(A_n))^T r.   (14)
The optimization problem in (12) is convex since ∥ϵ∥² is quadratic in variable ¯c, S^T S ⪰ 0 holds, and the feasibility set ¯c ≥ 0 is linear in ¯c [20, Chapter 4]. Hence, the constrained convex problem in (12) can be solved using a similar methodology as was used to find the ML estimate in Lemma 1. This leads to Lemma 2. ∎
The LSSE estimator in fact employs a linear filter to compute ¯c^(A_n), i.e., ¯c^(A_n) = F^(A_n) r where F^(A_n) = ((S^(A_n))^T S^(A_n))^(−1) (S^(A_n))^T. Moreover, since the training sequence s is fixed, matrix F^(A_n) can be calculated offline and then be used for online CIR estimation. Therefore, the calculation of ^¯c^(A_n) for the LSSE estimator in (14) is considerably less computationally complex than the computation of ^¯c^(A_n) for the ML estimator in (11), which requires solving a system of nonlinear equations.
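Since (12) is exactly a non-negative least-squares problem, one possible sketch simply delegates it to an off-the-shelf NNLS solver; when the non-negativity constraint is inactive, this reduces to the linear filter noted above. The small design matrix and observation vector are illustrative assumptions:

import numpy as np
from scipy.optimize import nnls

def lsse_estimate(r, S):
    """LSSE CIR estimate: minimize ||r - S c||^2 subject to c >= 0 (cf. Eq. (12))."""
    c_hat, _residual = nnls(np.asarray(S, dtype=float), np.asarray(r, dtype=float))
    return c_hat

# Tiny example: 5 observations, one channel tap plus the noise mean (last column of S is all ones)
S = np.array([[1, 1], [0, 1], [1, 1], [1, 1], [0, 1]], dtype=float)
r = np.array([9, 1, 10, 8, 2], dtype=float)
print(lsse_estimate(r, S))   # approximately [tap, noise mean]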
III-C CR Lower Bound
The CR bound is a lower bound on the variance of any unbiased estimator of a deterministic parameter [19] . In particular, under some regularity conditions, the covariance matrix of any unbiased estimate of parameter ¯c, denoted by C(^¯c), satisfies
C(^¯c) − I^(−1)(¯c) ⪰ 0,   (15)
where I(¯c) is the Fisher information matrix of parameter vector ¯c where the elements of I(¯c) are given by
[I(¯c)]_{i,j} = −E_{r|¯c} { ∂² ln f_r(r|¯c,s) / (∂¯c[i] ∂¯c[j]) }.   (16)
We note that for a positive semidefinite matrix, the diagonal elements are non-negative, i.e., [C(^¯c) − I^(−1)(¯c)]_{i,i} ≥ 0. Therefore, for an unbiased estimator, i.e., when E{^¯c} = ¯c holds, with the estimation error vector defined as e = ¯c − ^¯c, the CR bound provides the following lower bound on the sum of the expected square errors
E_{r|¯c}{∥e∥²} ≥ tr{I^(−1)(¯c)} = tr{ [ ∑_{k=L}^{K} s_k s_k^T / (¯c^T s_k) ]^(−1) }.   (17)
We note that the ML and LSSE estimators in Algorithm 1 are biased in general. Hence, the error variances of the ML and LSSE estimates may fall below the CR bound. However, as K→∞, the ML and LSSE estimators become asymptotically unbiased, cf. Remark 2, and the CR bound becomes a valid lower bound. The asymptotic unbiasedness of the proposed estimators is also numerically verified in Section V, cf. Fig. 1.
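A minimal sketch evaluating the bound in (17) for a given true ¯c and design matrix; the numbers are illustrative assumptions:

import numpy as np

def cr_bound(c_bar, S):
    """Cramer-Rao lower bound tr{I^-1(c_bar)} on the sum of error variances, Eq. (17).
    Rows of S are the vectors s_k; c_bar stacks the L channel taps and the noise mean."""
    S = np.asarray(S, dtype=float)
    c_bar = np.asarray(c_bar, dtype=float)
    means = S @ c_bar                      # c_bar^T s_k for every k
    I_fisher = (S.T / means) @ S           # sum_k s_k s_k^T / (c_bar^T s_k)
    return float(np.trace(np.linalg.inv(I_fisher)))

S = np.array([[1, 1], [0, 1], [1, 1], [1, 1], [0, 1]], dtype=float)
print(cr_bound([8.0, 1.0], S))   # lower bound on E{||e||^2} for this toy setup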
IV Training Sequence Design
In the following, we present two different training sequence designs for CIR estimation in MC systems.
IV-A LSSE-Based Training Sequence Design
We first consider a training sequence design which minimizes an upper bound on the average estimation error for the LSSE estimator. First, we note that for training sequence design, the estimation error has to be averaged over both r and ¯c since both are unknown, and hence, have to be modeled as RVs. Again, we assume a priori that all L taps and the noise mean are non-zero. Therefore, neglecting the information that ¯c ≥ 0 has to hold in (12) yields an upper bound on the estimation error for the LSSE estimator. This upper bound is adopted here for the problem of sequence design since the solution of (12) after dropping constraint ¯c ≥ 0 lends itself to an elegant closed-form solution for the estimated CIR given by ^¯c_LSSE,up = (S^T S)^(−1) S^T r, which can be used as the basis for either a computer-based search or even a systematic approach to find good training sequences. Moreover, this upper bound is tight as K → ∞ since ^¯c_LSSE > 0 holds and we obtain ^¯c_LSSE = ^¯c_LSSE,up. In Fig. 3, we show numerically that even for short sequence lengths, this upper bound is not loose.
Defining the estimation error as e_LSSE,up = ¯c − ^¯c_LSSE,up, the expected square error norm is obtained as
E_{r,¯c}{∥e_LSSE,up∥²}
  = E_{r,¯c}{ tr{ (¯c − (S^T S)^(−1) S^T r)(¯c − (S^T S)^(−1) S^T r)^T } }
  = E_{r,¯c}{ tr{ (S^T S)^(−1) S^T r r^T S (S^T S)^(−1) } − 2 tr{ ¯c r^T S (S^T S)^(−1) } + tr{ ¯c ¯c^T } }.   (18)
Next, we calculate the expectation over (r,¯c) in (18) in two steps, first with respect to r conditioned on ¯c and then with respect to ¯c. To this end, we use a standard trace identity, which is valid for general matrices A, B, and X, and E_x{x x^T} = λλ^T + diag{λ}, which is valid for multivariate Poisson random vectors x with covariance matrix C(x) = diag{λ}. Hence, E{∥e_LSSE,up∥²} can be calculated as
E_¯c E_{r|¯c}{∥e_LSSE,up∥²}
  = E_¯c{ tr{ (S^T S)^(−1) S^T S ¯c ¯c^T S^T S (S^T S)^(−1) } − 2 tr{ ¯c ¯c^T S^T S (S^T S)^(−1) } + tr{ ¯c ¯c^T }
    + tr{ (S^T S)^(−1) S^T diag{S¯c} S (S^T S)^(−1) } }
  = tr{ S^T vdiag{ S (S^T S)^(−2) S^T } μ_¯c^T },   (19)
where μ_¯c = E_¯c{¯c}.
The evaluation of the expression in (19) can be numerically challenging due to the required inversion of matrix S^T S, especially when one of the eigenvalues of S^T S is close to zero. One way to cope with this problem is to eliminate all sequences resulting in close-to-zero eigenvalues for matrix S^T S during the search. Formally, we can adopt the following search criterion for training sequence design
s* = argmin_{s ∈ 𝒮} tr{ S^T vdiag{ S (S^T S)^(−2) S^T } μ_¯c^T },   (20)
where 𝒮 = {s | |x| > ε, ∀x ∈ eig{S^T S}} and ε is a small number which guarantees that the eigenvalues of matrix S^T S are not close to zero, e.g., in Section V, we choose ε = 10⁻⁹.
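A brute-force sketch of this design criterion, searching all binary sequences of length K and skipping any candidate whose S^T S has a near-zero eigenvalue (the role of ε above); the choice μ_¯c = 1 (all-ones) and the small K and L values are illustrative assumptions, since the exhaustive 2^K search is only feasible for short sequences:

import itertools
import numpy as np

def build_S(s, L):
    """Design matrix whose rows are s_k = [s[k], s[k-1], ..., s[k-L+1], 1] for k = L, ..., K (1-based)."""
    s = np.asarray(s, dtype=float)
    return np.array([np.concatenate((s[k - L:k][::-1], [1.0])) for k in range(L, len(s) + 1)])

def lsse_design_cost(S, mu_c, eig_eps=1e-9):
    """Search criterion of Eq. (20); returns +inf if S^T S has a near-zero eigenvalue."""
    G = S.T @ S
    if np.min(np.abs(np.linalg.eigvalsh(G))) <= eig_eps:
        return np.inf
    G_inv = np.linalg.inv(G)
    v = np.diag(S @ G_inv @ G_inv @ S.T)      # vdiag{ S (S^T S)^-2 S^T }
    return float(mu_c @ (S.T @ v))            # equals tr{ S^T vdiag{.} mu_c^T }

def search_training_sequence(K, L, mu_c):
    """Exhaustive search over all binary sequences of length K (only feasible for small K)."""
    best_s, best_cost = None, np.inf
    for bits in itertools.product((0, 1), repeat=K):
        cost = lsse_design_cost(build_S(bits, L), mu_c)
        if cost < best_cost:
            best_s, best_cost = bits, cost
    return best_s, best_cost

print(search_training_sequence(K=8, L=2, mu_c=np.ones(3)))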
IV-B ISI-Free Training Sequence Design
One simple approach to estimate the CIR is to construct a training sequence such that ISI is avoided during estimation. In this case, in each symbol interval, the receiver will observe molecules which have been released by the transmitter in only one symbol interval and not in multiple symbol intervals. To this end, the transmitter releases NTx molecules every L+1 symbol intervals and remains silent for the rest of the symbol intervals. In particular, the sequence s is constructed as follows:
s[k] = { 1, if (k − k0)/(L+1) ∈ ℤ;  0, otherwise }   (21)
where k∈{1,…,K}, and k0 is the index of the first symbol interval in which the transmitter releases molecules. Moreover, for this training sequence, the CIR can be straightforwardly estimated as
^¯c_l^ISIF = [ (1/|K_l|) ∑_{k∈K_l} r[k] − ^¯c_n^ISIF ]⁺,   (22)
^¯c_n^ISIF = (1/|K_n|) ∑_{k∈K_n} r[k],   (23)
where K_l = {k | (k − k0 − l + 1)/(L+1) ∈ ℤ ∧ k ∈ {1,…,K}}, K_n = {k | (k − k0 − L)/(L+1) ∈ ℤ ∧ k ∈ {1,…,K}}, and [⋅]⁺ is needed to ensure that all estimated channel taps are non-negative, i.e., ^¯c^ISIF ≥ 0 holds.
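The sketch below constructs the ISI-free sequence of (21) and applies the estimator in (22)-(23): observations in the noise-only intervals are averaged to obtain the noise mean, and the intervals associated with each tap are averaged and clipped at zero after subtracting the noise estimate. The Python indexing conventions and the made-up counts in the example are illustrative assumptions:

import numpy as np

def isi_free_sequence(K, L, k0=1):
    """ISI-free training sequence of Eq. (21): release every L+1 intervals, starting at k0 (1-based)."""
    return np.array([1 if (k - k0) % (L + 1) == 0 else 0 for k in range(1, K + 1)])

def isi_free_estimate(r, L, k0=1):
    """ISI-free CIR estimate following Eqs. (22)-(23); returns [c_1, ..., c_L, c_n]."""
    r = np.asarray(r, dtype=float)
    ks = np.arange(1, len(r) + 1)                       # 1-based symbol interval indices
    c_n_hat = r[(ks - k0 - L) % (L + 1) == 0].mean()    # noise-only intervals (set K_n)
    c_hat = np.empty(L + 1)
    for l in range(1, L + 1):                           # intervals associated with tap l (set K_l)
        tap_mean = r[(ks - k0 - l + 1) % (L + 1) == 0].mean()
        c_hat[l - 1] = max(tap_mean - c_n_hat, 0.0)     # [.]^+ clipping
    c_hat[L] = c_n_hat
    return c_hat

print(isi_free_sequence(K=12, L=2))                     # [1 0 0 1 0 0 1 0 0 1 0 0]
r = np.array([9, 4, 1, 8, 3, 0, 10, 5, 1, 9, 4, 2])     # made-up counts for this sequence
print(isi_free_estimate(r, L=2))                        # approximately [8.0, 3.0, 1.0]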
V Performance Evaluation
In this section, we evaluate the performances of the different estimation techniques and training sequence designs developed in this paper. For simplicity, for the results provided in this section, we generate the CIR ¯c based on (1) and (2). However, we emphasize that the proposed estimation framework is not limited to the particular channel and receiver models assumed in (1) and (2). We use (1) and (2) only to obtain a ¯c which is representative of a typical CIR in MC. In particular, we assume a point source with impulsive molecule release and N_Tx = 10⁵, a fully transparent spherical receiver with radius 45 nm, and an unbounded environment with D = 4.365 × 10⁻¹⁰ m²/s [9]. Additionally, we assume that the distance between the transmitter and the receiver is given by |a| = |¯a| + ã nm, where |¯a| = 500 nm and ã is a RV uniformly distributed in the interval [−â, â]. The receiver counts the number of molecules once per symbol interval at time T_smp = argmax_t ¯C(¯a,t) after the beginning of the symbol interval, where ¯C(¯a,t) is computed based on (2). The noise mean is chosen as . Furthermore, the symbol duration and the number of taps L are chosen such that ¯c_{L+1} < 0.1 ¯c_1.
In order to compare the performances of the considered estimators quantitatively, we define the normalized mean and variance of the estimation error e=^¯c−¯c as
Mean_e = ∥E{e}∥² / ∥E{¯c}∥²   and   (24)
Var_e = ( E{∥e∥²} − ∥E{e}∥² ) / ∥E{¯c}∥²,   (25)
respectively. In Fig. 1, we show the normalized mean of the estimation error, Mean_e, in dB vs. the training sequence length, K, for L ∈ {1,3,5}. The training sequences are constructed by concatenating n copies of the binary sequence [1100100101] of length 10, i.e., K = 10n. Furthermore, for clarity of presentation, we assume â = 0, which corresponds to a time-invariant environment with deterministic CIR. The results reported in Fig. 1 are Monte Carlo simulations where each point of the curves is obtained by averaging over 10⁶ random realizations of observation vector r. We observe that the normalized error mean decreases as the sequence length increases. Therefore, the ML and LSSE estimators are biased for short sequence lengths but as the sequence length increases, both the ML and LSSE estimators become asymptotically unbiased, i.e., E{^¯c} → ¯c as K → ∞. Furthermore, from Fig. 1, we observe that the error mean increases as the number of channel taps increases.
In Fig. 2, we show the normalized estimation error variance, Var_e, in dB vs. the training sequence length, K, for L ∈ {1,3,5}. The parameters used in Fig. 2 are identical to those used in Fig. 1. As expected, the variance of the estimation error decreases with increasing training sequence length. Moreover, for L ∈ {3,5}, we observe that the variance of the estimation error for the LSSE estimator is slightly higher than that for the ML estimator, whereas for L = 1, the variance of the estimation error for the LSSE estimator coincides with that of the ML estimator. These results suggest that the simple LSSE estimator provides a favorable complexity-performance tradeoff for CIR estimation in the considered MC system. For short sequence lengths, the variances of the ML and LSSE estimators can even be lower than the CR bound as these estimators are biased and the CR bound is a valid lower bound only for unbiased estimators, see Fig. 1. However, as K increases, both the ML and LSSE estimators become asymptotically unbiased, see Fig. 1. Fig. 2 shows that, for large K, the error variance of the ML estimator coincides with the CR bound and the error variance of the LSSE estimator is very close to the CR bound. We note that for the adopted training sequence, the matrix inversion required in (17) cannot be computed for K = 10 and L = 5 since matrix ∑_{k=L}^{K} s_k s_k^T / (¯c^T s_k) has one zero eigenvalue. Therefore, we do not report the value of the CR bound for this case in Fig. 2.
Fig. 1: Normalized estimation error mean, Mean_e, in dB vs. the training sequence length, K, for L ∈ {1,3,5}.
Fig. 2: Normalized estimation error variance, Var_e, in dB vs. the training sequence length, K, for L ∈ {1,3,5}.
Next, we investigate the performances of the optimal and ISI-free training sequence designs developed in Section IV. Here, we employ a computer-based search to find the optimal sequence based on the criterion in (20), where ε = 10⁻⁹. We consider short sequence lengths, i.e., K ≤ 20, due to the exponential increase of the computational complexity of the exhaustive search with respect to the sequence length. Moreover, since there are L+1 unknown parameters, we require at least L+1 observations for estimation, i.e., K − L + 1 ≥ L + 1 or equivalently K ≥ 2L. In Table I, we present the optimal sequences obtained for L ∈ {1,2,3,4,5}, K ∈ {10,16}, and â = 100 nm. We note that the optimal sequence which is obtained from (20) is not unique and only one of the optimal sequences is shown in Table I for each value of K and L. The optimal sequences shown in blue font in Table I are identical to the ISI-free sequences proposed in (21). In particular, for L = 1, the optimal sequences for both K = 10 and 16 are ISI-free, whereas for L > 1, none of the optimal sequences is ISI-free.
In Fig. 3, we show the normalized LSSE estimation error variance, Var_e, in dB vs. the training sequence length, K, for L ∈ {1,2,3,5} and â = 100 nm. Thereby, we report the analytical results for the upper bound in (19) and Monte Carlo simulation results for 10⁶ random realizations. Fig. 3 confirms that (19) is a tight upper bound even for short sequence lengths. Moreover, we observe from Fig. 3 that the performance of the ISI-free sequence coincides with that of the optimal sequence for all sequence lengths when L = 1, and for L > 1, the difference between the error variances of the ISI-free sequence and the optimal sequence increases as L increases. This result suggests that for MC channels with small numbers of taps, a simple ISI-free training sequence is a suitable option. Furthermore, as expected, the estimation error decreases with increasing training sequence length.
TABLE I: Examples of optimal LSSE sequences obtained by a computer-based search for L ∈ {1,…,5} and K ∈ {10,16} (e.g., for L = 1, s* = [1010101010]^T for K = 10 and s* = [0101010101010101]^T for K = 16).
VI Conclusions
In this paper, we developed a training-based CIR estimation framework which enables the acquisition of the CIR based on the observed number of molecules at the receiver due to emission of a sequence of known numbers of molecules by the transmitter. We derived the optimal ML estimator, the suboptimal LSSE estimator, and the CR lower bound. Furthermore, we studied both an optimal and a suboptimal training sequence design for the considered MC system. Simulation results confirmed the analysis and compared the performance of the proposed estimation techniques with the CR lower bound.
Fig. 3: Normalized LSSE estimation error variance, Var_e, in dB vs. the training sequence length, K, for L ∈ {1,2,3,5}.
[1] T. Nakano, M. Moore, F. Wei, A. Vasilakos, and J. Shuai, "Molecular Communication and Networking: Opportunities and Challenges," IEEE Trans. NanoBiosci, vol. 11, no. 2, pp. 135–148, June 2012.
[2] H. Arjmandi, A. Gohari, M. Kenari, and F. Bateni, "Diffusion-Based Nanonetworking: A New Modulation Technique and Performance Analysis," IEEE Commun. Lett., vol. 17, no. 4, pp. 645–648, Apr. 2013.
[3] A. Noel, K. Cheung, and R. Schober, "Optimal Receiver Design for Diffusive Molecular Communication with Flow and Additive Noise," IEEE Trans. NanoBiosci., vol. 13, no. 3, pp. 350–362, Sept. 2014.
[4] M. Mahfuz, D. Makrakis, and H. Mouftah, "A Comprehensive Study of Sampling-Based Optimum Signal Detection in Concentration-Encoded Molecular Communication," IEEE Trans. NanoBiosci., vol. 13, no. 3, pp. 208–222, Sept. 2014.
[5] G. Wei and R. Marculescu, "Miniature Devices in the Wild: Modeling Molecular Communication in Complex Extracellular Spaces," IEEE J. Select. Areas Commun., vol. 32, no. 12, pp. 2344–2353, Dec 2014.
[6] M. Moore, T. Nakano, A. Enomoto, and T. Suda, "Measuring Distance From Single Spike Feedback Signals in Molecular Communication," IEEE Trans. Sig. Proc., vol. 60, no. 7, pp. 3576–3587, July 2012.
[7] A. Noel, K. Cheung, and R. Schober, "Bounds on Distance Estimation via Diffusive Molecular Communication," in IEEE Globecom, Dec. 2014, pp. 2813–2819.
[8] A. Akkaya, H. Yilmaz, C.-B. Chae, and T. Tugcu, "Effect of Receptor Density and Size on Signal Reception in Molecular Communication via Diffusion With an Absorbing Receiver," IEEE Commun. Lett., vol. 19, no. 2, pp. 155–158, Feb. 2015.
[9] A. Ahmadzadeh, A. Noel, A. Burkovski, and R. Schober, "Amplify-and-Forward Relaying in Two-Hop Diffusion-Based Molecular Communication Networks," accepted for presentation in IEEE Globecom, 2015.
[10] M. Pierobon and I. Akyildiz, "A Physical End-to-End Model for Molecular Communication in Nanonetworks," IEEE J. Sel. Areas Commun., vol. 28, no. 4, pp. 602–611, May 2010.
[11] D. Miorandi, "A Stochastic Model for Molecular Communications," Nano Commun. Netw., vol. 2, no. 4, pp. 205–212, 2011.
[12] N. Farsad, A. Eckford, and S. Hiyama, "A Markov Chain Channel Model for Active Transport Molecular Communication," IEEE Trans. Signal. Process., vol. 62, no. 9, pp. 2424–2436, May 2014.
[13] A. Noel, K. Cheung, and R. Schober, "A Unifying Model for External Noise Sources and ISI in Diffusive Molecular Communication," IEEE J. Sel. Areas Commun., vol. 32, no. 12, pp. 2330–2343, Dec. 2014.
[14] A. Bicen and I. Akyildiz, "End-to-End Propagation Noise and Memory Analysis for Molecular Communication over Microfluidic Channels," IEEE Trans. Commun., vol. 62, no. 7, pp. 2432–2443, July 2014.
[15] S. Crozier, D. Falconer, and S. Mahmoud, "Least Sum of Squared Errors (LSSE) Channel Estimation," IEE Proc. F Radar Sig. Process., vol. 138, no. 4, pp. 371–378, Aug 1991.
[16] M. Ozdemir and H. Arslan, "Channel Estimation for Wireless OFDM Systems," IEEE Commun. Surveys Tutorials, vol. 9, no. 2, pp. 18–48, 2007.
[17] C. Gong and Z. Xu, "Channel Estimation and Signal Detection for Optical Wireless Scattering Communication with Inter-Symbol Interference," IEEE Trans. Wireless Commun., vol. PP, no. 99, pp. 1–1, 2015.
[18] X. Zhang, C. Gong, and Z. Xu, "Estimation of NLOS Optical Wireless Communication Channels with Laser Transmitters," in Proc. Asilomar Conf. Signals, Syst., Comput., Nov 2014, pp. 268–272.
[19] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin, Bayesian Data Analysis. Taylor & Francis, 2014, vol. 2.
[20] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, U.K.: Cambridge Univ. Press, 2004.
[21] P. H. Schonemann, "On the Formal Differentiation of Traces and Determinants," Multivariate Behavioral Research, vol. 20, no. 2, pp. 113–139, 1985. | CommonCrawl |
The word "nootropic" was coined in 1972 by a Romanian scientist, Corneliu Giurgea, who combined the Greek words for "mind" and "bending." Caffeine and nicotine can be considered mild nootropics, while prescription Ritalin, Adderall and Provigil (modafinil, a drug for treating narcolepsy) lie at the far end of the spectrum when prescribed off-label as cognitive enhancers. Even microdosing of LSD is increasingly viewed as a means to greater productivity.
Bacopa is a supplement herb often used for memory or stress adaptation. Its chronic effects reportedly take many weeks to manifest, with no important acute effects. Out of curiosity, I bought 2 bottles of Bacognize Bacopa pills and ran a non-randomized non-blinded ABABA quasi-self-experiment from June 2014 to September 2015, measuring effects on my memory performance, sleep, and daily self-ratings of mood/productivity. Because of the very slow onset, small effective sample size, definite temporal trends probably unrelated to Bacopa, and noise in the variables, the results were as expected, ambiguous, and do not strongly support any correlation between Bacopa and memory/sleep/self-rating (+/-/- respectively).
One of the most widely known classes of smart drugs on the market, Racetams, have a long history of use and a lot of evidence of their effectiveness. They hasten the chemical exchange between brain cells, directly benefiting our mental clarity and learning process. They are generally not controlled substances and can be purchased without a prescription in a lot of locations globally.
Noopept shows a much greater affinity for certain receptor sites in the brain than racetams, allowing doses as small as 10-30mg to provide increased focus, improved logical thinking function, enhanced short and long-term memory functions, and increased learning ability including improved recall. In addition, users have reported a subtle psychostimulatory effect.
Nicotine absorption through the stomach is variable and relatively reduced in comparison with absorption via the buccal cavity and the small intestine. Drinking, eating, and swallowing of tobacco smoke by South American Indians have frequently been reported. Tenetehara shamans reach a state of tobacco narcosis through large swallows of smoke, and Tapirape shamans are said to eat smoke by forcing down large gulps of smoke only to expel it again in a rapid sequence of belches. In general, swallowing of tobacco smoke is quite frequently likened to drinking. However, although the amounts of nicotine swallowed in this way - or in the form of saturated saliva or pipe juice - may be large enough to be behaviorally significant at normal levels of gastric pH, nicotine, like other weak bases, is not significantly absorbed.
The Nootroo arrives in a shiny gold envelope with the words "proprietary blend" and "intended for use only in neuroscience research" written on the tin. It has been designed, says Matzner, for "hours of enhanced learning and memory". The capsules contain either Phenylpiracetam or Noopept (a peptide with similar effects and similarly uncategorised) and are distinguished by real flakes of either edible silver or gold. They are to be alternated between daily, allowing about two weeks for the full effect to be felt. Also in the capsules are L-Theanine, a form of choline, and a type of caffeine which, it is claimed, has longer-lasting effects.
Ngo has experimented with piracetam himself ("The first time I tried it, I thought, 'Wow, this is pretty strong for a supplement.' I had a little bit of reflux, heartburn, but in general it was a cognitive enhancer. . . . I found it helpful") and the neurotransmitter DMEA ("You have an idea, it helps you finish the thought. It's for when people have difficulty finishing that last connection in the brain").
Too much caffeine may be bad for bone health because it can deplete calcium. Overdoing the caffeine also may affect the vitamin D in your body, which plays a critical role in your body's bone metabolism. However, the roles of vitamin D as well as caffeine in the development of osteoporosis continue to be a source of debate. Significance: Caffeine may interfere with your body's metabolism of vitamin D, according to a 2007 Journal of Steroid Biochemistry & Molecular Biology study. You have vitamin D receptors, or VDRs, in your osteoblast cells. These large cells are responsible for the mineralization and synthesis of bone in your body. They create a sheet on the surface of your bones. The D receptors are nuclear hormone receptors that control the action of vitamin D-3 by controlling hormone-sensitive gene expression. These receptors are critical to good bone health. For example, a vitamin D metabolism disorder in which these receptors don't work properly causes rickets.
Hericium erinaceus (Examine.com) was recommended strongly by several on the ImmInst.org forums for its long-term benefits to learning, apparently linked to Nerve growth factor. Highly speculative stuff, and it's unclear whether the mushroom powder I bought was the right form to take (ImmInst.org discussions seem to universally assume one is taking an alcohol or hotwater extract). It tasted nice, though, and I mixed it into my sleeping pills (which contain melatonin & tryptophan). I'll probably never know whether the $30 for 0.5lb was well-spent or not.
It isn't unlikely to hear someone from Silicon Valley say the following: "I've just cycled off a stack of Piracetam and CDP-Choline because I didn't get the mental acuity I was expecting. I will try a blend of Noopept and Huperzine A for the next two weeks and see if I can increase my output by 10%. We don't have immortality yet and I would really like to join the three comma club before it's all over."
Please browse our website to learn more about how to enhance your memory. Our blog contains informative articles about the science behind nootropic supplements, specific ingredients, and effective methods for improving memory. Browse through our blog articles and read and compare reviews of the top rated natural supplements and smart pills to find everything you need to make an informed decision.
It's not clear that there is much of an effect at all. This makes it hard to design a self-experiment - how big an effect on, say, dual n-back should I be expecting? Do I need an arduous long trial or an easy short one? This would principally determine the value of information too; chocolate seems like a net benefit even if it does not affect the mind, but it's also fairly costly, especially if one likes (as I do) dark chocolate. Given the mixed research, I don't think cocoa powder is worth investigating further as a nootropic.
So, I have started a randomized experiment; should take 2 months, given the size of the correlation. If that turns out to be successful too, I'll have to look into methods of blinding - for example, some sort of electronic doohickey which turns on randomly half the time and which records whether it's on somewhere one can't see. (Then for the experiment, one hooks up the LED, turns the doohickey on, and applies directly to forehead, checking the next morning to see whether it was really on or off).
I have personally found that with respect to the NOOTROPIC effect(s) of all the RACETAMS, whilst I have experienced improvements in concentration and working capacity / productivity, I have never experienced a noticeable ongoing improvement in memory. COLURACETAM is the only RACETAM that I have taken wherein I noticed an improvement in MEMORY, both with regards to SHORT-TERM and MEDIUM-TERM MEMORY. To put matters into perspective, the memory improvement has been mild, yet still significant; whereas I have experienced no such improvement at all with the other RACETAMS.
In 3, you're considering adding a new supplement, not stopping a supplement you already use. The I don't try Adderall case has value $0, the Adderall fails case is worth -$40 (assuming you only bought 10 pills, and this number should be increased by your analysis time and a weighted cost for potential permanent side effects), and the Adderall succeeds case is worth $X − $40 − $4099, where X is the discounted lifetime value of the increased productivity due to Adderall, minus any discounted long-term side effect costs. If you estimate Adderall will work with p = 0.5, then you should try out Adderall if you estimate that 0.5 × (X − 4179) > 0, i.e. X > 4179. (Adderall working or not isn't binary, and so you might be more comfortable breaking down the various how effective Adderall is cases when eliciting X, by coming up with different levels it could work at, their values, and then using a weighted sum to get X. This can also give you a better target with your experiment - this needs to show a benefit of at least Y from Adderall for it to be worth the cost, and I've designed it so it has a reasonable chance of showing that.)
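A minimal sketch of that expected-value calculation in R, using exactly the figures assumed above ($40 trial cost, $4099 lifetime purchase cost, p = 0.5); the breakeven is only as good as those guesses:

```r
p    <- 0.5     # estimated probability that Adderall "works", as assumed above
cost <- 40      # cost of the 10-pill trial
buy  <- 4099    # discounted lifetime cost of buying Adderall if it does work
ev <- function(X) p * (X - cost - buy) + (1 - p) * (-cost)
ev(4179)        # ~0: X = 4179 is the breakeven value of the productivity gain
ev(10000)       # positive, so at this X the trial is worth running
```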
More recently, the drug modafinil (brand name: Provigil) has become the brain-booster of choice for a growing number of Americans. According to the FDA, modafinil is intended to bolster "wakefulness" in people with narcolepsy, obstructive sleep apnea or shift work disorder. But when people without those conditions take it, it has been linked with improvements in alertness, energy, focus and decision-making. A 2017 study found evidence that modafinil may enhance some aspects of brain connectivity, which could explain these benefits.
Serotonin, or 5-hydroxytryptamine (5-HT), is another primary neurotransmitter and controls major features of the mental landscape including mood, sleep and appetite. Serotonin is produced within the body by exposure to sunlight, which is one reason that the folk-remedy of "getting some sun" to fight depression is scientifically credible. Many foods contain natural serotonergic (serotonin-promoting or releasing) compounds, including the well-known chemical L-Tryptophan found in turkey, which can promote sleep after big Thanksgiving dinners.
Table 4 lists the results of 27 tasks from 23 articles on the effects of d-AMP or MPH on working memory. The oldest and most commonly used type of working memory task in this literature is the Sternberg short-term memory scanning paradigm (Sternberg, 1966), in which subjects hold a set of items (typically letters or numbers) in working memory and are then presented with probe items, to which they must respond "yes" (in the set) or "no" (not in the set). The size of the set, and hence the working memory demand, is sometimes varied, and the set itself may be varied from trial to trial to maximize working memory demands or may remain fixed over a block of trials. Taken together, the studies that have used a version of this task to test the effects of MPH and d-AMP on working memory have found mixed and somewhat ambiguous results. No pattern is apparent concerning the specific version of the task or the specific drug. Four studies found no effect (Callaway, 1983; Kennedy, Odenheimer, Baltzley, Dunlap, & Wood, 1990; Mintzer & Griffiths, 2007; Tipper et al., 2005), three found faster responses with the drugs (Fitzpatrick, Klorman, Brumaghim, & Keefover, 1988; Ward et al., 1997; D. E. Wilson et al., 1971), and one found higher accuracy in some testing sessions at some dosages, but no main effect of drug (Makris et al., 2007). The meaningfulness of the increased speed of responding is uncertain, given that it could reflect speeding of general response processes rather than working memory–related processes. Aspects of the results of two studies suggest that the effects are likely due to processes other than working memory: D. E. Wilson et al. (1971) reported comparable speeding in a simple task without working memory demands, and Tipper et al. (2005) reported comparable speeding across set sizes.
20 March, 2x 13mg; first time, took around 11:30AM, half-life 3 hours, so halved by 2:30PM. Initial reaction: within 20 minutes, started to feel light-headed, experienced a bit of physical clumsiness while baking bread (dropped things or poured too much thrice); that began to pass in an hour, leaving what felt like a cheerier mood and less anxiety. Seems like it mostly wore off by 6PM. Redosed at 8PM TODO: maybe take a look at the HRV data? looks interestingly like HRV increased thanks to the tianeptine 21 March, 2x17mg; seemed to buffer effects of FBI visit 22 March, 2x 23 March, 2x 24 March, 2x 25 March, 2x 26 March, 2x 27 March, 2x 28 March, 2x 7 April, 2x 8 April, 2x 9 April, 2x 10 April, 2x 11 April, 2x 12 April, 2x 23 April, 2x 24 April, 2x 25 April, 2x 26 April, 2x 27 April, 2x 28 April, 2x 29 April, 2x 7 May, 2x 8 May, 2x 9 May, 2x 10 May, 2x 3 June, 2x 4 June, 2x 5 June, 2x 30 June, 2x 30 July, 1x 31 July, 1x 1 August, 2x 2 August, 2x 3 August, 2x 5 August, 2x 6 August, 2x 8 August, 2x 10 August, 2x 12 August: 2x 14 August: 2x 15 August: 2x 16 August: 1x 18 August: 2x 19 August: 2x 21 August: 2x 23 August: 1x 24 August: 1x 25 August: 1x 26 August: 2x 27 August: 1x 29 August: 2x 30 August: 1x 02 September: 1x 04 September: 1x 07 September: 2x 20 September: 1x 21 September: 2x 24 September: 2x 25 September: 2x 26 September: 2x 28 September: 2x 29 September: 2x 5 October: 2x 6 October: 1x 19 October: 1x 20 October: 1x 27 October: 1x 4 November: 1x 5 November: 1x 8 November: 1x 9 November: 2x 10 November: 1x 11 November: 1x 12 November: 1x 25 November: 1x 26 November: 1x 27 November: 1x 4 December: 2x 27 December: 1x 28 December: 1x 2017 7 January: 1x 8 January: 2x 10 January: 1x 16 January: 1x 17 January: 1x 20 January: 1x 24 January: 1x 25 January: 2x 27 January: 2x 28 January: 2x 1 February: 2x 3 February: 2x 8 February: 1x 16 February: 2x 17 February: 2x 18 February: 1x 22 February: 1x 27 February: 2x 14 March: 1x 15 March: 1x 16 March: 2x 17 March: 2x 18 March: 2x 19 March: 2x 20 March: 2x 21 March: 2x 22 March: 2x 23 March: 1x 24 March: 2x 25 March: 2x 26 March: 2x 27 March: 2x 28 March: 2x 29 March: 2x 30 March: 2x 31 March: 2x 01 April: 2x 02 April: 1x 03 April: 2x 04 April: 2x 05 April: 2x 06 April: 2x 07 April: 2x 08 April: 2x 09 April: 2x 10 April: 2x 11 April: 2x 20 April: 1x 21 April: 1x 22 April: 1x 23 April: 1x 24 April: 1x 25 April: 1x 26 April: 2x 27 April: 2x 28 April: 1x 30 April: 1x 01 May: 2x 02 May: 2x 03 May: 2x 04 May: 2x 05 May: 2x 06 May: 2x 07 May: 2x 08 May: 2x 09 May: 2x 10 May: 2x 11 May: 2x 12 May: 2x 13 May: 2x 14 May: 2x 15 May: 2x 16 May: 2x 17 May: 2x 18 May: 2x 19 May: 2x 20 May: 2x 21 May: 2x 22 May: 2x 23 May: 2x 24 May: 2x 25 May: 2x 26 May: 2x 27 May: 2x 28 May: 2x 29 May: 2x 30 May: 2x 1 June: 2x 2 June: 2x 3 June: 2x 4 June: 2x 5 June: 1x 6 June: 2x 7 June: 2x 8 June: 2x 9 June: 2x 10 June: 2x 11 June: 2x 12 June: 2x 13 June: 2x 14 June: 2x 15 June: 2x 16 June: 2x 17 June: 2x 18 June: 2x 19 June: 2x 20 June: 2x 22 June: 2x 21 June: 2x 02 July: 2x 03 July: 2x 04 July: 2x 05 July: 2x 06 July: 2x 07 July: 2x 08 July: 2x 09 July: 2x 10 July: 2x 11 July: 2x 12 July: 2x 13 July: 2x 14 July: 2x 15 July: 2x 16 July: 2x 17 July: 2x 18 July: 2x 19 July: 2x 20 July: 2x 21 July: 2x 22 July: 2x 23 July: 2x 24 July: 2x 25 July: 2x 26 July: 2x 27 July: 2x 28 July: 2x 29 July: 2x 30 July: 2x 31 July: 2x 01 August: 2x 02 August: 2x 03 August: 2x 04 August: 2x 05 August: 2x 06 August: 2x 07 August: 2x 08 August: 2x 09 August: 2x 10 August: 2x 11 August: 2x 12 
August: 2x 13 August: 2x 14 August: 2x 15 August: 2x 16 August: 2x 17 August: 2x 18 August: 2x 19 August: 2x 20 August: 2x 21 August: 2x 22 August: 2x 23 August: 2x 24 August: 2x 25 August: 2x 26 August: 1x 27 August: 2x 28 August: 2x 29 August: 2x 30 August: 2x 31 August: 2x 01 September: 2x 02 September: 2x 03 September: 2x 04 September: 2x 05 September: 2x 06 September: 2x 07 September: 2x 08 September: 2x 09 September: 2x 10 September: 2x 11 September: 2x 12 September: 2x 13 September: 2x 14 September: 2x 15 September: 2x 16 September: 2x 17 September: 2x 18 September: 2x 19 September: 2x 20 September: 2x 21 September: 2x 22 September: 2x 23 September: 2x 24 September: 2x 25 September: 2x 26 September: 2x 27 September: 2x 28 September: 2x 29 September: 2x 30 September: 2x October 01 October: 2x 02 October: 2x 03 October: 2x 04 October: 2x 05 October: 2x 06 October: 2x 07 October: 2x 08 October: 2x 09 October: 2x 10 October: 2x 11 October: 2x 12 October: 2x 13 October: 2x 14 October: 2x 15 October: 2x 16 October: 2x 17 October: 2x 18 October: 2x 20 October: 2x 21 October: 2x 22 October: 2x 23 October: 2x 24 October: 2x 25 October: 2x 26 October: 2x 27 October: 2x 28 October: 2x 29 October: 2x 30 October: 2x 31 October: 2x 01 November: 2x 02 November: 2x 03 November: 2x 04 November: 2x 05 November: 2x 06 November: 2x 07 November: 2x 08 November: 2x 09 November: 2x 10 November: 2x 11 November: 2x 12 November: 2x 13 November: 2x 14 November: 2x 15 November: 2x 16 November: 2x 17 November: 2x 18 November: 2x 19 November: 2x 20 November: 2x 21 November: 2x 22 November: 2x 23 November: 2x 24 November: 2x 25 November: 2x 26 November: 2x 27 November: 2x 28 November: 2x 29 November: 2x 30 November: 2x 01 December: 2x 02 December: 2x 03 December: 2x 04 December: 2x 05 December: 2x 06 December: 2x 07 December: 2x 08 December: 2x 09 December: 2x 10 December: 2x 11 December: 2x 12 December: 2x 13 December: 2x 14 December: 2x 15 December: 2x 16 December: 2x 17 December: 2x 18 December: 2x 19 December: 2x 20 December: 2x 21 December: 2x 22 December: 2x 23 December: 2x 24 December: 2x 25 December: 2x ran out, last day: 25 December 2017 –>
Up to 20% of Ivy League college students have already tried "smart drugs," so we can expect these pills to feature prominently in organizations (if they don't already). After all, the pressure to perform is unlikely to disappear the moment students graduate. And senior employees with demanding jobs might find these drugs even more useful than a 19-year-old college kid does. Indeed, a 2012 Royal Society report emphasized that these "enhancements," along with other technologies for self-enhancement, are likely to have far-reaching implications for the business world.
From the standpoint of absorption, the drinking of tobacco juice and the interaction of the infusion or concoction with the small intestine is a highly effective method of gastrointestinal nicotine administration. The epithelial area of the intestines is incomparably larger than the mucosa of the upper tract including the stomach, and the small intestine represents the area with the greatest capacity for absorption (Levine 1983:81-83). As practiced by most of the sixty-four tribes documented here, intoxicated states are achieved by drinking tobacco juice through the mouth and/or nose…The large intestine, although functionally little equipped for absorption, nevertheless absorbs nicotine that may have passed through the small intestine.
Similarly, Mehta et al 2000 noted that the positive effects of methylphenidate (40 mg) on spatial working memory performance were greatest in those volunteers with lower baseline working memory capacity. In a study of the effects of ginkgo biloba in healthy young adults, Stough et al 2001 found improved performance in the Trail-Making Test A only in the half with the lower verbal IQ.
Maj. Jamie Schwandt, USAR, is a logistics officer and has served as an operations officer, planner and commander. He is certified as a Department of the Army Lean Six Sigma Master Black Belt, certified Red Team Member, and holds a doctorate from Kansas State University. This article represents his own personal views, which are not necessarily those of the Department of the Army.
This mental stimulation is what increases focus and attention span in the user. The FDA-permitted uses of modafinil cover excessive sleepiness and shift work disorder; it can also be prescribed for narcolepsy and obstructive sleep apnea. Modafinil is not FDA approved for the treatment of ADHD. Yet, many medical professionals feel it is a suitable Adderall alternative.
So the chi-squared believes there is a statistically-significant difference, the two-sample test disagrees, and the binomial also disagrees. Since I regarded it as a dubious theory and can't see a difference, and the binomial seems like the most appropriate test, I conclude that several months of 1mg iodine did not change my eye color. (As a final test, when I posted the results on the Longecity forum where people were claiming the eye color change, I swapped the labels on the photos to see if anyone would claim something along the lines of "when I look at the photos, I can see a difference!". I thought someone might do that, which would be a damning demonstration of their biases & wishful thinking, but no one did.)
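For readers who want to see how such tests are run (and how they can return different verdicts on the same small dataset), here is a toy version in R; the rating counts below are invented for illustration, not the actual photo data:

```r
# Hypothetical photo-rating counts: how many raters called each photo "lighter".
ratings <- matrix(c(9, 11,     # before: lighter, no change
                    16, 4),    # after:  lighter, no change
                  nrow = 2, byrow = TRUE,
                  dimnames = list(c("before", "after"), c("lighter", "no change")))
chisq.test(ratings)                 # chi-squared test on the whole 2x2 table
prop.test(c(9, 16), c(20, 20))      # two-sample comparison of the two proportions
binom.test(16, 20, p = 0.5)         # binomial test of the "after" ratings vs chance
```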
Phenylpiracetam (Phenotropil) is one of the best smart drugs in the racetam family. It has the highest potency and bioavailability among racetam nootropics. This substance is almost the same as Piracetam; only it contains a phenyl group molecule. The addition to its chemical structure improves blood-brain barrier permeability. This modification allows Phenylpiracetam to work faster than other racetams. Its cognitive enhancing effects can last longer as well.
One of the other suggested benefits is for boosting serotonin levels; low levels of serotonin are implicated in a number of issues like depression. I'm not yet sure whether tryptophan has helped with motivation or happiness. Trial and error has taught me that it's a bad idea to take tryptophan in the morning or afternoon, however, even smaller quantities like 0.25g. Like melatonin, the dose-response curve is a U: ~1g is great and induces multiple vivid dreams for me, but ~1.5g leads to an awful night and a headache the next day that was worse, if anything, than melatonin. (One morning I woke up with traces of at least 7 dreams, although I managed to write down only 2. No lucid dreams, though.)
The real-life Limitless Pill? One of the newer offerings in the nootropic industry, Avanse Laboratories' new ingenious formula has been generating quite a lot of attention on the internet, and has been buzzing around on dedicated nootropic forums. Why do we pick this formula as the #1 nootropic supplement of 2017 and 2018? Simple: name another supplement that contains a potent 1160mg capsule including 15 mg of the world's most powerful nootropic agent. It is cheap, in our opinion, compared to what it contains. And we don't think their price will stay this low for long. Avanse Laboratories is currently playing…
In a broad sense, this is enhancement; in a stricter one, it's optimisation. "I think people think about smart drugs the way they think about steroids in athletics," Arnsten says, "but it's not a proper analogy, because with steroids you're creating more muscle. With smart drugs, all you're doing is taking the brain that you have and putting it in its optimal chemical state. You're not taking Homer Simpson and making him into Einstein."
The chemical Huperzine-A (Examine.com) is extracted from a moss. It is an acetylcholinesterase inhibitor (instead of forcing out more acetylcholine like the -racetams, it prevents acetylcholine from breaking down). My experience report: One for the null hypothesis files - Huperzine-A did nothing for me. Unlike piracetam or fish oil, after a full bottle (Source Naturals, 120 pills at 200μg each), I noticed no side-effects, no mental improvements of any kind, and no changes in DNB scores from straight Huperzine-A.
On 8 April 2011, I purchased from Smart Powders (20g for $8); as before, some light searching seemed to turn up SP as the best seller given shipping overhead; it was on sale and I planned to cap it so I got 80g. This may seem like a lot, but I was highly confident that theanine and I would get along since I already drink so much tea and was a tad annoyed at the edge I got with straight caffeine. So far I'm pretty happy with it. My goal was to eliminate the physical & mental twitchiness of caffeine, which subjectively it seems to do.
With subtle effects, we need a lot of data, so we want at least half a year (6 blocks) or better yet, a year (12 blocks); this requires 180 actives and 180 placebos. This is easily covered by $11 for Doctor's Best Best Lithium Orotate (5mg), 200-Count (more precisely, Lithium 5mg (from 125mg of lithium orotate)) and $14 for 1000x1g empty capsules (purchased February 2012). For convenience I settled on 168 lithium & 168 placebos (7 pill-machine batches, 14 batches total); I can use them in 24 paired blocks of 7-days/1-week each (48 total blocks/48 weeks). The lithium expiration date is October 2014, so that is not a problem
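The block arithmetic above is simple enough to check in a couple of lines; this is just a restatement of the design described, not new data:

```r
pills_per_arm <- 7 * 24    # 24 paired blocks, each arm contributing one 7-pill week
pills_per_arm              # 168 lithium and 168 placebo pills, as settled on above
weeks_total <- 24 * 2      # each paired block = 1 lithium week + 1 placebo week
weeks_total                # 48 weeks of data, close to the year aimed for
```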
A fancier method of imputation would be multiple imputation using, for example, the R library mice (Multivariate Imputation by Chained Equations) (guide), which will try to impute all missing values in a way which mimicks the internal structure of the data and provide several possible datasets to give us an idea of what the underlying data might have looked like, so we can see how our estimates improve with no missingness & how much of the estimate is now due to the imputation:
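A minimal sketch of that mice workflow; the data frame `df` and the variable names `outcome` and `treatment` are placeholders standing in for the actual dataset and model:

```r
library(mice)                              # Multivariate Imputation by Chained Equations
imp  <- mice(df, m = 5, seed = 1)          # 5 completed datasets that mimic the data's structure
fits <- with(imp, lm(outcome ~ treatment)) # refit the model of interest in each completed dataset
summary(pool(fits))                        # pool the estimates across the imputations
```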
Vinpocetine walks a line between herbal and pharmaceutical product. It's a synthetic derivative of a chemical from the periwinkle plant, and due to its synthetic nature we feel it's more appropriate as a 'smart drug'. Plus, it's illegal in the UK. Vinpocetine is purported to improve cognitive function by improving blood flow to the brain, which is why it's used in some 'study drugs' or 'smart pills'.
Not included in the list below are prescription psychostimulants such as Adderall and Ritalin. Non-medical, illicit use of these drugs for the purpose of cognitive enhancement in healthy individuals comes with a high cost, including addiction and other adverse effects. Although these drugs are prescribed for those with attention deficit hyperactivity disorder (ADHD) to help with focus, attention and other cognitive functions, they have been shown to in fact impair these same functions when used for non-medical purposes. More alarming, when taken in high doses, they have the potential to induce psychosis.
Weyandt et al. (2009): large public university undergraduates (N = 390); prevalence 7.5% (past 30 days); highest-rated reasons were to perform better on schoolwork, perform better on tests, and focus better in class; 21.2% had occasionally been offered by other students, 9.8% occasionally or frequently have purchased from other students, and 1.4% had sold to other students.
Harrisburg, NC -- (SBWIRE) -- 02/18/2019 -- Global Smart Pills Technology Market - Segmented by Technology, Disease Indication, and Geography - Growth, Trends, and Forecast (2019 - 2023). The smart pill is a wireless capsule that can be swallowed, and with the help of a receiver (worn by patients) and software that analyzes the pictures captured by the smart pill, the physician is effectively able to examine the gastrointestinal tract. Gastrointestinal disorders have become very common, but recently, there has been increasing incidence of colorectal cancer, inflammatory bowel disease, and Crohn's disease as well.
Since my experiment had a number of flaws (non-blind, varying doses at varying times of day), I wound up doing a second better experiment using blind standardized smaller doses in the morning. The negative effect was much smaller, but there was still no mood/productivity benefit. Having used up my first batch of potassium citrate in these 2 experiments, I will not be ordering again since it clearly doesn't work for me.
There are some other promising prescription drugs that may have performance-related effects on the brain. But at this point, all of them seem to involve a roll of the dice. You may experience a short-term brain boost, but you could also end up harming your brain (or some other aspect of your health) in the long run. "To date, there is no safe drug that may increase cognition in healthy adults," Fond says of ADHD drugs, modafinil and other prescription nootropics.
Qualia Mind, meanwhile, combines more than two dozen ingredients that may support brain and nervous system function – and even empathy, the company claims – including vitamins B, C and D, artichoke stem and leaf extract, taurine and a concentrated caffeine powder. A 2014 review of research on vitamin C, for one, suggests it may help protect against cognitive decline, while most of the research on artichoke extract seems to point to its benefits to other organs like the liver and heart. A small company-led pilot study on the product found users experienced improvements in reasoning, memory, verbal ability and concentration five days after beginning Qualia Mind.
My intent here is not to promote illegal drugs or promote the abuse of prescription drugs. In fact, I have identified which drugs require a prescription. If you are a servicemember and you take a drug (such as Modafinil and Adderall) without a prescription, then you will fail a urinalysis test. Thus, you will most likely be discharged from the military.
NGF may sound intriguing, but the price is a dealbreaker: at suggested doses of 1-100μg (NGF dosing in humans for benefits is, shall we say, not an exact science), and a cost from sketchy suppliers of $1210 per 100μg, $470 per 500μg, $750 per 1000μg, $1000 per 1000μg, $1030 per 1000μg, or $235 per 20μg. (Levi-Montalcini was presumably able to divert some of her lab's production.) A year's supply then would be comically expensive: at the lowest doses of 1-10μg using the cheapest sellers (for something one is dumping into one's eyes?), it could cost anywhere up to $10,000.
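To make the sticker shock concrete, a rough yearly-cost calculation under the prices quoted above; the daily dose and the choice of seller are assumptions doing all the work here:

```r
cost_per_ug <- c(bulk_1000ug = 750 / 1000,   # $750 per 1000 ug
                 small_20ug  = 235 / 20)     # $235 per 20 ug
dose_ug_per_day <- 5                         # assumed mid-range of the 1-10 ug doses above
round(dose_ug_per_day * 365 * cost_per_ug)   # roughly $1,370 vs $21,400 per year
```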
These are quite abstract concepts, though. There is a large gap, a grey area in between these concepts and our knowledge of how the brain functions physiologically – and it's in this grey area that cognitive enhancer development has to operate. Amy Arnsten, Professor of Neurobiology at Yale Medical School, is investigating how the cells in the brain work together to produce our higher cognition and executive function, which she describes as "being able to think about things that aren't currently stimulating your senses, the fundamentals of abstraction. This involves mental representations of our goals for the future, even if it's the future in just a few seconds."
Last spring, 100 people showed up at a Peak Performance event where psychedelic psychologist James Fadiman said the key to unleashing the cognition-enhancing effects of LSD — which he listed as less anxiety, better focus, improved sleep, greater creativity — was all in the dosage. He recommended a tenth of a "party dose" — enough to give you "the glow" and enhance your cognitive powers without "the trip."
So with these 8 results in hand, what do I think? Roughly, I was right 5 of the days and wrong 3 of them. If not for the sleep effect on #4, which is - in a way - cheating (one hopes to detect modafinil due to good effects), the ratio would be 5:4 which is awfully close to a coin-flip. Indeed, a scoring rule ranks my performance at almost identical to a coin flip: -5.49 vs -5.5419. (The bright side is that I didn't do worse than a coin flip: I was at least calibrated.)
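The scoring-rule comparison is easy to reproduce in outline: a logarithmic score just sums log(probability assigned to what actually happened), and always guessing 50% over 8 days gives the roughly -5.54 benchmark quoted above. The per-day probabilities in the second half are placeholders, not the original predictions:

```r
sum(log(rep(0.5, 8)))     # about -5.545: the score of always guessing 50%, i.e. a coin flip
# Score a set of stated probabilities against what actually happened:
p_assigned_to_outcome <- c(0.6, 0.6, 0.5, 0.4, 0.7, 0.6, 0.5, 0.6)  # hypothetical values
sum(log(p_assigned_to_outcome))   # less negative = better-calibrated, more informative guesses
```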
Smart pills are defined as drugs or prescription medication used to treat certain mental disorders, from milder ones such as brain fog, to some more severe like ADHD. They are often referred to as 'nootropics' but even though the two terms are often used interchangeably, smart pills and nootropics represent two different types of cognitive enhancers.
Ginsenoside Rg1, a molecule found in the plant genus panax (ginseng), is being increasingly researched as an effective nootropic. Its cognitive benefits include increasing learning ability and memory acquisition, and accelerating neural development. It targets mainly the NMDA receptors and nitric oxide synthase, which both play important roles in personal and emotional intelligence. The authors of the study cited above say that their research findings thus far have boosted their confidence in a "bright future of cognitive drug development."
There is a similar substance which can be purchased legally almost anywhere in the world called adrafinil. This is a prodrug for modafinil. You can take it, and then the body will metabolize it into modafinil, providing similar beneficial effects. Unfortunately, it takes longer for adrafinil to kick in—about an hour—rather than a matter of minutes. In addition, there are more potential side-effects to taking the prodrug as compared to the actual drug.
There is evidence to suggest that modafinil, methylphenidate, and amphetamine enhance cognitive processes such as learning and working memory...at least on certain laboratory tasks. One study found that modafinil improved cognitive task performance in sleep-deprived doctors. Even in non-sleep deprived healthy volunteers, modafinil improved planning and accuracy on certain cognitive tasks. Similarly, methylphenidate and amphetamine also enhanced performance of healthy subjects in certain cognitive tasks.
Cognition is a suite of mental phenomena that includes memory, attention and executive functions, and any drug would have to enhance executive functions to be considered truly 'smart'. Executive functions occupy the higher levels of thought: reasoning, planning, directing attention to information that is relevant (and away from stimuli that aren't), and thinking about what to do rather than acting on impulse or instinct. You activate executive functions when you tell yourself to count to 10 instead of saying something you may regret. They are what we use to make our actions moral and what we think of when we think about what makes us human.
Several studies have assessed the effect of MPH and d-AMP on tasks tapping various other aspects of spatial working memory. Three used the spatial working memory task from the CANTAB battery of neuropsychological tests (Sahakian & Owen, 1992). In this task, subjects search for a target at different locations on a screen. Subjects are told that locations containing a target in previous trials will not contain a target in future trials. Efficient performance therefore requires remembering and avoiding these locations in addition to remembering and avoiding locations already searched within a trial. Mehta et al. (2000) found evidence of greater accuracy with MPH, and Elliott et al. (1997) found a trend for the same. In Mehta et al.'s study, this effect depended on subjects' working memory ability: the lower a subject's score on placebo, the greater the improvement on MPH. In Elliott et al.'s study, MPH enhanced performance for the group of subjects who received the placebo first and made little difference for the other group. The reason for this difference is unclear, but as mentioned above, this may reflect ability differences between the groups. More recently, Clatworthy et al. (2009) undertook a positron emission tomography (PET) study of MPH effects on two tasks, one of which was the CANTAB spatial working memory task. They failed to find consistent effects of MPH on working memory performance but did find a systematic relation between the performance effect of the drug in each individual and its effect on individuals' dopamine activity in the ventral striatum.
Low-dose lithium orotate is extremely cheap, ~$10 a year. There is some research literature on it improving mood and impulse control in regular people, but some of it is epidemiological (which implies considerable unreliability); my current belief is that there is probably some effect size, but at just 5mg, it may be too tiny to matter. I have ~40% belief that there will be a large effect size, but I'm doing a long experiment and I should be able to detect a large effect size with >75% chance. So, the formula is NPV of the difference between taking and not taking, times quality of information, times expectation: \frac{10 - 0}{\ln 1.05} \times 0.75 \times 0.40 = 61.4, which justifies a time investment of less than 9 hours. As it happens, it took less than an hour to make the pills & placebos, and taking them is a matter of seconds per week, so the analysis will be the time-consuming part. This one may actually turn a profit.
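The same value-of-information arithmetic, spelled out with the numbers exactly as given in the text ($10/year difference, 5% discount rate, 75% chance of detecting a large effect, 40% prior belief in one); the ~$7/hour value of time used at the end is an assumption:

```r
npv <- (10 - 0) / log(1.05)   # NPV of a ~$10/year difference at a 5% discount rate (~205)
voi <- npv * 0.75 * 0.40      # times quality of information, times prior expectation
voi                           # ~61.4, the figure quoted above
voi / 7                       # ~8.8, consistent with "less than 9 hours" at ~$7/hour
```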
Results: Women with high caffeine intakes had significantly higher rates of bone loss at the spine than did those with low intakes (−1.90 ± 0.97% compared with 1.19 ± 1.08%; P = 0.038). When the data were analyzed according to VDR genotype and caffeine intake, women with the tt genotype had significantly (P = 0.054) higher rates of bone loss at the spine (−8.14 ± 2.62%) than did women with the TT genotype (−0.34 ± 1.42%) when their caffeine intake was >300 mg/d…In 1994, Morrison et al (22) first reported an association between vitamin D receptor gene (VDR) polymorphism and BMD of the spine and hip in adults. After this initial report, the relation between VDR polymorphism and BMD, bone turnover, and bone loss has been extensively evaluated. The results of some studies support an association between VDR polymorphism and BMD (23-,25), whereas other studies showed no evidence for this association (26,27)…At baseline, no significant differences existed in serum parathyroid hormone, serum 25-hydroxyvitamin D, serum osteocalcin, and urinary N-telopeptide between the low- and high-caffeine groups (Table 1⇑). In the longitudinal study, the percentage of change in serum parathyroid hormone concentrations was significantly lower in the high-caffeine group than in the low-caffeine group (Table 2⇑). However, no significant differences existed in the percentage of change in serum 25-hydroxyvitamin D
Like caffeine, nicotine builds tolerance rapidly and addiction can develop, after which the apparent performance boosts may only represent a return to baseline after withdrawal; so nicotine as a stimulant should be used judiciously, perhaps roughly as frequently as modafinil. Another problem is that nicotine has a half-life of merely 1-2 hours, making regular dosing a requirement. There is also some elevated heart-rate/blood-pressure often associated with nicotine, which may be a concern. (Possible alternatives to nicotine include cytisine, 2'-methylnicotine, GTS-21, galantamine, Varenicline, WAY-317,538, EVP-6124, and Wellbutrin, but none have emerged as clearly superior.)
** = Important note - whilst BrainZyme is scientifically proven to support concentration and mental performance, it is not a replacement for a good diet, moderate exercise or sleep. BrainZyme is also not a drug, medicine or pharmaceutical. It is a natural-sourced, vegan food supplement with ingredients that are scientifically proven to support cognition, concentration, mental performance and reduction of tiredness. You should always consult with your Doctor if you require medical attention.
Another interpretation of the mixed results in the literature is that, in some cases at least, individual differences in response to stimulants have led to null results when some participants in the sample are in fact enhanced and others are not. This possibility is not inconsistent with the previously mentioned ones; both could be at work. Evidence has already been reviewed that ability level, personality, and COMT genotype modulate the effect of stimulants, although most studies in the literature have not broken their samples down along these dimensions. There may well be other as-yet-unexamined individual characteristics that determine drug response. The equivocal nature of the current literature may reflect a mixture of substantial cognitive-enhancement effects for some individuals, diluted by null effects or even counteracted by impairment in others.
A week later: Golden Sumatran, 3 spoonfuls, a more yellowish powder. (I combined it with some tea dregs to hopefully cut the flavor a bit.) Had a paper to review that night. No (subjectively noticeable) effect on energy or productivity. I tried 4 spoonfuls at noon the next day; nothing except a little mental tension, for lack of a better word. I think that was just the harbinger of what my runny nose that day and the day before was, a head cold that laid me low during the evening.
My first dose on 1 March 2017, at the recommended 0.5ml/1.5mg was miserable, as I felt like I had the flu and had to nap for several hours before I felt well again, requiring 6h to return to normal; after waiting a month, I tried again, but after a week of daily dosing in May, I noticed no benefits; I tried increasing to 3x1.5mg but this immediately caused another afternoon crash/nap on 18 May. So I scrapped my cytisine. Oh well.
(If I am not deficient, then supplementation ought to have no effect.) The previous material on modern trends suggests a prior >25%, and higher than that if I were female. However, I was raised on a low-salt diet because my father has high blood pressure, and while I like seafood, I doubt I eat it more often than weekly. I suspect I am somewhat iodine-deficient, although I don't believe as confidently as I did that I had a vitamin D deficiency. Let's call this one 75%.
In paired-associates learning, subjects are presented with pairs of stimuli and must learn to recall the second item of the pair when presented with the first. For these tasks, as with tasks involving memory for individual items, there is a trend for stimulants to enhance performance with longer delays. For immediate measures of learning, no effects of d-AMP or MPH were observed by Brumaghim and Klorman (1998); Fleming et al. (1995); Hurst, Radlow, and Weidner (1968); or Strauss et al. (1984). However, when Hurst et al.'s subjects were tested a week later, they recalled more if their initial learning had been carried out with d-AMP than with placebo. Weitzner (1965) assessed paired-associates learning with an immediate cued-recall test and found facilitation when the associate word was semantically related to the cue, provided it was not also related to other cue words. Finally, Burns, House, French, and Miller (1967) found a borderline-significant impairment of performance with d-AMP on a nonverbal associative learning task.
One study of helicopter pilots suggested that 600 mg of modafinil given in three doses can be used to keep pilots alert and maintain their accuracy at pre-deprivation levels for 40 hours without sleep.[60] However, significant levels of nausea and vertigo were observed. Another study of fighter pilots showed that modafinil given in three divided 100 mg doses sustained the flight control accuracy of sleep-deprived F-117 pilots to within about 27% of baseline levels for 37 hours, without any considerable side effects.[61] In an 88-hour sleep loss study of simulated military grounds operations, 400 mg/day doses were mildly helpful at maintaining alertness and performance of subjects compared to placebo, but the researchers concluded that this dose was not high enough to compensate for most of the effects of complete sleep loss.
There is no shortage of nootropics available for purchase online that can be shipped to you nearly anywhere in the world. Yet, many of these supplements and drugs have very little studies, particularly human studies, confirming their results. While this lack of research may not scare away more adventurous neurohackers, many people would prefer to […]
Kratom (Erowid, Reddit) is a tree leaf from Southeast Asia; it's addictive to some degree (like caffeine and nicotine), and so it is regulated/banned in Thailand, Malaysia, Myanmar, and Bhutan among others - but not the USA. (One might think that kratom's common use there indicates how very addictive it must be, except it literally grows on trees so it can't be too hard to get.) Kratom is not particularly well-studied (and what has been studied is not necessarily relevant - I'm not addicted to any opiates!), and it suffers the usual herbal problem of being an endlessly variable food product and not a specific chemical with the fun risks of perhaps being poisonous, but in my reading it doesn't seem to be particularly dangerous or have serious side-effects.
This research is in contrast to the other substances I like, such as piracetam or fish oil. I knew about withdrawal of course, but it was not so bad when I was drinking only tea. And the side-effects like jitteriness are worse on caffeine without tea; I chalk this up to the lack of theanine. (My later experiences with theanine seems to confirm this.) These negative effects mean that caffeine doesn't satisfy the strictest definition of nootropic (having no negative effects), but is merely a cognitive enhancer (with both benefits & costs). One might wonder why I use caffeine anyway if I am so concerned with mental ability.
…researchers have added a new layer to the smart pill conversation. Adderall, they've found, makes you think you're doing better than you actually are….Those subjects who had been given Adderall were significantly more likely to report that the pill had caused them to do a better job….But the results of the new University of Pennsylvania study, funded by the U.S. Navy and not yet published but presented at the annual Society for Neuroscience conference last month, are consistent with much of the existing research. As a group, no overall statistically-significant improvement or impairment was seen as a result of taking Adderall. The research team tested 47 subjects, all in their 20s, all without a diagnosis of ADHD, on a variety of cognitive functions, from working memory-how much information they could keep in mind and manipulate-to raw intelligence, to memories for specific events and faces….The last question they asked their subjects was: How and how much did the pill influence your performance on today's tests? Those subjects who had been given Adderall were significantly more likely to report that the pill had caused them to do a better job on the tasks they'd been given, even though their performance did not show an improvement over that of those who had taken the placebo. According to Irena Ilieva…it's the first time since the 1960s that a study on the effects of amphetamine, a close cousin of Adderall, has asked how subjects perceive the effect of the drug on their performance.
It can easily pass through the blood-brain barrier and is known to protect the nerve tissues present in the brain. There is evidence that the acid plays an instrumental role in preventing strokes in adults by decreasing the number of free radicals in the body. It increases the production of acetylcholine, a neurotransmitter that most Alzheimer's patients are deficient in.
Running low on gum (even using it weekly or less, it still runs out), I decided to try patches. Reading through various discussions, I couldn't find any clear verdict on what patch brands might be safer (in terms of nicotine evaporation through a cut or edge) than others, so I went with the cheapest Habitrol I could find as a first try of patches (Nicotine Transdermal System Patch, Stop Smoking Aid, 21 mg, Step 1, 14 patches) in May 2013. I am curious to what extent nicotine might improve a long time period like several hours or a whole day, compared to the shorter-acting nicotine gum which feels like it helps for an hour at most and then tapers off (which is very useful in its own right for kicking me into starting something I have been procrastinating on). I have not decided whether to try another self-experiment.
The advantage of adrafinil is that it is legal & over-the-counter in the USA, so one removes the small legal risk of ordering & possessing modafinil without a prescription, and the retailers may be more reliable because they are not operating in a niche of dubious legality. Based on comments from others, the liver problem may have been overblown, and modafinil vendors post-2012 seem to have become more unstable, so I may give adrafinil (from another source than Antiaging Central) a shot when my modafinil/armodafinil run out.
"I think you can and you will," says Sarter, but crucially, only for very specific tasks. For example, one of cognitive psychology's most famous findings is that people can typically hold seven items of information in their working memory. Could a drug push the figure up to nine or 10? "Yes. If you're asked to do nothing else, why not? That's a fairly simple function."
Some supplement blends, meanwhile, claim to work by combining ingredients – bacopa, cat's claw, huperzia serrata and oat straw in the case of Alpha Brain, for example – that have some support for boosting cognition and other areas of nervous system health. One 2014 study in Frontiers in Aging Neuroscience, suggested that huperzia serrata, which is used in China to fight Alzheimer's disease, may help slow cell death and protect against (or slow the progression of) neurodegenerative diseases. The Alpha Brain product itself has also been studied in a company-funded small randomized controlled trial, which found Alpha Brain significantly improved verbal memory when compared to adults who took a placebo.
"Such an informative and inspiring read! Insight into how optimal nutrients improved Cavin's own brain recovery make this knowledge-filled read compelling and relatable. The recommendations are easy to understand as well as scientifically-founded – it's not another fad diet manual. The additional tools and resources provided throughout make it possible for anyone to integrate these enhancements into their nutritional repertoire. Looking forward to more from Cavin and Feed a Brain!!!!!!"
The experiment then is straightforward: cut up a fresh piece of gum, randomly select from it and an equivalent dry piece of gum, and do 5 rounds of dual n-back to test attention/energy & WM. (If it turns out to be placebo, I'll immediately use the remaining active dose: no sense in wasting gum, and this will test whether nigh-daily use renders nicotine gum useless, similar to how caffeine may be useless if taken daily. If there's 3 pieces of active gum left, then I wrap it very tightly in Saran wrap which is sticky and air-tight.) The dose will be 1mg or 1/4 a gum. I cut up a dozen pieces into 4 pieces for 48 doses and set them out to dry. Per the previous power analyses, 48 groups of DNB rounds likely will be enough for detecting small-medium effects (partly since we will be only looking at one metric - average % right per 5 rounds - with no need for multiple correction). Analysis will be one-tailed, since we're looking for whether there is a clear performance improvement and hence a reason to keep using nicotine gum (rather than whether nicotine gum might be harmful).
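Once the data are in, the planned one-tailed comparison is a one-liner, and the power claim can be sanity-checked; this is a sketch assuming a data frame `dnb` with one row per session and invented column names:

```r
# dnb: assumed data frame - `score` is the average % right over a session's 5 DNB rounds,
# `arm` records whether that session's gum was active nicotine or the dried placebo.
with(dnb, t.test(score[arm == "nicotine"], score[arm == "placebo"],
                 alternative = "greater"))        # one-tailed, as planned above
# Rough power check for 48 sessions split evenly, at a medium standardized effect size:
power.t.test(n = 24, delta = 0.5, sd = 1, sig.level = 0.05,
             alternative = "one.sided")
```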
Probably most significantly, use of the term "drug" has a significant negative connotation in our culture. "Drugs" are bad: So proclaimed Richard Nixon in the War on Drugs, and Nancy "No to Drugs" Reagan decades later, and other leaders continuing to present day. The legitimate demonization of the worst forms of recreational drugs has resulted in a general bias against the elective use of any chemical to alter the body's processes. Drug enhancement of athletes is considered cheating – despite the fact that many of these physiological shortcuts obviously work. University students and professionals seeking mental enhancements by taking smart drugs are now facing similar scrutiny.
Feeling behind, I resolved to take some armodafinil the next morning, which I did - but in my hurry I failed to recall that 200mg armodafinil was probably too much to take during the day, with its long half life. As a result, I felt irritated and not that great during the day (possibly aggravated by some caffeine - I wish some studies would be done on the possible interaction of modafinil and caffeine so I knew if I was imagining it or not). Certainly not what I had been hoping for. I went to bed after midnight (half an hour later than usual), and suffered severe insomnia. The time wasn't entirely wasted as I wrote a short story and figured out how to make nicotine gum placebos during the hours in the dark, but I could have done without the experience. All metrics omitted because it was a day usage.
With something like creatine, you'd know if it helps you pump out another rep at the gym on a sustainable basis. With nootropics, you can easily trick yourself into believing they help your mindset. The ideal is to do a trial on yourself. Take identical looking nootropic pills and placebo pills for a couple weeks each, then see what the difference is. With only a third party knowing the difference, of course.
(As I was doing this, I reflected how modafinil is such a pure example of the money-time tradeoff. It's not that you pay someone else to do something for you, which necessarily they will do in a way different from you; nor is it that you have exchanged money to free yourself of a burden of some future time-investment; nor have you paid money for a speculative return of time later in life like with many medical expenses or supplements. Rather, you have paid for 8 hours today of your own time.)
AMP and MPH increase catecholamine activity in different ways. MPH primarily inhibits the reuptake of dopamine by pre-synaptic neurons, thus leaving more dopamine in the synapse and available for interacting with the receptors of the postsynaptic neuron. AMP also affects reuptake, as well as increasing the rate at which neurotransmitter is released from presynaptic neurons (Wilens, 2006). These effects are manifest in the attention systems of the brain, as already mentioned, and in a variety of other systems that depend on catecholaminergic transmission as well, giving rise to other physical and psychological effects. Physical effects include activation of the sympathetic nervous system (i.e., a fight-or-flight response), producing increased heart rate and blood pressure. Psychological effects are mediated by activation of the nucleus accumbens, ventral striatum, and other parts of the brain's reward system, producing feelings of pleasure and the potential for dependence.
The smart pill industry has popularized many herbal nootropics. Most of them first appeared in Ayurveda and traditional Chinese medicine. Ayurveda is a branch of natural medicine originating from India. It focuses on using herbs as remedies for improving quality of life and healing ailments. Evidence suggests our ancestors were on to something with this natural approach.
Systematic reviews and meta-analyses of clinical human research using low doses of certain central nervous system stimulants found enhanced cognition in healthy people.[21][22][23] In particular, the classes of stimulants that demonstrate cognition-enhancing effects in humans act as direct agonists or indirect agonists of dopamine receptor D1, adrenoceptor A2, or both types of receptor in the prefrontal cortex.[21][22][24][25] Relatively high doses of stimulants cause cognitive deficits.[24][25]
density of solids and liquids
Aimed at a lower set year 8 class. The analysis will aid them in valuing the importance of accuracy and precision of data as they learn the rigor required to be consistent when collecting data during labs and investigations, a skill that is still very much in development at the middle grades. There exists one other phase of matter, plasma, which exists at very high temperatures. Matter most commonly exists as a solid, liquid, or gas; these states are known as the three common phases of matter. Here is a YouTube video on water displacement to show your students. Density is a dimensional property; therefore, when comparing the densities of two substances, the units must be taken into consideration. Alcohols and carboxylic acids - physical data - molecular weight, melting and boiling point, density, pKa values, as well as the number of carbon and hydrogen atoms in each molecule, are given for 150 different alcohols and acids. Suppose a block of brass and a block of wood have exactly the same mass. Ask each table group to come up with a working definition that they all agree on for mass, volume and density. Students will be able to measure the density of solids (regular and irregular shapes) and liquids. Local density can be obtained by a limiting process, based on the average density in a small volume around the point in question, taking the limit where the size of the volume approaches zero, \[\rho = \lim_{\Delta V \rightarrow 0} \frac{\Delta m}{\Delta V} \label{14.2}\] As an extension to this lesson, I like to assign a few practice problems. The density values of some solids, liquids and gases near room temperature are listed below (Table 1). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0). Because the atoms are closely packed, liquids, like solids, resist compression; an extremely large force is necessary to change the volume of a liquid. Therefore, the densities of liquids are often treated as constant, with the density equal to the average density. That is why a sharp needle is able to poke through skin when a small force is exerted, but applying the same force with a finger does not puncture the skin (Figure \(\PageIndex{4}\)). This occurs because the brass has a greater density than water, whereas the wood has a lower density than water. Calculate the density of the unknown liquid in g/mL to the correct number of significant digits. The cgs unit of density is the gram per cubic centimeter, g/cm3, where \[1\; g/cm^{3} = 1000\; kg/m^{3} \ldotp\] Introduction. The density of water increases with decreasing temperature, reaching a maximum at 4.0 °C, and then decreases as the temperature falls below 4.0 °C. (Table 1 lists Substance and Density at 20 °C in two columns.) Includes a heavily scaffolded worksheet on density calculations and a cloze exercise. Liquids deform easily when stressed and do not spring back to their original shape once a force is removed. (Shearing forces are forces applied tangentially to a surface, as described in Static Equilibrium and Elasticity.) What is density? Most of them will know to use a scale for the mass, but measuring the volume may produce some varied responses.
The structure of this three-dimensional lattice is represented as molecules connected by rigid bonds (modeled as stiff springs), which allow limited freedom for movement. Show students a marble, a die and a screw and ask them how they would determine the mass and volume for each. A list of materials needed is included in the resources. They are a set of tools for professionals, used primarily in regulatory safety testing and subsequent chemical and chemical product notification, chemical registration and in chemical evaluation. Several methods are for liquid substances only: the hydrometer, the immersed-body method (both are buoyancy methods) and the oscillating densitometer. The difference between the densities of solids, liquids and gases is due to the distance between the particles in each state of matter. The density is constant throughout, and the density of any sample of the substance is the same as its average density. When a liquid is placed in a container with no lid, it remains in the container.
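Since ρ = m/V, the water-displacement measurement reduces to a single division; here is a small sketch of the calculation students would do for an irregular solid, with example readings rather than data from the lesson:

```r
# Density from water displacement: rho = m / V, with the volume read off as the rise
# in a graduated cylinder (1 mL of water displaced corresponds to 1 cm^3 of object).
mass_g      <- 25.4                 # example balance reading
v_before_mL <- 50.0                 # cylinder level before the object is lowered in
v_after_mL  <- 59.5                 # cylinder level after
mass_g / (v_after_mL - v_before_mL) # ~2.67 g/cm^3: denser than water, so the object sinks
```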
how to make carbonic acid
Sign up. The decomposition of carbonic acid produces the characteristic soda fizz. During the making of soda, carbon dioxide is dissolved in water. (3) In this process, the acid equilibrium constant for removing the first proton from carbonic acid is small, k a 1 = 4.5 x 10-7, and to remove the second proton, (4) the second equilibrium constant is even smaller k a 2 = 4.7 x 10-11,. The carbonate buffer system controls the pH levels in blood. As stated, this process also creates carbonic acid. Make your products visible globally with Elite Membership. Carbon dioxide is an essential part of the carbonate buffer system 3. Reactions with other chemicals can release CO 2 into the air or mix it with water to form carbonic acid (H 2 CO … When carbon dioxide is dissolved in the blood, it creates a buffer composed of bicarbonate ions, HCO3- , carbonic acid, H2CO3, and carbon dioxide… Is there a way to know how much has been added? "Looking for a Similar Assignment? 2.9 Gas chemistry Ask the students what happens when water mixes with carbon dioxide. Also, I have found some carbonic acid powder in some commercial product such as this one. Once carbonic acid has been formed in water, then like all acids it dissociates. carbonic acid -K1 and K2 for freshwater, and K1' and K2' for seawater- and the consequences thereoff will appear spectacular. Carbon dioxide is constantly dissolving into the water that surrounds us, forming natural carbonic acid. Some says that carbonic acid is unstable and thus cannot supply it. When sodium bicarbonate (NaHCO 3), comes into contact with a strong acid, such as HCl, carbonic acid (H 2 CO 3), which is a weak acid… Carbonic acid is a chemical compound.Its chemical formula is H 2 CO 3.Carbonic acid is a weak acid.It forms two kinds of salts: the carbonates and the bicarbonates.In geology, carbonic acid causes limestone to dissolve, making calcium bicarbonate.. References From 10 to 15 tons of additional oil were obtained per ton of added carbonic acid. CO 2 + H 2 O ⇌ H 2 CO 3 The predominant species are simply loosely hydrated CO 2 molecules. Carbonic acid was injected in an area where one injection well and four production wells were available. Write the equation on the board: H2O + CO2 = H2CO3 (carbonic acid). Bicarbonate-Carbonic Acid Buffer. Carbon dioxide is responsible for the formation of bubbles and foam when bicarbonate of soda and vinegar are mixed. When the carbonic acid comes into contact with a small location on the steel, the acid dissolves the steel into free ions, causing that location to become positively charged. Carbonic acid is H 2 CO 3 while carbolic acid is C … Carbonic acid appears to have been the major acid volatile in ore fluids responsible for carbonate dissolution and hydrolysis of feldspars to illite and kaolinite. How many milliliters of 60% carbonic acid must be mixed with how many milliliters of 15% carbonic acid to make 650 milliliters of a 38% carbonic acid solution? Carbonic acid may trigger pitting, another specialized type of corrosion driven by electrochemical process. The conversion of ethanol into ethanoic acid would be a typical example. Carbonic acid is a weak acid that is produced when carbon dioxide is dissolved in water. The bicarbonate buffer system is an acid-base homeostatic mechanism involving the balance of carbonic acid (H 2 CO 3), bicarbonate ion (HCO − 3), and carbon dioxide (CO 2) in order to maintain pH in the blood and duodenum, among other tissues, to support proper metabolic function. 
For practical reasons the values of the dissociation constants are generally given as pK = −log10 K, i.e. K = 10^(−pK) (9.21). The K0, K1 and K2 values for freshwater (an ideal solution) and for seawater differ, and the consequences are discussed for carbonic acid and the carbonate salts. Mix carbon-containing chemicals. The acid even appears in rain. Because of the strength of this acid, manufacturers often add a base such as sodium bicarbonate to reduce the acidity of the drink. In geology, limestone may react with rainwater, which is mildly acidic, to form a solution of calcium bicarbonate; evaporation of such solutions may result in the formation of stalactites and stalagmites. The chemical formula for carbonic acid is H2CO3. To make a carboxylic acid from a primary alcohol, the alcohol is heated under reflux with an excess of a mixture of potassium dichromate(VI) solution and dilute sulphuric acid. During carbonation, carbonic acid forms in the water, giving carbonated water a pH between 3 and 4. Carbonic acid (H2CO3) is a relatively weak, naturally occurring acid. Catalyzed by carbonic anhydrase, carbon dioxide (CO2) reacts with water (H2O) to form carbonic acid. In the oil-recovery trial mentioned above, the injection of the carbonated water resulted in a 6 percent increase in oil recovery compared with waterflood. Carbonic acid (H2CO3) is formed in small amounts when its anhydride, carbon dioxide (CO2), dissolves in water. The bicarbonate-carbonic acid buffer works in a fashion similar to phosphate buffers.
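The buffering role described here is commonly summarized by the Henderson-Hasselbalch relation. The numbers below are illustrative assumptions (typical arterial values of about 24 mM bicarbonate and 1.2 mM dissolved CO2, with an effective pKa near 6.1); they are not taken from this text:

$$ \mathrm{pH} = \mathrm{p}K_a + \log_{10}\frac{[\mathrm{HCO_3^-}]}{[\mathrm{CO_2(aq)}]} \approx 6.1 + \log_{10}\frac{24}{1.2} \approx 6.1 + 1.3 = 7.4 $$

which is how the bicarbonate-carbonic acid pair holds blood pH near its normal value.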
Carbolic acid gives urine a smoky quality. The equilibrium on the left is the association of the dissolved carbon dioxide with a water molecule to form carbonic acid. In this experiment, you will use a neutralization reaction between a strong acid and a strong base to make a salt. Carbonic acid is a type of weak acid formed from the dissolving of carbon dioxide in water: carbonic acid is water + CO2, though not all of the CO2 dissolves in the water. The reaction between the carbon dioxide and the water is a synthesis reaction, sometimes known as a combination reaction, and creates an acid known as carbonic acid. Science: how to make carbonic acid. When carbon dioxide is added to water, the gas adds acidity to the water. Carbonic acid can be considered to be a diprotic acid from which two series of salts (the carbonates and the bicarbonates) can be derived. The key difference between carbonic acid and carbolic acid is that carbonic acid is a carboxylic acid compound, whereas carbolic acid is an alcohol. Tell the students that carbonic acid is what makes soda fizzy. When the cylinder containing the liquid carbonic acid is attached to the fountain, rapid evaporation produces an intense cold, which reduces the temperature of the water to be charged in a corresponding degree, and the absorption of the gas is accomplished more rapidly and easily, at a much lower pressure than is otherwise necessary. As you probably know, our atmosphere has a lot of carbon dioxide in it; it is also thoroughly saturated with water. From this, we might deduce that we live in a rather acidic environment, and we do. Are there any simple ways to make carbonic acid from other chemicals (e.g. sodium carbonate)? Therefore, I want to make it myself. It would actually be quite uncommon to make an acid starting from an aldehyde, but very common to start from a primary alcohol. Undissociated carbonic acid will only be present (in significant concentration) in solutions that are mildly acidic. Carbolic acid affects respiration as it oxidizes in the body: a medium dose of the substance will tend to halt, or paralyze, respiration, whereas a larger amount of the acid will stop it. Though it garners few public headlines, carbonic acid, the hydrated form of carbon dioxide, is critical to both the health of the atmosphere and the human body. The lower the pH, the more acidic a solution is. Despite its acidic properties, there is no evidence to suggest that carbonic acid in beverages does you any harm. (Unit 2: Further Chemical Reactions, Rates and Equilibrium, Calculations and Organic Chemistry.) Carbonic acid appears frequently in the natural world. Manufacturers then use the carbonated water as an ingredient to make flavored carbonated drinks. Carbonic acid is added to drinks like soda to make them taste fizzy. It's so straightforward I'm not sure how to explain it: CO2 + H2O -> H2CO3. One interesting thing about this reaction is that there is an enzyme that catalyzes it, carbonic anhydrase, which has one of the fastest "turnover numbers" of any known enzyme. However, because it exists for only a fraction of a second before changing into a mix of hydrogen and bicarbonate ions, carbonic acid has long remained an enigma; the reverse reaction is simply H2CO3 -> H2O + CO2.
A hydrogen atom from the carbonic acid gets into the water as a hydrogen ion (H+). However, I find that many suppliers fail to provide carbonic acid (either in powder or liquid form). The water is now acidic, i.e. a weak carbonic acid solution. The hydrogen ion bonds to the carbonate ion in ocean water and creates bicarbonate ion (HCO3−), which the shell-making organisms can't use. How many milliliters of 60% carbonic acid must be mixed with how many milliliters of 15% carbonic acid to make 650 milliliters of a 38% carbonic acid solution? Please explain how you arrived at your answer. Although the terms carbonic acid and carbolic acid sound similar, they refer to two different chemical compounds. The carbon and oxygen that make up CO2 are found in a number of chemicals and minerals classified as carbonates or, when hydrogen is also present, bicarbonates. The chemical formula of carbonic acid is H2CO3.
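A sketch of how the mixture question just posed can be worked, treating the percentages as simple volume fractions and assuming the volumes are additive:

$$ \begin{aligned} x + y &= 650 \\ 0.60\,x + 0.15\,y &= 0.38 \times 650 = 247 \\ 0.60\,x + 0.15\,(650 - x) &= 247 \;\Rightarrow\; 0.45\,x = 149.5 \\ x &\approx 332.2\ \text{mL of the 60\% solution}, \qquad y \approx 317.8\ \text{mL of the 15\% solution}. \end{aligned} $$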
Patrick Fasano
Graduate student, nuclear theorist, breaker of supercomputers.
Posts by Collection
Many Fermion Dynamics - nuclear (MFDn)
MFDn is a configuration interaction code for performing no-core configuration interaction (NCCI) calculations for light nuclei using realistic 2- and 3-body interactions. It is an MPI/OpenMP hybrid code with Lanczos and LOBPCG eigensolvers. It has been extensively optimized for Cori KNL, and is undergoing reimplementation for Perlmutter GPU.
Symplectic No-Core Configuration Interaction (spncci)
spncci is a code for ab initio calculations in an $\mathrm{Sp}(3,\mathbb{R})$ symmetry-adapted basis, via the symplectic no-core configuration interaction (SpNCCI) approach. Many-body Hamiltonian matrix elements are evaluated through a laddering procedure, involving $\mathrm{Sp}(3,\mathbb{R})$ and $\mathrm{SU}(3)$ group theoretical coefficients, leading to a dense Hamiltonian matrix, which is then diagonalized via the Lanczos algorithm. Algorithms are structured for efficient parallelization, in collaboration with Lawrence Berkeley National Laboratory (LBNL) Scalable Solvers group. Currently the code is OpenMP parallelized, with exploratory work on MPI/OpenMP implementation. Full MPI/OpenMP parallelization and integration with a distributed, iterative eigensolver for dense matrices with block structure is anticipated by early in the new allocation year.
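For readers unfamiliar with the Lanczos step mentioned above, here is a generic illustrative sketch in Python/NumPy. It is a textbook toy, not the spncci or MFDn implementation; the random symmetric matrix H stands in for a many-body Hamiltonian, and all names (lanczos, H, T, m) are made up for this example.

import numpy as np

def lanczos(A, m, rng=np.random.default_rng(0)):
    """Return the m x m Lanczos tridiagonal matrix T for a symmetric A.

    The eigenvalues of T (Ritz values) approximate the extremal
    eigenvalues of A, which converge first as m grows.
    """
    n = A.shape[0]
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = v
    for j in range(m):
        w = A @ V[:, j]                       # matrix-vector product
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        # full reorthogonalization, affordable at this toy size
        w -= V[:, : j + 1] @ (V[:, : j + 1].T @ w)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

# toy symmetric "Hamiltonian"
rng = np.random.default_rng(1)
H = rng.standard_normal((200, 200))
H = 0.5 * (H + H.T)
T = lanczos(H, 30)
print(np.linalg.eigvalsh(T)[:3])   # lowest few Ritz values (approximations)
print(np.linalg.eigvalsh(H)[:3])   # exact lowest eigenvalues, for comparison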
Perspectives on Nuclear Structure and Scattering with the Ab Initio No-Core Shell Model
Published in JPS Conference Proceedings, 2018
Nuclear structure and reaction theory are undergoing a major renaissance with advances in many-body methods, strong interactions with greatly improved links to Quantum Chromodynamics (QCD), the advent of high performance computing, and improved computational algorithms. Predictive power, with well-quantified uncertainty, is emerging from non-perturbative approaches along with the potential for new discoveries such as predicting nuclear phenomena before they are measured. We present an overview of some recent developments and discuss challenges that lie ahead. Our focus is on explorations of alternative truncation schemes in the harmonic oscillator basis, of which our Japanese–United States collaborative work on the No-Core Monte-Carlo Shell Model is an example. Collaborations with Professor Takaharu Otsuka and his group have been instrumental in these developments.
DOI: 10.7566/JPSCP.23.012001 | arXiv: 1804.10995
Recommended citation: J. P. Vary, P. Maris, P. J. Fasano, and M. A. Caprio, JPS Conf. Proc. 23, 012001 (2018) (download)
First Measurement of the B(E2;3/2−→1/2−) Transition Strength in 7Be: Testing Ab Initio Predictions for A=7 Nuclei
Published in Physical Review C, 2019
Electromagnetic observables are able to give insight into collective and emergent features in nuclei, including nuclear clustering. These observables also provide strong constraints for ab initio theory, but comparison of these observables between theory and experiment can be difficult due to the lack of convergence for relevant calculated values, such as $E2$ transition strengths. By comparing the ratios of $E2$ transition strengths for mirror transitions, we find that a wide range of ab initio calculations give robust and consistent predictions for this ratio. To experimentally test the validity of these ab initio predictions, we performed a Coulomb excitation experiment to measure the $B(E2;3/2^− \rightarrow 1/2^−)$ transition strength in 7Be for the first time. A $B(E2;3/2^− \rightarrow 1/2^−)$ value of 26(6)stat(3)syst e2 fm4 was deduced from the measured Coulomb excitation cross section. This result is used with the experimentally known 7Li $B(E2;3/2^− \rightarrow 1/2^−)$ value to provide an experimental ratio to compare with the ab initio predictions. Our experimental value is consistent with the theoretical ratios within $1\sigma$ uncertainty, giving experimental support for the value of these ratios. Further work in both theory and experiment can give insight into the robustness of these ratios and their physical meaning.
DOI: 10.1103/PhysRevC.99.064320 | arXiv: 2109.07312
Recommended citation: S. L. Henderson, T. Ahn, M. A. Caprio, P. J. Fasano, et al., Phys. Rev. C 99, 064320 (2019). (download)
Ab initio rotation in 10Be
Published in Bulgarian Journal of Physics, 2019
Ab initio theory describes nuclei from a fully microscopic formulation, with no presupposition of collective degrees of freedom, yet signatures of clustering and rotation nonetheless arise. We can therefore look to ab initio theory for an understanding of the nature of these emergent phenomena. To probe the nature of rotation in 10Be, we examine the predicted rotational spectroscopy from no-core configuration interaction (NCCI) calculations with the Daejeon16 internucleon interaction, and find spectra suggestive of coexisting rotational structures having qualitatively different intrinsic deformations: one triaxial and the other with large axial deformation arising primarily from the neutrons.
arXiv: 1912.06082
Recommended citation: M. A. Caprio, P. J. Fasano, A. E. McCoy, P. Maris, and J. P. Vary, Bulg. J. Phys. 46, 445 (2019). (download)
Probing ab initio emergence of nuclear rotation
Published in The European Physical Journal A, 2020
Structural phenomena in nuclei, from shell structure and clustering to superfluidity and collective rotations and vibrations, reflect emergent degrees of freedom. Ab initio theory describes nuclei directly from a fully microscopic formulation. We can therefore look to ab initio theory as a means of exploring the emergence of effective degrees of freedom in nuclei. For the illustrative case of emergent rotational bands in the Be isotopes, we establish an understanding of the underlying oscillator space and angular momentum (orbital and spin) structure. We consider no-core configuration interaction (NCCI) calculations for 7,9,11Be with the Daejeon16 internucleon interaction. Although shell model or rotational degrees of freedom are not assumed in the ab initio theory, the NCCI results are suggestive of the emergence of effective shell model degrees of freedom ($0\hbar\omega$ and $2\hbar\omega$ excitations) and $LS$-scheme rotational degrees of freedom, consistent with an $\mathrm{SU}(3)$ Elliott–Wilsdon description. These results provide some basic insight into the connection between emergent effective collective rotational and shell model degrees of freedom in these light nuclei and the underlying ab initio microscopic description.
DOI: 10.1140/epja/s10050-020-00112-0 | arXiv: 1912.00083
Recommended citation: M. A. Caprio, P. J. Fasano, P. Maris, A. E. McCoy, and J. P. Vary, Eur. Phys. J. A 56, 120 (2020). (download)
Emergent Sp(3,ℝ) Dynamical Symmetry in the Nuclear Many-Body System from an Ab Initio Description
Published in Physical Review Letters, 2020
Ab initio nuclear theory provides not only a microscopic framework for quantitative description of the nuclear many-body system, but also a foundation for deeper understanding of emergent collective correlations. A symplectic $\mathrm{Sp}(3,\mathbb{R}) \supset \mathrm{U}(3)$ dynamical symmetry is identified in ab initio predictions, from a no-core configuration interaction approach, and found to provide a qualitative understanding of the spectrum of 7Be. Low-lying states form an Elliott $\mathrm{SU}(3)$ spectrum, while an $\mathrm{Sp}(3,\mathbb{R})$ excitation gives rise to an excited rotational band with strong quadrupole connections to the ground state band.
DOI: 10.1103/PhysRevLett.125.102505 | arXiv: 2008.05522
Recommended citation: A. E. McCoy, M. A. Caprio, T. Dytrych, and P. J. Fasano, Phys. Rev. Lett. 125, 102505 (2020). (download)
Intrinsic operators for the translationally-invariant many-body problem
Published in Journal of Physics G: Nuclear and Particle Physics, 2020
The need to enforce fermionic antisymmetry in the nuclear many-body problem commonly requires use of single-particle coordinates, defined relative to some fixed origin. To obtain physical operators which nonetheless act on the nuclear many-body system in a Galilean-invariant fashion, thereby avoiding spurious center-of-mass contributions to observables, it is necessary to express these operators with respect to the translational intrinsic frame. Several commonly-encountered operators in nuclear many-body calculations, including the magnetic dipole and electric quadrupole operators (in the impulse approximation) and generators of $\mathrm{U}(3)$ and $\mathrm{Sp}(3,\mathbb{R})$ symmetry groups, are bilinear in the coordinates and momenta of the nucleons and, when expressed in intrinsic form, become two-body operators. To work with such operators in a second-quantized many-body calculation, it is necessary to relate three distinct forms: the defining intrinsic-frame expression, an explicitly two-body expression in terms of two-particle relative coordinates, and a decomposition into one-body and separable two-body parts. We establish the relations between these forms, for general (non-scalar and non-isoscalar) operators bilinear in coordinates and momenta.
DOI: 10.1088/1361-6471/ab9d38 | arXiv: 2004.1202
Recommended citation: M. A. Caprio, A. E. McCoy, and P. J. Fasano, J. Phys. G: Nucl. Part. Phys. 47, 122001 (2020). (download)
White paper: from bound states to the continuum
This white paper reports on the discussions of the 2018 Facility for Rare Isotope Beams Theory Alliance (FRIB-TA) topical program "From bound states to the continuum: Connecting bound state calculations with scattering and reaction theory". One of the biggest and most important frontiers in nuclear theory today is to construct better and stronger bridges between bound state calculations and calculations in the continuum, especially scattering and reaction theory, as well as teasing out the influence of the continuum on states near threshold. This is particularly challenging as many-body structure calculations typically use a bound state basis, while reaction calculations more commonly utilize few-body continuum approaches. The many-body bound state and few-body continuum methods use different language and emphasize different properties. To build better foundations for these bridges, we present an overview of several bound state and continuum methods and, where possible, point to current and possible future connections.
DOI: 10.1088/1361-6471/abb129 | arXiv: 1912.00451
Recommended citation: C. W. Johnson, K. D. Launey, et al., J. Phys. G: Nucl. Part. Phys. 47, 123001 (2020). (download)
Rotational bands beyond the Elliott model
Rotational bands are commonplace in the spectra of atomic nuclei. Inspired by early descriptions of these bands by quadrupole deformations of a liquid drop, Elliott constructed discrete nucleon representations of $\mathrm{SU}(3)$ from fermionic creation and annihilation operators. Ever since, Elliott's model has been foundational to descriptions of rotation in nuclei. Later work, however, suggested the symplectic extension $\mathrm{Sp}(3,\mathbb{R})$ provides a more unified picture. We decompose no-core shell-model nuclear wave functions into symmetry-defined subspaces for several beryllium isotopes, as well as 20Ne, using the quadratic Casimirs of both Elliott's $\mathrm{SU}(3)$ and $\mathrm{Sp}(3,\mathbb{R})$. The band structure, delineated by strong $B(E2)$ values, has a more consistent description in $\mathrm{Sp}(3,\mathbb{R})$ rather than $\mathrm{SU}(3)$. In particular, we confirm previous work finding in some nuclides strongly connected upper and lower bands with the same underlying symplectic structure.
DOI: 10.1088/1361-6471/abdd8e | arXiv: 2011.08307
Recommended citation: R. Zbikowski, C. W. Johnson, A. E. McCoy, M. A. Caprio, and P. J. Fasano, J. Phys. G: Nucl. Part. Phys. 48, 075102 (2021). (download)
Quadrupole moments and proton-neutron structure in p-shell mirror nuclei
Electric quadrupole (E2) matrix elements provide a measure of nuclear deformation and related collective structure. Ground-state quadrupole moments in particular are known to high precision in many p-shell nuclei. While the experimental electric quadrupole moment only measures the proton distribution, both proton and neutron quadrupole moments are needed to probe proton-neutron asymmetry in the nuclear deformation. We seek insight into the relation between these moments through the ab initio no-core configuration interaction (NCCI), or no-core shell model (NCSM), approach. Converged ab initio calculations for quadrupole moments are particularly challenging, due to sensitivity to long-range behavior of the wave functions. We therefore study more robustly-converged ratios of quadrupole moments: across mirror nuclides, or of proton and neutron quadrupole moments within the same nuclide. In calculations for mirror pairs in the p-shell, we explore how well the predictions for mirror quadrupole moments agree with experiment and how well isospin (mirror) symmetry holds for quadrupole moments across a mirror pair.
DOI: 10.1103/PhysRevC.104.034319 | arXiv: 2106.12128
Recommended citation: M. A. Caprio, P. J. Fasano, P. Maris, and A. E. McCoy, Phys. Rev. C 104, 034319 (2021). (download)
Accelerating quantum many-body configuration interaction with directives
Accepted for publication in Lecture Notes in Computer Science
Many-Fermion Dynamics-nuclear, or MFDn, is a configuration interaction (CI) code for nuclear structure calculations. It is a platform-independent Fortran 90 code using a hybrid MPI+X programming model. For CPU platforms the application has a robust and optimized OpenMP implementation for shared memory parallelism. As part of the NESAP application readiness program for NERSC's latest Perlmutter system, MFDn has been updated to take advantage of accelerators. The current mainline GPU port is based on OpenACC. In this work we describe some of the key challenges of creating an efficient GPU implementation. Additionally, we compare the support of OpenMP and OpenACC on AMD and NVIDIA GPUs.
Recommended citation: B. G. Cook, P. J. Fasano, P. Maris, C. Yang, and D. Oryspayev, arXiv:2110.10765 [cs.DC]
Symmetry and shape coexistence in 10Be
Accepted for publication in Bulgarian Journal of Physics
Within the low-lying spectrum of 10Be, multiple rotational bands are found, with strikingly different moments of inertia. A proposed interpretation has been that these bands variously represent triaxial rotation and prolate axially-deformed rotation. The bands are well-reproduced in ab initio no-core configuration interaction (NCCI) calculations. We use the calculated wave functions to elucidate the nuclear shapes underlying these bands, by examining the Elliott $\mathrm{SU}(3)$ symmetry content of these wave functions. The ab initio results support an interpretation in which the ground-state band, along with an accompanying $K=2$ side band, represent a triaxial rotor, arising from an $\mathrm{SU}(3)$ irreducible representation in the $0\hbar\omega$ space. Then, the lowest excited $K=0$ band represents a prolate rotor, arising from an $\mathrm{SU}(3)$ irreducible representation in the $2\hbar\omega$ space.
Recommended citation: M. A. Caprio, A.E. McCoy, P. J. Fasano, and T. Dytrych, arXiv:2112.04056 [nucl-th]
Natural orbitals for the ab initio no-core configuration interaction approach
Submitted to Physical Review C
Ab initio no-core configuration interaction (NCCI) calculations for the nuclear many-body problem have traditionally relied upon an antisymmetrized product (Slater determinant) basis built from harmonic oscillator orbitals. The accuracy of such calculations is limited by the finite dimensions which are computationally feasible for the truncated many-body space. We therefore seek to improve the accuracy obtained for a given basis size by optimizing the choice of single-particle orbitals. Natural orbitals, which diagonalize the one-body density matrix, provide a basis which maximizes the occupation of low-lying orbitals, thus accelerating convergence in a configuration-interaction basis, while also possibly providing physical insight into the single-particle structure of the many-body wave function. We describe the implementation of natural orbitals in the NCCI framework, and examine the nature of the natural orbitals thus obtained, the properties of the resulting many-body wave functions, and the convergence of observables. After taking 3He as an illustrative testbed, we explore aspects of NCCI calculations with natural orbitals for the ground state of the p-shell neutron halo nucleus 6He.
Recommended citation: P. J. Fasano, Ch. Constantinou, M. A. Caprio, P. Maris, and J. P. Vary, arXiv:2112.04027 [nucl-th]
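A minimal sketch of the natural-orbital construction described in the abstract above, diagonalizing a one-body density matrix to obtain occupations and orbitals, in Python/NumPy. The 3x3 density matrix and its values are invented for illustration, not output from an NCCI code.

import numpy as np

# A made-up one-body density matrix rho_ij = <a†_j a_i> in some orbital basis.
# It must be Hermitian; its eigenvalues are the natural-orbital occupations.
rho = np.array([
    [1.90, 0.10, 0.02],
    [0.10, 0.08, 0.01],
    [0.02, 0.01, 0.02],
])

occupations, orbitals = np.linalg.eigh(rho)    # ascending eigenvalues
order = np.argsort(occupations)[::-1]          # sort by decreasing occupation
occupations, orbitals = occupations[order], orbitals[:, order]

print("natural-orbital occupations:", occupations)
# Columns of `orbitals` express each natural orbital in the original basis;
# a many-body basis built from the most-occupied orbitals converges faster.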
Natural orbital methods for ab initio nuclear theory
Computational Methods in Physics
Undergraduate course, University of Notre Dame, Department of Physics, 2017
Grader/in-class TA for undergraduate computational physics course.
Correction to: Numerical Simulation and Experimental Validation of Nondendritic Structure Formation in Magnesium Alloy Under Oscillation and Ultrasonic Vibration
Anshan Yu1,2,
Xiangjie Yang1,2,
HongMin Guo2,3,
Kun Yu1,2,
Xiuyuan Sun1,2 &
Zixin Li1,2
Metallurgical and Materials Transactions B, volume 50, page 3126 (2019)
The Original Article is available
Correction to: Metallurgical and Materials Transactions B https://doi.org/10.1007/s11663-019-01654-5
In Equation 1, "1 − ϕ" should be "1 − ϕ²".

In Equation 2, the last "∂u/∂t" should be "∂ϕ/∂t".
The corrected equations are
$$ \begin{aligned} &\tau \left( \varvec{n} \right)\left[ {1 + (1 - k)u} \right]\frac{\partial \varphi}{\partial t} \\ &\quad = \varphi (1 - \varphi^{2}) - \lambda \left( {1 - \varphi^{2} } \right)^{2} \left( {u + \theta_{\text{sys}} } \right) + \nabla \cdot \left\{ {\left[ {W(\varvec{n})} \right]^{2} \nabla \varphi } \right\} \\ & \qquad - \,\frac{\partial }{\partial x}\left[ {W(\varvec{n})W^{\prime}(\varvec{n})\frac{\partial \varphi }{\partial y}} \right] + \frac{\partial }{\partial y}\left[ {W(\varvec{n})W^{\prime}(\varvec{n})\frac{\partial \varphi }{\partial x}} \right] \\ \end{aligned} $$
$$ \left( {\frac{1 + k}{2} - \frac{1 - k}{2}\varphi } \right)\left( {\frac{\partial u}{\partial t} + U \cdot \nabla u} \right) = \nabla \left( {D\frac{1 - \varphi }{2}\nabla u + J_{AT} } \right) + \frac{1}{2}\frac{\partial \varphi}{\partial t}\left[ {1 + \left( {1 - k} \right)u} \right] $$
The caption of Figure 21 should say, "Microstructure of solidified AZ91D at a fixed oscillation frequency (1 Hz), ultrasonic power (1000 W), and oscillation amplitude (4π/3) for inclined angles of (a) 25 deg, (b) 20 deg, and (c) 15 deg."
School of Mechanical and Electrical Engineering, Nanchang University, Nanchang, 330031, P.R. China
Anshan Yu, Xiangjie Yang, Kun Yu, Xiuyuan Sun & Zixin Li
Key Laboratory of Near Net Forming in Jiangxi Province, Nanchang, 330031, P.R. China
Anshan Yu, Xiangjie Yang, HongMin Guo, Kun Yu, Xiuyuan Sun & Zixin Li
Department of Materials Science and Engineering, Nanchang University, Nanchang, 330031, P.R. China
HongMin Guo
Anshan Yu
Xiangjie Yang
Kun Yu
Xiuyuan Sun
Zixin Li
Correspondence to Xiangjie Yang.
Yu, A., Yang, X., Guo, H. et al. Correction to: Numerical Simulation and Experimental Validation of Nondendritic Structure Formation in Magnesium Alloy Under Oscillation and Ultrasonic Vibration. Metall Mater Trans B 50, 3126 (2019). https://doi.org/10.1007/s11663-019-01694-x
DOI: https://doi.org/10.1007/s11663-019-01694-x
How to construct tuples with a given order?
How do I create a list of tuples with an ordering imposed on them, where each element is from a generating set? Specifically, I'm trying to create a listing of tuples $(x_1, x_2, ..., x_n)$ such that $x_1 > x_2$, $x_2 \leq x_3 \leq \cdots \leq x_n$ where each $x_i$ is in (say) $X = \{a,b,c,d\}$. The elements of $X$ are in lexicographic order.
rank = 4;
weight = 3;
X = Range[rank];
bc = Tuples[X, weight];
Cases[bc, {x1_, x2_, x3_} /; x1 > x2 && x2 <= x3]
Issues:

1. rank and weight are independent; they don't need to be equal.

2. I'd like to say x1_, ..., xrank_, but I'm not sure how to specify x_rank. Also x2 <= x3 <= ... <= xrank. (I'll loop on rank and weight, so these won't be hard-coded values.)

3. At the end, 1, 2, ..., rank should be traded for a, b, ... (whatever letter).
list-manipulation string-manipulation
Mark Pedigo
$\begingroup$ Considering that your tuples are of length of your list, you can try something like this,Block[{k = {a, b, c, d} /. {a -> 1, b -> 2, c -> 3, d -> 4}, ca}, {ca = Tuples[k, Length[k]]; Cases[ca, {a_, b_, c_, d_} /; a > b && b < c < d] /. {1 -> a, 2 -> b, 3 -> c, 4 -> d}}] $\endgroup$
– Sejwal
If I understand the question, generating all Tuples and then filtering is wasteful, and will "blow up" for long sets. Since you want all but one element in canonical order, I believe you should generate those, then prefix as needed to complete the sequence.
Here is a mundane loop-based way to build the tuples; I couldn't think of anything that ran faster.
tuples[rank_Integer, k_Integer] :=
Block[{i},
i[0] = 1;
  Flatten[
    Table @@ Join[{First /@ #}, #] &@Array[{i[#], i[# - 1], rank} &, k - 1],
    k - 2] // Join @@ (Table[{i, ##}, {i, # + 1, rank}] & @@@ #) &
 ]
tuples[5, 3]
{{2, 1, 1}, {3, 1, 1}, {4, 1, 1}, {5, 1, 1}, {2, 1, 2}, {3, 1, 2}, {4, 1, 2}, {5, 1, 2},
{2, 1, 3}, {3, 1, 3}, {4, 1, 3}, {5, 1, 3}, {2, 1, 4}, {3, 1, 4}, {4, 1, 4}, {5, 1, 4},
{4, 3, 3}, {5, 3, 3}, {4, 3, 4}, {5, 3, 4}, {4, 3, 5}, {5, 3, 5}, {5, 4, 4}, {5, 4, 5}}
You can fill these tuples with arbitrary expressions like this:
x = {"a", "b", "c", "d", "e"};
x[[#]] & /@ tuples[5, 3]
{{"b", "a", "a"}, {"c", "a", "a"}, {"d", "a", "a"}, . . .
{"e", "c", "e"}, {"e", "d", "d"}, {"e", "d", "e"}}
The function will work on values that would not work with Tuples:
tuples[12, 8] // Length
Using Tuples before filtering would generate 12^8 = 429,981,696 sets.
Mr.Wizard
It's a brute force approach:
set = CharacterRange["A", "E"];
n = 4;
tuples = Tuples[set, n];
Select[tuples, Order[#[[1]], #[[2]]] == 1 && And @@ Negative@Differences@Ordering@Rest[#] &]
{"A", "C", "B", "A"}, {"A", "D", "B", "A"}, {"A", "D", "C", "A"}, {"A", "D", "C", "B"}, {"A", "E", "B", "A"}, {"A", "E", "C", "A"}, {"A", "E", "C", "B"}, {"A", "E", "D", "A"}, {"A", "E", "D", "B"}, {"A", "E", "D", "C"}, {"B", "C", "B", "A"}, {"B", "D", "B", "A"}, {"B", "D", "C", "A"}, {"B", "D", "C", "B"}, {"B", "E", "B", "A"}, {"B", "E", "C", "A"}, {"B", "E", "C", "B"}, {"B", "E", "D", "A"}, {"B", "E", "D", "B"}, {"B", "E", "D", "C"}, {"C", "D", "B", "A"}, {"C", "D", "C", "A"}, {"C", "D", "C", "B"}, {"C", "E", "B", "A"}, {"C", "E", "C", "A"}, {"C", "E", "C", "B"}, {"C", "E", "D", "A"}, {"C", "E", "D", "B"}, {"C", "E", "D", "C"}, {"D", "E", "B", "A"}, {"D", "E", "C", "A"}, {"D", "E", "C", "B"}, {"D", "E", "D", "A"}, {"D", "E", "D", "B"}, {"D", "E", "D", "C"}}
C. E.
Use Rest in the condition:
X = Range@rank;
subst = Thread[X -> {a, b, c, d}];
bc = Tuples[X, weight]
Cases[bc, x_ /; x[[1]] > x[[2]] && LessEqual @@ Rest@x] /. subst
{{b, a, a}, {b, a, b}, {b, a, c}, {b, a, d}, {c, a, a}, {c, a, b},
{c, a, c}, {c, a, d}, {c, b, b}, {c, b, c}, {c, b, d}, {d, a, a},
{d, a, b}, {d, a, c}, {d, a, d}, {d, b, b}, {d, b, c}, {d, b, d},
{d, c, c}, {d, c, d}}
István Zachar
Block[{k =
    CharacterRange["A", "D"] /.
     Thread[CharacterRange["A", "D"] ->
       Range[Length[CharacterRange["A", "D"]]]], ca},
 {ca = Tuples[k, Length[k]];
  Cases[ca, {a_, b_, c_, d_} /; a > b && b < c < d] /.
   Thread[Range[Length[CharacterRange["A", "D"]]] -> CharacterRange["A", "D"]]}]
{{{"B", "A", "B", "C"}, {"B", "A", "B", "D"}, {"B", "A", "C", "D"}, {"C", "A", "B", "C"}, {"C", "A", "B", "D"}, {"C", "A", "C", "D"}, {"C", "B", "C", "D"}, {"D", "A", "B", "C"}, {"D", "A", "B", "D"}, {"D", "A", "C", "D"}, {"D", "B", "C", "D"}}}
Sejwal
2 editions of Sources and surface representation of the cardiac electric field found in the catalog.
Sources and surface representation of the cardiac electric field.
Internationales Colloquium Vectorcardiographicum, 7th, Smolenice, Czechoslovak Republic, 1966
[Papers presented at the] 7th International Colloquium Vectorcardiographicum, Smolenice, 13-16 September, 1966. Editor: Ivan Ruttkay-Nedecký, assistant editor: Eva Kellerová.
by Internationales Colloquium Vectorcardiographicum, 7th, Smolenice, Czechoslovak Republic, 1966
Published 1970 by Pub. House of the Slovak Academy of Sciences in Bratislava.
Electrocardiography -- Congresses,
Vectorcardiography -- Congresses
Contributions Kellerová, Eva; Ruttkay-Nedecký, Ivan; Slovenská akadémia vied
LC Classifications RC683.5 E5 I5 1966
Electrocardiography is the process of producing an electrocardiogram (ECG or EKG). It is a graph of voltage versus time of the electrical activity of the heart using electrodes placed on the skin. These electrodes detect the small electrical changes that are a consequence of cardiac muscle depolarization followed by repolarization during each cardiac cycle (heartbeat). (MedlinePlus)

This argument breaks down at the surface of the conductor, because in that case part of the Gaussian surface must lie outside the conducting object, where there is an electric field. Part C: Assume that at some point just outside the surface of the conductor, the electric field has magnitude E and is directed toward the surface of the conductor.
The heart valves on the inner surface of the heart are covered by the endocardium. Blood returns to the heart from the coronary circulation via the ___. The action of cardiac muscle tissue contracting on its own in the absence of neural stimulation is called automaticity. The use of direct current (DC) is another source of static electric fields. This is for example the case of rail systems using DC, which can generate fields inside the train. Televisions and computer screens with cathode ray tubes can also generate electrostatic fields. These fields become visible, for instance, when screens attract dust.
Johann (Johan) Schweigger of Nuremberg increases the movement of magnetized needles in electromagnetic fields. He found that by wrapping the electric wire into a coil of turns, the effect on the needle was multiplied. He proposed that a magnetic field revolved around a wire carrying a current, which was later proven by Michael Faraday. The Earth's Electric Field provides an integrated and comprehensive picture of the generation of the terrestrial electric fields, their dynamics, and how they couple and propagate. The Earth's Electric Field provides basic principles of terrestrial electric field related topics, but also a critical summary of electric field related observations and their significance.
Sources and surface representation of the cardiac electric field by Internationales Colloquium Vectorcardiographicum, 7th, Smolenice, Czechoslovak Republic, 1966
Physiol Res. 59, Suppl. 1. Analysis of the electrical heart field. Kittnar O (1), Mlcek M. Author information: (1) Institute of Physiology, Charles University in Prague, First Faculty of Medicine, Prague, Czech Republic. There are three basic procedures used for an assessment of the electrical heart field from the body surface: standard electrocardiography …
Book review: Sources and surface representation of the cardiac electric field. Edited by Dr. Ivan Ruttkay-Nedecký, CSc., and Eva Kellerová, Bratislava, Czechoslovakia, Publishing House of the Slovak Academy of Sciences.
Author(s): International Colloquium Vectorcardiographicum, 7th, Smolenice; Ústav normálnej a patologickej fyziológie (Slovenská akadémia vied). Title(s): Sources and surface representation of the cardiac electric field. Country of Publication: Slovakia. Publisher: Bratislava, Publishing House of the Slovak Academy of Sciences.

... where each monopole is located at (x, y, z) while the field point is at (x', y', z'). The field described by the equation for a point current source is identical to the electrostatic field from a point charge, provided that I0 is replaced by Q0 (the charge magnitude) and σ is replaced by ε (the permittivity). This result is not surprising, given the aforementioned exchanges.
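The analogy in that passage can be made explicit. As a sketch in standard notation (not a quotation from the catalogued book): the potential of a point current source of strength I0 in an infinite homogeneous conductor of conductivity σ has the same form as the electrostatic potential of a point charge Q0 in a medium of permittivity ε,

$$ \Phi = \frac{I_0}{4\pi\sigma r}, \qquad V = \frac{Q_0}{4\pi\varepsilon r}, \qquad r = \sqrt{(x-x')^2 + (y-y')^2 + (z-z')^2}, $$

so replacing I0 by Q0 and σ by ε maps one solution onto the other.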
The electric generators in the human heart give rise to electrocardiograms and magnetocardiograms. The information contained in ECGs and MCGs is interdependent, although the possibility that a measurable magnetic field is generated by sources giving no electric field (vortex sources) cannot be excluded.
field due to the electric activity of the heart muscle. A simple model results from assuming that the cardiac sources are represented by a central on the left surface of the. Abstract. Throughout the cardiac cycle the heart cells deliver varying amounts of electric current to the surroundings tissues.
The effect of this at the body surface are Potentials which change continuously during the course of a heart : Adriaan van Oosterom. Despite these obvious advantages, body surface electrocardiographic mapping has not become a routinely used clinical method.
The number of research papers on BSMs, the cardiac electric field, and related mathematical models is increasing, (see for example Proceedings of NFSI in Biomedizinische Technik [Berlin] B Ergänzungband [supplement] 2, and Proceedings of. This equation evaluates the electric potential anywhere within an inhomogeneous volume conductor containing internal volume sources.
The first term on the right-hand side of Equation involving i corresponds exactly to Equation and thus represents the contribution of the volume source.
The effect of inhomogeneities is reflected in the second integral, where $(\sigma_j'' - \sigma_j')\Phi_j$ is an …
This statement of Coulomb's Law on the face of it sounds like "action at a distance". There is no reference to any intervening mechanism for transmitting the force between the charges.
The force simply exists because the particles are there, and there is no reference to a field. But it is precisely the intervening "electric field" between the two charges that is the mechanism by.
Spach MS, Boineau JP, Barr RC, Flaherty JT, Gallie TM, Long EC. Digital computer isopotential surface mapping studies in children. In Sources and Surface Representation of the Cardiac Electric Field. Amsterdam: Swets and Zeitlinger, Google ScholarCited by: The effects of an electric field on a charged particle don't depend on whether the source is a static charge or a changing magnetic field.
Either causes the same acceleration. The energy density of the electric field also doesn't depend on the source.
You have to add the fields from all sources then square the result to get the energy density. Contributions to Sources and Surface Representation of the Cardiac Electric Field, 7 th International Colloquium Vectorgraphicum, Smolenice, September, Ivan Ruttkay-Nedecky, editor.
Publishing House of the Slovak Academy of Sciences, Bratislavea, Swets. Good afternoon, I was self-studying Electricity (Gauss' Law) and I have a doubt regarding the electric field near the surface of a conductor. I know that, near the surface of an infinite plate made of a non-conductive material, the electric field can be given by: E=\\frac{\\sigma}{2\\epsilon_0}.
The observability of electrical cardiac sources has been studied based on non-invasive measurements. The presently available knowledge on the cardiac sources and on the electrical behaviour of the human torso has been analysed critically in order to arrive at a proper modelling of the system.
Unfortunately, accurate quantitative data …

Wireless power transfer (WPT), wireless power transmission, wireless energy transmission (WET), or electromagnetic power transfer is the transmission of electrical energy without wires as a physical link. In a wireless power transmission system, a transmitter device, driven by electric power from a power source, generates a time-varying electromagnetic field, which transmits power across space.
In a wireless power transmission system, a transmitter device, driven by electric power from a power source, generates a time-varying electromagnetic field, which transmits power across space. surface, thus providing all the information on the cardiac electric field available at the body surface; 2) it is more sensitive in detecting local electrical events, such as local conduction disturbances or regional heterogeneities of ventricular recovery.
Nevertheless, the results obtained using the BSPM procedure cannot …
The electric field combines with the magnetic field to form the electromagnetic field. Modification of Electric and Magnetic Fields by Materials 78 In the absence of an applied electric field, the molecules of a dielectric are randomly oriented (Fig. b). This results from the disordering effects of thermal molecular motion and collisions.
Molecular ordering cannot occur spontaneously because a net electric field would result.We know that the electric field at the surface of a conductor only have a normal component equal to $\rho/\varepsilon$ (finite number).
But let's consider the point $\text{P}$ (at the surface of a conductor). Assume that there is a charge at an infinitesimal distance from the point $\text{p}$.EMS is a magnetic and electric field modeling and simulation software.
It is a versatile electromagnetic design tool as it calculates the magnetic and electric field and flux, electric potential, voltage, current, magnetic force, electric force, torque, eddy current and losses, resistance, inductance, capacitance, skin effect, proximity effect.