text
stringlengths 100
500k
| subset
stringclasses 4
values |
---|---|
Increasing beam stability zone in synchrotron light sources using polynomial quasi-invariants
Steady-state solutions of split beams in electron storage rings
Marc Andre Jebramcik, Shaukat Khan & Wolfram Helml
Stable coherent terahertz synchrotron radiation from controlled relativistic electron bunches
C. Evain, C. Szwaj, … S. Bielawski
Effect of radiation-reaction on charged particle dynamics in a focused electromagnetic wave
Shivam Kumar Mishra, Sarveshwar Sharma & Sudip Sengupta
Matrix model for collective phenomena in electron beam's longitudinal phase space
Giovanni Perosa & Simone Di Mitri
Control of laser plasma accelerated electrons for light sources
T. André, I. A. Andriyash, … M.-E. Couprie
Coupling Effects in Multistage Laser Wake-field Acceleration of Electrons
Zhan Jin, Hirotaka Nakamura, … Tomonao Hosokai
Characterisation of microbunching instability with 2D Fourier analysis
A. D. Brynes, I. Akkermans, … S. Di Mitri
Intrinsic energy spread and bunch length growth in plasma-based accelerators due to betatron motion
Angel Ferran Pousa, Alberto Martinez de la Ossa & Ralph W. Assmann
High order mode structure of intense light fields generated via a laser-driven relativistic plasma aperture
M. J. Duff, R. Wilson, … P. McKenna
Edgar Andrés Sánchez1,
Alain Flores2,
Jorge Hernández-Cobos1,
Matías Moreno ORCID: orcid.org/0000-0001-8037-25063 &
Armando Antillón ORCID: orcid.org/0000-0002-7206-82291
Scientific Reports volume 13, Article number: 1335 (2023) Cite this article
Energy science and technology
The objective of this article is to propose a scheme to increase the stability zone of a charged particles beam in synchrotrons using a suitable objective function that, when optimized, inhibits the resonances onset in phase space and the dynamic aperture of electrons in storage rings can be improved. The proposed technique is implemented by constructing a quasi-invariant in a neighborhood of the origin of the phase space, then, by using symbolic computation software, sets of coupled differential equations for functions involved in nonlinear dynamics are obtained and solved numerically with periodic boundary conditions. The objective function is constructed by proposing that the innermost momentum solution branch of the polynomial quasi-invariant approaches to the corresponding ellipse of the linear dynamics. The objective function is optimized using a genetic algorithm, allowing the dynamic aperture to be increased. The quality of results obtained with this scheme are compared with particle tracking simulations performed with available software in the field, showing good agreement. The scheme is applied to a synchrotron light source model that can be classified as third generation due to its emittance.
Nowadays, the design of storage rings of synchrotron light sources is a major challenge, mainly because the dynamic aperture is reduced by non-linear properties of the lattice. In the first stage of the design, magnetic dipoles and quadrupoles are used to generate a linear achromatic lattice1 with a given emittance. The second step involves the introduction of magnetic sextupoles, and if needed, octupoles, which transform dynamics from linear to nonlinear, generating new phenomena that, if not controlled, are sources of instability of the electron beam. In such a case, the brilliance of the synchrotron radiation could be degraded affecting the ongoing experiments of technological, basic, and applied scientific research2.
When a synchrotron light source is operated, there can be several hundred electron bunches distributed along the ring. The bunches travel inside a metal tube at very high vacuum conditions, minimizing collisions with gas molecules. The tube passes through the center of all the magnets. The electron bunches must be stabilized by interacting with magnetic forces, provided by several magnetic multipoles. In operation, such stability must be guaranteed for several hours.
When searching for a good design, it is necessary to optimize the different lengths of all magnets, their field strengths (the latter are described by the functions \(b_1(s)\), \(b_2(s)\), and \(b_3(s)\) which are piecewise constant functions, as shown in Fig. 1), as well as the lengths of the free spaces between them, the so-called drift spaces. Usually, this process gives the ring a physical structure based on a periodic arrangement of magnetic cells, as indicated in Fig. 2. The process seeks for an arrangement of these magnets to allow electrons describe stable trajectories, traveling at speeds close to the speed of light.
In the continual search to reduce the emittance of new synchrotrons, the use of increasingly intense magnetic quadrupoles is required, giving rise to more noticeable chromatic effects. The use of high intensity magnetic sextupoles (chromatic), and higher order multipoles, is required to correct these effects. A different type of sextupoles, called geometric, is added to correct undesirable effects produced by chromatic sextupoles to improve the dynamic aperture. The more intense the sextupoles and higher order multipoles are, the more difficult it is to keep dynamic stability under control. At this level, the problem to be solved in the design process is to adjust the sextupoles families to simultaneously maximize both, the dynamic aperture, and the moment aperture. Additionally, the complexity increases if other types of important variables are included in the optimization3. Ultimately, the ring designs must be robust in the presence of non-linearities, intentional such as sextupoles, or unintentional, such as errors and imperfections in the field of the magnets. The conventional method to make these adjustments is to minimize many resonant terms4,5. These results are then validated with particle tracking simulation6,7. Moreover, it is common to use complementary tools such as frequency map analysis8 to have a better picture of the tune diffusion and resonance structures. Other effective optimization methods, very demanding in computational resources9, directly calculate the dynamic aperture by means of particle tracking10,11,12,13, including the calculation of resonant terms if required14. Nonlinear systems of great complexity in their analytical resolution procedures led some authors to analyze and develop novel methods to treat them numerically12,13. Some of those methods are based on highly accurate descriptive procedures6,7. Furthermore, the need to have machines with better performance has motivated researchers to propose new solutions such as integrable accelerators15,16,17, where the magnetic fields are modulated in such a way that integrals of motion are achieved.
It has also been suggested that averaged invariants could be useful to analyze the fully coupled dynamics over a single synchrotron oscillation18. There are also proposals for the use of approximate invariants, (or quasi-invariants)19,20,21,22, to advance the understanding of nonlinear dynamics in synchrotrons. Some of these proposals use particle tracking to make the system approximately integrable20.
Another proposed way of introducing approximate in- variants23,24,25, focusing on describing the phase space of design-friendly synchrotrons (such as a booster), seems appropriate to describe resonances appearing in the synchrotrons transverse phase space. These ideas had not been widely explored until recently, when an extension of this formalism to a 5th degree polynomial was applied to a 3rd. generation light source26. Our aim is to take a step forward from what has been presented in references23,24,25,26, supplying content and concepts that may be useful to understand nonlinear processes in synchrotrons and to improve synchrotron design performance. Now this formalism is employed to develop an algorithm for the exploration and manipulation of the phase space of these systems, focused on increasing the dynamic aperture of dynamically more complex synchrotrons. In this paper, an objective function, and an optimization scheme, based on it, are proposed to maximize the dynamic aperture of a synchrotron light-source. Although the optimization of the dynamic aperture of these systems using quasi-invariants has been addressed before20, the present work has a different approach and methodology. While Ref.20 uses particle tracking to make the system quasi-integrable, the present work proposes the search for a bounded stability zone using an objective function, forcing the phase space of the nonlinear system to resemble the linear one. This is based on the determination of the roots of a polynomial quasi-invariant, without using particle tracking nor considering the resonant terms. As far as we know, this approach has not been used before. In addition, this proposal has the advantage of locally considering, at a given approximation order, the possibility of inhibiting resonance onset. It will be shown that robust results are obtained as the amplitude of the oscillations increases and as the momentum dispersion is considered. In this way, it is shown that the proposed quasi-invariant based method can be a useful tool to increase the dynamic aperture of the electron beam in a synchrotron light source. Particle tracking methods (such as the one used by OPA) were used in the present work only for comparative purposes, showing a good agreement with the results obtained. A one-dimensional problem has been treated in this work, but it can be easily extended to 2D. This method provides a different strategy to address the problem of optimizing the dynamics of these systems and facilitate the design of new state-of-the-art synchrotron light sources.
Approximate constants of motion
References23,24,25 describe a method to extend the linear theoretical structure to the nonlinear case in the study of synchrotron dynamics. There, the existence of nonlinear functions that play a similar role to that of functions \( \alpha \), \( \beta \), and \( \gamma \) used in the linear case27 is proposed. With these nonlinear functions, quasi-invariants can be established, with validity in a local range of phase space.
Interest in quasi-invariants formalism continues18 because it could help in the design of modern particle accelerators,28,29, where the effects of nonlinear dynamics are increasingly important, adding complexity to the implementation of each design. Reducing these effects may contribute to have light sources with emittances that provide better quality and properties of the emitted light, enhancements that are very useful in cutting-edge experimental techniques in various areas of science.
Approximate constants of motion for the one dimensional problem
Since there are 3 physical variables in synchrotron dynamics, the phase space of this system is 6-dimensional. Hence, in the long term, our objective is to address a more complex structure than the one presented here. However, constructing a more general formulation to address this complexity is not trivial, so it is very important to proceed in small but firm steps. In this way, a one-dimensional approximation is very useful for a deep understanding of dynamical problems of accelerators in lower dimensions. Similar approaches can be found in references30,31,32.
Following the structure of a previous work26, and for completeness of this work, one can consider a one-dimensional linear motion where the Hamiltonian has the form
$$\begin{aligned} H_0 = \frac{1}{2} \left( p_x^2 + K(s)\, x^2\right) , \end{aligned}$$
where (\(x,p_x\)) are canonical conjugate variables, \( K (s) = K (s + c) \) is a periodic function of s, with period c, and represents the intensity of the magnetic quadrupoles of the storage ring. The red piece-wise constant function in Fig. 1 is related to function K(s). This system has the invariant
$$\begin{aligned} I_0 = \gamma _x(s)\, x^2 + 2 \alpha _x(s)\, x\, p_x + \beta _x(s)\, p_x^2, \end{aligned}$$
where \( \alpha _x(s)\), \( \beta _x(s) \) and \( \gamma _x(s)\) are the periodic Courant-Snyder functions, also with period c27.
As reference23 shows, the above description can be extended to the one-dimensional nonlinear case described by the Hamiltonian
$$\begin{aligned} {\bar{H}} = \frac{1}{2}\left( p_x^2 + K(s)\, x^2\right) + S(s)\, x^3, \end{aligned}$$
In reference23, it is proposed that this system has an approximate invariant of the form
$$\begin{aligned} {\bar{I}} = \sum _{i+j\ge 2, i,j=0} A^{(0)}_{ij}(s) x^i p_x^j, \end{aligned}$$
where \(A^{(0)}(s)=A^{(0)}(s+c)\) are periodic functions, which must satisfy differential equations imposed by the invariant condition
$$\begin{aligned} \frac{dI}{ds} = \{I,H\}+ \frac{\partial {I}}{\partial {s}}=0, \end{aligned}$$
and can be viewed as a generalization of the linear Courant-Snyder functions \(\alpha _x\), \(\beta _x\), and \(\gamma _x\), for the nonlinear regime. Therefore, the proposed approximate invariant in Eq. (4) is a generalization of the linear Courant-Snyder invariant of Eq. (2). This idea has been proposed in references23,25 to treat chromatic effects in these systems, i.e., taking into account the possibility that the particles have a momentum p different from the design momentum \(p_0\). It has been shown that when chromatic effects are included in this way, the results obtained are in good agreement with those obtained by simulations using numerical solutions of Hamilton equations.
The linear system (1) is relevant since its invariant (2) is well understood; thus, it is possible to compare the phase space structure of the nonlinear system with respect to the linear case by using an extension of the expression (2). Furthermore, this representation could be used to develop semi-analytical tools that can be useful in the nonlinear regime addressed in the accelerator design process.
This extension also allows the introduction of chromatic effects in the analytical framework, suggesting its usefulness as a complement within the analysis of Hamiltonian systems, as it will be discussed below.
Searching for an approximate constant of motion for the two-dimensional problem
The approach to study electron dynamics in these systems usually consists of transforming the well-known relativistic Hamiltonian of a charged particle in an electromagnetic field, originally expressed in the laboratory system, to a moving reference system. After this change of coordinates, a transformation of variables is made allowing the longitudinal coordinate s to be the independent variable instead of the time t. From the resulting Hamiltonian, the equations of motion can be obtained; these calculations are standard, and their details can be seen in reference4. In certain circumstances the longitudinal motion (s, \(p_s\)) can be decoupled from the transverse motion (x, \(p_x\), y, \(p_y\)), and it is even possible to decouple the transverse motion into a horizontal (x, \(p_x\)) and a vertical motions (y, \(p_y\)).
Let us use a more general expression of the Hamiltonian that appears in Eq. (3). Following the notation of references4,33, a Hamiltonian of the form
$$\begin{aligned} H(x,p_x,y,p_y,s)= & {} \frac{1}{2}(p_x^2 + p_y^2)\left( 1 - \delta + \delta ^2 + \cdots \right) \nonumber \\{} & {} - b_1(s) x \delta + \frac{b_1^2(s)}{2} x^2 + \frac{b_2(s)}{2} (x^2 - y^2) \nonumber \\{} & {} + \frac{b_3(s)}{3} (x^3 - 3xy^2) + \cdots , \end{aligned}$$
describes the transverse dynamics of particles in a synchrotron. This Hamiltonian has been widely used in preliminary dynamic aperture studies4,34,35, in the first stage of storage ring design.
The functions \( b_1 (s)\), \( b_2 (s)\), and \( b_3 (s)\) are periodic functions with piece-wise constant behavior, respectively describing the curvature of the dipoles and the intensities of the quadrupoles and sextupoles of the accelerator and \(\delta = \Delta p / p_0\), where \( \Delta p \) is the momentum deviation with respect to design momentum \(p_0\). \(b_1(s) = 1 / \rho \) (\(\rho \) is the radius of curvature of a particular dipole) and the remaining parameters are given by the following expression34
$$\begin{aligned} b_n(s) = \frac{1}{B\rho }\frac{1}{(n-1)!}\frac{\partial ^{n-1}B_y(x,y)}{\partial x^{n-1}}\mid _{y=0}, \end{aligned}$$
where \( B\rho \) is the magnetic rigidity that connects magnetic field and radius of curvature with the energy of a relativistic electron via \(B\rho [Tm] = 3.3356\, E[GeV]\). Also, if R is the pole inscribed radius, \(b_n\) is related to the pole tip magnetic field \(B_{pt}\) (Fig. 1) by the equation
$$\begin{aligned} B_{pt} = (B\rho )b_nR^{n-1}, \end{aligned}$$
that is very useful in determining fields for magnet design.
In references24 and25 it has been proposed that a quasi-invariant of the form
$$\begin{aligned} I=\sum _{i+j+k+l\ge 2, i,j,k,l=0} \sum _{n=0} A^{(n)}_{ijkl}(s) x^ip_x^jy^kp_y^l\delta ^n, \end{aligned}$$
can be associated with this Hamiltonian (Eq. (6)). Here, the chromatic nonlinear functions (\( n \ge 1 \)) are also periodic, that is, \(A^{(n)}_{ijkl}(s)=A^{(n)}_{ijkl}(s+c)\). When computed for a particular arc length s, these functions acquire numerical values.
Equation (9) represents a nonlinear extension of the Courant-Snyder invariant, Eq. (2); which means that to a second degree, the lowest order, only the horizontal functions \(A^{(0)}_{ijkl}(s)\) are different from zero. The substitution of Eq. (9) in expression (5) leads to a system of linear differential equations. The number of equations depends on the polynomial degree considered. In this paper we consider a fifth-degree polynomial. The sixty meaningful associated differential equations corresponding to non-null, nonlinear functions related to on-momentum (\(\delta = 0\)) particles are presented below in Eqs. (10)–(18). The Eqs. (10)–(12) have already been considered in reference26.
$$\begin{aligned} \frac{d A^{(0)}_{2000}}{ds}&= \left( b_2 + b_1^{2}\right) A^{(0)}_{1100} \end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{1100}}{ds}&= - 2 A^{(0)}_{2000} + 2 \left( b_2 + b_1^{2}\right) A^{(0)}_{0200}\end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{0200}}{ds}&= - A^{(0)}_{1100} \end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{3000}}{ds}&= \left( b_2 + b_1^{2}\right) A^{(0)}_{2100} + b_3 A^{(0)}_{1100} \end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{2100}}{ds}&= -3 A^{(0)}_{3000} + 2 \left( b_2 + b_1^{2}\right) A^{(0)}_{1200} + 2 b_3 A^{(0)}_{0200} \end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{1200}}{ds}&= -2 A^{(0)}_{2100} + 3 \left( b_2 + b_1^{2}\right) A^{(0)}_{0300} \end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{1020}}{ds}&= - b_3 A^{(0)}_{1100} - b_2 A^{(0)}_{1011} + \left( b_2 + b_1^{2}\right) A^{(0)}_{0120} \end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{1011}}{ds}&= - 2 A^{(0)}_{1020} - 2 b_2 A^{(0)}_{1002} + \left( b_2 + b_1^{2}\right) A^{(0)}_{0111} \end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{0120}}{ds}&= - A^{(0)}_{1020} - 2 b_3 A^{(0)}_{0200} - b_2 A^{(0)}_{0111} \end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{0111}}{ds}&= - A^{(0)}_{1011} - 2 A^{(0)}_{0120} - 2 b_2 A^{(0)}_{0102} \end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{1002}}{ds}&= - A^{(0)}_{1011} + \left( b_2 + b_1^{2}\right) A^{(0)}_{0102}\end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{0102}}{ds}&= - A^{(0)}_{1002} - A^{(0)}_{0111} \end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{4000}}{ds}&= b_2 A^{(0)}_{3100}+b_1^2 A^{(0)}_{3100}+b_3 A^{(0)}_{2100}\end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{3100}}{ds}&= -4 A^{(0)}_{4000}+2 b_2 A^{(0)}_{2200}+2 b_1^2 A^{(0)}_{2200}\nonumber \\ {}&+2 b_3 A^{(0)}_{1200}\end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{1300}}{ds}&= -2 A^{(0)}_{2200}+4 b_2 A^{(0)}_{0400}+4 b_1^2 A^{(0)}_{0400}\end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{0400}}{ds}&= -A^{(0)}_{1300} \end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{2020}}{ds}&= -b_3 A^{(0)}_{2100}-b_2 A^{(0)}_{2011}+b_2 A^{(0)}_{1120}+b_1^2 A^{(0)}_{1120}\nonumber \\&-2 b_3 A^{(0)}_{1011}+b_3 A^{(0)}_{0120}\end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{2011}}{ds}&= -2 A^{(0)}_{2020}-2 b_2 A^{(0)}_{2002}+b_2 A^{(0)}_{1111}+b_1^2 A^{(0)}_{1111}\nonumber \\&-4 b_3 A^{(0)}_{1002}+b_3 A^{(0)}_{0111}\end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{1120}}{ds}&= -2 A^{(0)}_{2020}-2 b_3 A^{(0)}_{1200}-b_2 A^{(0)}_{1111}+2 b_2 A^{(0)}_{0220}\nonumber \\&+2 b_1^2 A^{(0)}_{0220}-2 b_3 A^{(0)}_{0111}\end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{2002}}{ds}&= -A^{(0)}_{2011}+b_2 A^{(0)}_{1102}+b_1^2 A^{(0)}_{1102}+b_3 A^{(0)}_{0102}\end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{1111}}{ds}&= -2 A^{(0)}_{2011}-2 A^{(0)}_{1120}-2 b_2 A^{(0)}_{1102}+2 b_2 A^{(0)}_{0211}\nonumber \\&+2 b_1^2 A^{(0)}_{0211}-4 b_3 A^{(0)}_{0102}\end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{0211}}{ds}&= -A^{(0)}_{1111}-2 A^{(0)}_{0220}-2 b_2 A^{(0)}_{0202}\end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{0220}}{ds}&= -A^{(0)}_{1120}-3 b_3 A^{(0)}_{0300}-b_2 A^{(0)}_{0211} \end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{1102}}{ds}&= -2A^{(0)}_{2002}-A^{(0)}_{1111}+2 b_2 A^{(0)}_{0202}+2 b_1^2 A^{(0)}_{0202}\end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{0202}}{ds}&= -A^{(0)}_{1102}-A^{(0)}_{0211} \end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{0040}}{ds}&= -b_3 A^{(0)}_{0120}-b_2 A^{(0)}_{0031}\end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{0031}}{ds}&= -b_3 A^{(0)}_{0111}-4 A^{(0)}_{0040}-2 b_2 A^{(0)}_{0022} \end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{0022}}{ds}&= -b_3 A^{(0)}_{0102}-3 A^{(0)}_{0031}-3 b_2 A^{(0)}_{0013}\end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{0013}}{ds}&= -2 A^{(0)}_{0022}-4 b_2 A^{(0)}_{0004}\end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{4100}}{ds}&= -5 A^{(0)}_{5000}+2 b_2 A^{(0)}_{3200}+2 b_1^2 A^{(0)}_{3200}+2 b_3 A^{(0)}_{2200}\end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{3200}}{ds}&= -4 A^{(0)}_{4100}+3 b_2 A^{(0)}_{2300}+3 b_1^2 A^{(0)}_{2300}+3 b_3 A^{(0)}_{1300} \end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{2120}}{ds}&= -3 A^{(0)}_{3020}-2 b_3 A^{(0)}_{2200}-b_2 A^{(0)}_{2111}+2 b_2 A^{(0)}_{1220}\nonumber \\&+2 b_1^2 A^{(0)}_{1220}-2 b_3 A^{(0)}_{1111}+2 b_3 A^{(0)}_{0220}\end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{2111}}{ds}&= -3 A^{(0)}_{3011}-2 A^{(0)}_{2120}-2 b_2 A^{(0)}_{2102}+2 b_2 A^{(0)}_{1211}\nonumber \\&+2 b_1^2 A^{(0)}_{1211}-4 b_3 A^{(0)}_{1102}+2 b_3 A^{(0)}_{0211}\end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{2102}}{ds}&= -3 A^{(0)}_{3002}-A^{(0)}_{2111}+2 b_2 A^{(0)}_{1202}+2 b_1^2 A^{(0)}_{1202}\nonumber \\&+2 b_3 A^{(0)}_{0202} \end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{1211}}{ds}&= -2 A^{(0)}_{2111}-2 A^{(0)}_{1220}-2 b_2 A^{(0)}_{1202}+3 b_2 A^{(0)}_{0311}\nonumber \\&+3 b_1^2 A^{(0)}_{0311}-4 b_3 A^{(0)}_{0202} \end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{1202}}{ds}&= -2 A^{(0)}_{2102}-A^{(0)}_{1211}+3 b_2 A^{(0)}_{0302}+3 b_1^2 A^{(0)}_{0302} \end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{0311}}{ds}&= -A^{(0)}_{1211}-2 A^{(0)}_{0320}-2 b_2 A^{(0)}_{0302} \end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{1040}}{ds}&= -b_3 A^{(0)}_{1120}-b_2 A^{(0)}_{1031}+b_2 A^{(0)}_{0140}+b_1^2 A^{(0)}_{0140}\nonumber \\&-2 b_3 A^{(0)}_{0031}\end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{1031}}{ds}&= -b_3 A^{(0)}_{1111}-4 A^{(0)}_{1040}-2 b_2 A^{(0)}_{1022}+b_2 A^{(0)}_{0131}\nonumber \\&+b_1^2 A^{(0)}_{0131}-4 b_3 A^{(0)}_{0022}\end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{0140}}{ds}&= -A^{(0)}_{1040}-2 b_3 A^{(0)}_{0220}-b_2 A^{(0)}_{0131}\end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{0131}}{ds}&= -A^{(0)}_{1031}-2 b_3 A^{(0)}_{0211}-4 A^{(0)}_{0140}-2 b_2 A^{(0)}_{0122} \end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{1013}}{ds}&= -2 A^{(0)}_{1022}-4 b_2 A^{(0)}_{1004}+b_2 A^{(0)}_{0113}+b_1^2 A^{(0)}_{0113}\nonumber \\&-8 b_3 A^{(0)}_{0004} \end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{0122}}{ds}&= -A^{(0)}_{1022}-2 b_3 A^{(0)}_{0202}-3 A^{(0)}_{0131}-3 b_2 A^{(0)}_{0113}\end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{1004}}{ds}&= -A^{(0)}_{1013}+b_2 A^{(0)}_{0104}+b_1^2 A^{(0)}_{0104} \end{aligned}$$
$$\begin{aligned} \frac{d A^{(0)}_{0104}}{ds}&= -A^{(0)}_{1004}-A^{(0)}_{0113}. \end{aligned}$$
Although equations for the non-linear functions \(A^{(1)}_{ijkl}(s)\) for \(\delta \ne 0\) were presented in Ref.26, that can be implemented for treating off-momentum particles, we have only used \(A^{(0)}_{ijkl}(s)\) (on-momentum) in order to investigate the suitability of the proposed approach to produce good results when optimizing dynamic aperture. An extension to off-momentum analysis is underway.
It is possible to incorporate the longitudinal motion into a more general Hamiltonian and reapply the quasi-invariant protocol in 6D space, as is done using other methods (TPSA/tracking)36. This, however, is beyond the scope of this work as the 4D system has not yet been developed.
The differential equations (10-18) for the \(A's\) functions, and the periodicity conditions \(A(s+c) = A(s)\), imposed by the considered magnetic lattice (the specific selection of magnetic multipoles in the ring), determine the values of the functions A(s) in a period, which is usually a synchrotron cell. The synchrotron ring consists of an assembly of several of these cells.
Note that expressions (10) reproduce the equations satisfied by the Courant-Snyder parameters \(\alpha _x\), \(\beta _x\) and \(\gamma _x\), inherent to linear dynamics; while Eqs. (11) and (12) are the equations of the first non-null nonlinear functions in the expansion (Eq. (9)), which must be satisfied for invariant condition (Eq. (5)) to be fulfilled. Equation system (11) only involves functions associated with the horizontal motion x in the accelerator, while the functions of equation system (12) couple the horizontal and vertical motions. In all the equation systems there are only functions \(A^{(0)}\), which describe on-momentum particles. The algebraic manipulation to obtain these equations has been carried out with wxMaxima37.
Another point that deserves attention is that the set of functions in Eq. (10): \(A^{(0)}_{2000}\), \(A^{(0)}_{1100}\) and \(A^{(0)}_{0200}\) are of essential importance. Real and bounded values of these functions are possible thanks to an appropriate selection of quadrupoles in the lattice, whereas the use of inappropriate values would lead to instability of the linear solution. The existence of non-zero values of higher order functions, like those appearing in Eqs. (11) and (12), occurs thanks to the fact that the aforementioned functions intervene in the non-homogeneous part of the equations, giving them nonzero values. Analogously, in those equations that contain nonlinear functions, the sextupolar contribution \(b_3(s)\) appears for the first time. If this contribution were zero, the functions contained in Eqs. (11) and (12) must be zero, i.e., the quasi-invariant Eq. (9) will be reduced to the Courant-Snyder invariant of Eq. (2).
Reference26 outlines the procedure to determine the sixty values of the functions \(A^{(0)}(s)\), at \(s=0\), appearing in Eqs. (10)–(18), subjected to periodic boundary conditions \(A^{(0)}_{ijkl}(0)=A^{(0)}_{ijkl}(0+c)\).
Only the quasi-invariant associated with horizontal motion has been studied in this work since the oscillation amplitudes in the horizontal plane are larger than those in the vertical plane; therefore, dynamics in the horizontal phase space is more complex than in the vertical phase space.
An analytical approximate representation for Poincaré sections through quasi-invariants
It is well known that the quadrupolar magnetic components (represented in general by \( b_2(s))\) contained in an accelerator affect the off-momentum particles (\( p_0 + \Delta p \)) differently from the reference particle with momentum \(p_0\). This changes the number of oscillations in the horizontal and vertical planes (tunes \(\nu _x\), \(\nu _y\)), deviating them from the original tunes \(\nu _{0x}\), \(\nu _{0y}\), in the following way
$$\begin{aligned} \nu _{x,y}(\delta )= \nu _{0x,y} + \xi _{x,y}\ \delta + \cdots \end{aligned}$$
where \(\xi _{x,y}\), the horizontal and vertical chromaticities, are given by expression (20) taking \(b_3(s)=0\).
In a bunch, particles have energies that differ from the energy of design. Therefore, quadrupoles in the accelerator supply a focusing force to each particle that depends on its energy, originating a chromatic effect. Furthermore, any kind of field imperfections in the quadrupoles will cause the same effect on electrons traveling in their orbits (see Ref.38).
Stability of the off-momentum particles requires that \(\xi _x\) and \(\xi _y\) be close to zero to keep tunes near to the operating point, while small positive values are required to avoid collective resonances. To satisfy these requirements, it is necessary to introduce magnetic sextupoles \(b_3(s)\) in the lattice. Sextupoles affect chromaticities in the form34,39,40,41,42
$$\begin{aligned} \xi _{x,y} = \mp \frac{1}{4\pi } \int _{s_0}^{s_0+C} {\left[ b_2(s) - 2 b_3(s) \eta (s)\right] \beta _{x,y}(s) ds}. \end{aligned}$$
The function \(\beta _y(s)\) plays a similar role for the vertical motion as \(\beta _x(s)\) does for the horizontal case, and C is the circumference of full storage ring. \(\eta (s)\) is known as dispersion function and it quantifies how much off-momentum particles closed orbit differs from on-momentum particles closed orbit.
Application of the quasi-invariant technique to a third-generation synchrotron
Synchrotron light sources are classified into generations, based on the properties of the emitted light and the devices used to produce it. The complexity of beam dynamics increases with each generation, in such a way that new facilities demand better optimization techniques for a superior performance. In this work, a third-generation typical lattice is used to study some relevant aspects of our quasi-invariant method (see also23,24,25), since our main objective is to present its advantages.
Application of quasi-invariants to a toy model for the Mexican synchrotron
The storage ring model presented below was, at some point, considered a low-risk scheme for the Mexican project43. It is based on the ALBA lattice44 and due to its emittance value, 1.3 nm\(\cdot \)rad, can be considered as a third-generation light source. Its linear optic functions are shown in Fig. 2 for one half super period. DBA-cells are shown at the bottom of the figure, where dipoles, quadrupoles, and sextupoles are depicted in blue, red, and green, respectively. The complete ring has 4 super periods; each super period consists of six DBA-cells: two matching cells at the ends and four-unit cells in the middle. In Fig. 1 the whole matching cell and half a unit cell are shown in more detail, depicting the magnetic fields of each magnet, following the magnets color code.
Magnetic fields of the first magnets in Fig. 2 are shown. In the lower part of the figure, the left set of magnets, separated by drift spaces, corresponds to a matching cell. The right set exhibits a half unit cell. The field strengths for the corresponding dipoles (blue), quadrupoles (red) and sextupoles (green) are depicted in the upper part of the figure. These field values are related with the parameters \(b_n\) through Eq. (7).
Optical functions of a half super period of the ALBA-like lattice. The functions \(\beta _x(s)\) and \(\beta _y(s)\) are marked in blue and red, respectively; and in green, the dispersion function \( \eta (s)\). The lower part shows the distribution of dipoles (blue), quadrupoles (red) and sextupoles (green) of the three DBA-cells in a half super period. The lower green text line shows the assignment of the different sextupoles. The other half of the superperiod is symmetric about the vertical axis. The ring is composed of four superperiods.
Once the dipole and quadrupole magnetic fields have been fixed, the storage ring optical functions and main parameters are determined; the latter are shown in Table 1. Natural chromaticities indicate that the sextupole intensities should not be very large, therefore, when performing chromaticity corrections, undesirable nonlinear phenomena should be minimal. We used this simple lattice to show the capability of the quasi-invariant formalism to inhibit the onset of resonances and to increase the dynamic aperture of synchrotrons.
Table 1 Main parameters of the storage ring shown in Fig. 2.
Manipulating the onset of low order resonances to increase the dynamic aperture
The mechanism to find the roots of the polynomial quasi-invariant was described in refs.23,26, however, for the sake of completeness of this work, the methodology to find the points (\(x,p_x\)) that belong to a value of the quasi-invariant of Eq. (9), is outlined below. For \(p_x=0\), the magnitude of the initial oscillation amplitude \(x_0\), a numerical value of the quasi-invariant is obtained through the Courant-Snyder expression as \(I=x_0^2/\gamma _x\); in our case (\(\alpha _x=0\) since \(s_0=0\) is a point of symmetry in the lattice). The numerical value of I is kept for the non-linear case and then the topological deformation of the phase space is calculated. Then, scanning x, for each value of x, the 5 values of \(p_x\) (roots of the polynomial of degree 5) are found, which can be complex by pairs or real, using the root-finding function roots from \(\hbox {MATLAB}^{\circledR}\). In this way, a set of points \((x,p_{x1}),...,(x,p_{x5})\) is obtained for a specific value of the quasi-invariant I. By plotting these pairs (\(x, p_x\)) in phase space, a sequence of nearby points is obtained that visually appear to form (interpolating) a continuous curve. These sequences will form what we have called a branch.
When a pair of branches overlap, the overlapping zone contains complex roots, and the non-overlapping portions will form islands indicating the presence of resonances. If there is no such overlap the resonances will not appear, as described in Ref.26. A similar procedure could be carried out by scanning \(p_x\) and finding the values of the x roots of the quasi-invariant polynomial.
Once the process of resonances onset has been identified, we are interested in avoiding resonance formation while increasing the oscillation amplitude. With this, the method can provide a mechanism to increase the dynamic aperture of the synchrotron under consideration. A protocol is required to handle the roots in a controlled manner as the nonlinear elements are changed by the numerical optimization of the objective function, Eq. (21).
One way to do this is to separate the real root branches that give rise to resonances while increasing the amplitude of the horizontal oscillation. This is largely achieved by requiring that the innermost branches, which in principle form the deformed ellipse, have the best resemblance to the ellipse that arises when nonlinear elements (sextupoles, octupoles, etc.) are not present in the calculus, i.e., linear dynamics. With this procedure, a nonlinear phase space is forced to resemble a linear one by finding near-integrable solutions in a bound area of small amplitudes.
The proposal of an objective function
A simple way to define an objective function is to quantify the separation between the branches of real roots corresponding to the nonlinear problem {\(p_x^{nl}\}\) and the branches of real roots corresponding to the linear problem {\(p_x^{ l}\}\); that is, the upper and lower branches of an ellipse that are obtained for the reference particle, when only dipoles and quadrupoles are considered. If we consider N points on the x axis, the corresponding lower and upper roots are in abbreviated form, {\(p_x(x_i), i=1\cdot \cdot \cdot N\}\). Then, the 2N distances between the upper branches (u) and the lower branches (d) of the two systems, linear and nonlinear, are added to define the objective function \(f_{obj}\) as
$$\begin{aligned} f_{obj}= \sum _{i=1}^{i=N}{\mid {p_x^{nl}(x_i)-p_x^l(x_i)\mid _{u}}}+\sum _{i=1}^{i=N}{\mid {p_x^{nl}(x_i)-p_x^l(x_i)\mid _{d}}}. \end{aligned}$$
Minimizing \(f_{obj}\), we will be looking for solutions of the nonlinear problem that are as close as possible to the linear problem. As mentioned before, the nonlinear phase space is intended to resemble that of the linear case for small amplitudes.
The definition of \(f_{obj}\) in Eq. (21) is done for the on-momentum reference particle, i.e, at \(\delta =0\), but it can be extended to include terms corresponding to off-momentum particles (\(\delta \ne 0\)).
The objective function defined in the expression (21) considers the sum of point-to-point distances between the linear and non-linear trajectories in phase space. Since these distances depend on the nonlinear functions \(A^{0}(s)\), the terms that most deform the phase space will be penalized the most. Being the only function that is intended to be optimized, the inclusion of weights in the problem is not necessary. We consider this to be a strength of the algorithm and the use of the objective function of Eq. (21).
Initial stage
Defining \(S=b_3/3\), the free parameters to optimize are
$$\begin{aligned} \{S\}_{free}=\{SD2, SD3, SD4, SD5, SF2, SF3, SF4\}, \end{aligned}$$
while the sextupoles
$$\begin{aligned} \{S\}_{chrom}=\{SD1, SF1\} \end{aligned}$$
are adjusted at each change made to \(\{S\}_{free}\), by the optimization algorithm, in order to keep the chromaticities close to zero, \(\xi _{x,y} \sim 0\).
Let us write the union of the above sets simply as
$$\begin{aligned} \{S\} = \{S\}_{free} \cup \{S\}_{chrom}. \end{aligned}$$
The initial set \(S_0\) comes from the first attempts to confirm that the quasi-invariant method gave reliable results and compatible with particle tracking simulation. It was used in Ref.26 and has the following numerical values
$$\begin{aligned} \{S_0\}=&\{-5.3788110533658759, 1.3839068709904867,\nonumber \\ {}&-8.8951634707862990, -15.453045920930355,\nonumber \\ {}&21.482198067784918, 1.2586848537899471,\nonumber \\&11.837997264320983, -17.378852082005590,\nonumber \\ {}&19.49703775940865 \} \end{aligned}$$
Figure 3 represents the horizontal phase corresponding to the initial set \(\{S_0\}\), showing the resonances arising at an oscillation amplitude of \(x_0=1.53\) mm.
Horizontal phase space \((x,p_x)\) for the initial set of sextupoles \(\{S_0\}\) given by Eq. (25). The used amplitudes \(x_0\) are 0.5, 0.75, 1.0, 1.25, 1.45, 1.53 mm. Only representative resonances and internal tori are depicted. Units in the plot are m-rad.
The location of these sextupoles in the unit cell of the synchrotron toy model considered here can be seen in Fig. 2, in green.
The initial \(\{S_0\}\) free values can in general be selected as a set of small, arbitrary sextupole intensities, and the \(x_0\) initially must be small enough to contain a stable zone. By increasing \(x_0\), the optimization yields new sets of sextupoles with larger stable zones.
Second stage: sextupoles optimization
The optimization process was started with a small \(x_0\) since the non-linear effects produced by the sextupoles are small. Within the optimization process, as \(x_0\) grows, the algorithm seeks the trajectories in phase space to be stable. In this stage, starting from the set of sextupoles \(\{S_0\}\), the search for a set of sextupoles that increases the previous dynamic aperture begins; here the oscillation amplitude is taken to be \(x_0=7\) mm. This is done using genetic algorithms45 trying to minimize the objective function \(f_{obj}\). Other methods, such as simplex of the \(\hbox {MATLAB}^{\circledR}\) function fminsearch and simulated annealing, could also be useful. This process is shown in Fig. 4, where the vertical axis is the relative value of \(f_{obj}\) with respect to its initial value \(f^{initial}_{obj}\). It is convenient to note that in the optimization process of the sextupoles, the interval of values of the objective function is (0.9859, 66.432). However, only the range of interest (\(f_{obj}/{f^{initial}_{obj}}) < 1\),where the objective function decreases, is shown in Fig. 4. The apparently small improvement is due to the fact that the initial \(\{S_0\}\) was a set of sextupoles that already gave a small value of the objective function. The horizontal axis is the cumulative CPU time (in sec) of the genetic algorithm on a Pentium7 PC. The number of populations involved in the calculation was 14, and the process converged in around 1000 generations (Fig. 4).
As has been studied in detail in reference26, the resonances that appear in Fig. 3 arise when two branches of real solutions, from the quasi-invariant (Eq. (9)) proposed for the nonlinear case, overlap becoming complex solutions. The non-overlapping regions form the islands of the resonances. When the optimization process starts, in search of new sextupoles that allow increasing the dynamic aperture, the mentioned branches of real solutions are affected due to the numerical change of \(\{S\}\) values. Minimization of function \(f_{obj}\) (Eq. (21)) requires that the inner branch of real solutions be close to the corresponding solution of the linear problem (ellipse). This behavior occurs when the dynamics of the nonlinear problem allows an increase of dynamic aperture; otherwise, the roots overlap, generating resonances that limit stability.
This figure shows the behavior of the objective function \(f_{obj}\) defined in Eq. (21) as the number of iterations increases. The vertical axis represents the relative value of the objective function with respect to its initial value \(f^{initial}_{obj}\), and the horizontal axis represents the cumulative CPU time (in seconds) as iterations proceed.
Figure 5 shows the behavior of these two branches of real solutions (in red), as the values of S change. The external branch, contained mainly in the green rectangle, has a more erratic behavior than the internal branch and, in general, the external branch separates from the internal one under the optimization process. It is expected that the farther away a polynomial solution is from the origin of the phase space, \(p_x\) will have a higher degree of uncertainty. Particle tracking simulation will show the agreement when comparing the results obtained with the proposed method with simulation, even for large oscillation amplitudes, as shown later. The linear-problem ellipse is used as a goal of the optimization method. It can be seen that the inner branch of real solutions tends to approach the blue curve that represents the trajectory corresponding to the Courant-Snyder invariant for the linear problem27. The latter can be appreciated in greater detail in Fig. 6 where a band of real solutions in red can be seen next to the blue ellipse. At the end of the optimization process, the real nonlinear solution, which is closest to the linear solution, is the one with a minimum value of \(f_{obj}\). This will become clearer in the next stage of optimization.
Phase space \((x,p_x)\) showing the evolution of the two real solutions (in red), under the optimization process of the objective function (when \(\{S\}\) values change). The ellipse, corresponding to the Courant-Snyder invariant of the linear problem, is shown in blue for comparison; it is the optimization method goal. The horizontal blue lines for \(p_x\) = 0 depict the real part of \(p_x\), while the imaginary part is not shown in the figure, i.e., \(p_x\) is a purely imaginary number. The outer branches are located mainly in the green rectangle, and they correspond to several \(\{S\}\) values for the single invariant used in the optimization process of the second stage (section III.E).
Phase space \((x,p_x)\) showing the original resonances and, in greater detail than in Fig. 5, the behavior of the inner real roots (in red) trying to fit the ellipse (blue) of linear dynamics, under the optimization process of the objective function. Units in the plot are m-rad.
The above procedure allows to find the value of new sextupoles \(\{S_1\}\) that minimize \(f_{obj}\) as shown in Fig. 4. The new \(\{S\}\) values are:
$$\begin{aligned} \{S_1\}=&\{1.4659108229654287, -1.6645577302767018,\nonumber \\ {}&-8.9509019610107163, -15.441140170269934,\nonumber \\ {}&17.416398341152696, 6.4256740018934524,\nonumber \\ {}&10.142133777479255, -22.142621628627953,\nonumber \\ {}&15.173009378441337\}, \end{aligned}$$
where the order of \(\{S_1\}\) values is the same as that used in Eq. (24). As before, for each free parameter choice \(\{S_1\}_{free}\), two chromatic sextupoles \(\{S_1\}_{chrom}\) that allow keeping the chromaticities close to zero are determined. The following section addresses the implications of a new set of sextupoles \(\{S_1\}\) in the nonlinear dynamics described in phase space (\(x,p_x\)). When this set of sextupoles is used, the phase space of Fig. 7 shows that the region of stability grows from \(\sim 2\) mm to \(\sim 6-7\) mm, since low order resonances onset have been inhibited in that region.
Phase space \((x,p_x)\) for various oscillation amplitudes \(x_0=1.53, 3, 4, 5, 6, 7\) mm. With the new sextupoles \(\{S_1\}\), obtained by optimizing to \(x_0=7\) mm, the dynamic aperture is explored and it is observed that it increases to \(\sim 6-7\) mm, before tori start to break. For each value of the quasi-invariant, the inner branch and its corresponding outer branch are plotted in the same grayscale color. Units in the plot are m-rad.
Third stage: Further optimization of sextupoles
Now, we are exploring the possibility of continuing to grow the stability zone with a new optimization of sextupoles. This stage is carried out at a higher oscillation amplitude, \(x_0=10\) mm, taking \(\{S_1\}\) as the initial set of sextupoles for this stage of the optimization process, under the same guidelines as the previous optimization at \( x_0=7\) mm.
The inner and outer branches of real solutions that give a smaller value of the objective function \(f_{obj}\) are shown in black in Fig. 8. They represent the final result of the optimization for the invariant corresponding to \(x_0=10\) mm. Two of the branches of real solutions (inner branches) are close to the Courant-Snyder invariant represented by the blue ellipse. The other two branches (outer branches) are now far from the inner ones, meaning that, the possibility to generate low order resonances, by overlapping, has been diminished by the optimization process, increasing in this way the stability zone due to a better selection of sextupoles set. We see that the internal black branches resemble an ellipse, as requested in Eq. (21) of \(f_{obj}\). The detail of this behavior is clearer in Fig. 9, where a bundle of intermediate solutions of the nonlinear problem in the optimization process is shown in red. The black solution is the best approximation to the linear solution in blue. The set \(\{S_2\}\) of sextupoles obtained at the end of this stage is
$$\begin{aligned} \{S_2\}=&\{-12.652556508323647, -14.064768921050906,\nonumber \\ {}&-9.3690097475880325, -13.208398288099152,\nonumber \\ {}&13.876702657250986, 9.9207821018681628,\nonumber \\ {}&9.0521585477227173, -23.680418521419298,\nonumber \\ {}&12.458479828590827\}. \end{aligned}$$
Again, for comparison, the gray curves shown in Fig. 9 are the ones shown in Fig. 7 for the optimization \(\{S_1\}\).
Structure of the phase space \((x,p_x)\) at the end of the third optimization process stage of the sextupoles set. The black curves are obtained as the branches of real roots solutions that give a minimum value of \(f_{obj}\). Starting from the sextupoles \(\{S_1\}\), that had optimized the objective function \(f_{obj}\) with amplitude \(x_0=7\) mm, a new set of sextupoles \(\{S_{2}\}\) is found when \(x_0=10\) mm is used. With the new sextupoles \(\{S_{2}\}\) (Eq. (27)), no low order resonances are present since the real solution branches have been separated by the optimization process when requested that the inner branches are close to the Courant-Snyder ellipse, in blue. The dynamic aperture is now beyond 10 mm. Units in the plot are m-rad.
Enlargement of the phase space \((x,p_x)\) of Fig. 8 that depicts the set of inner branches (red) of the nonlinear problem in greater detail. These are intermediate solutions in the optimization process, from which the sextupoles, that minimize the objective function \(f_{obj}\), arise. With the new sextupoles \(\{S_{2}\}\), obtained by optimizing at 10 mm, we see that the estimated dynamic aperture (black) has grown beyond 10 mm. The black solution is the best approximation to the ellipse (blue) that represents the linear solution, according to the Courant-Snyder invariant. The gray tone curves show the structure of the underlying phase space (Fig. 7) before doing the new optimization using \(x_0=10\) mm.
The next point of interest to investigate is the increase in dynamic aperture achieved with the \(\{S_2\}\) optimization. For this, different values of oscillation amplitude \(x_0\) are used that produce the trajectories shown in Fig. 10. We notice that closed trajectories start to break up at around 17 mm, although results coming from the quasi-invariant have been found to overestimate the dynamic aperture compared to that obtained by tracking simulation26.
Structure of the phase space \((x,p_x)\) showing the dynamic aperture estimated by the quasi-invariant method, after optimization at \(x_0=10\) mm, with the set of sextupoles \(\{S_{2}\}\). The quasi-invariant method predicts stability for amplitudes of the order of 15 mm. For comparison purposes, the resonance structure (Fig. 3) provided by the initial set \(\{S_0\}\) of sextupoles is included in black.
Comparison of dynamic apertures from quasi-invariant techniques with those from particle tracking simulations
How the theoretical results obtained with quasi-invariants compare with those of particle tracking simulation given by OPA? Using the set of sextupoles \(\{S_2\}\) in OPA, the good agreement between both schemes is noteworthy. This is shown in Fig. 11, which represents the phase space (\(x,p_x\)) for on-momentum particles (\(\delta =0\)). In blue, several trajectories calculated with OPA, using different amplitudes, are shown. No low-order resonances are observed, which are a major problem in synchrotron light sources. The dynamic aperture shown in Fig. 11, of the order of 15 mm, confirms that the method of quasi-invariants gives a good prediction of the dynamic aperture. To make this agreement noticeable, the curves in Fig. 10 are superimposed in faint red.
Structure of the phase space \((x,p_x)\) according to OPA particle tracking simulation (blue), showing good agreement with the dynamic aperture obtained with the quasi-invariant method (faint red superimposed, Fig. 10). Both calculations have been performed using the set of sextupoles \(\{S_2\}\) for on-momentum particles (\(\delta =0\)).
Optimization over on-momentum particles permeates to off-momentum particles
Interestingly, although this work does not incorporate the formalism related to off-momentum particles \((\delta \ne 0)\), we see that the corresponding phase space, in this case provided by particle tracking simulation with OPA, also seems to follow the inertia of a linear dynamic, to which on-momentum particles have been forced with a small value of the objective function \(f_{obj}\). That is, there is a certain degree of transmissibility of the stability of the considered phase space, when going from \(\delta =0\) to \(\delta \ne 0\). This process is observed in the following six figures (Figs. 12, 13, 14, 15, 16, 17), calculated with the set of sextupoles \(\{S_2\}\), for a momentum deviation \(\delta = -3,- 2,-1,1,2\) and \(3 \%\). This is surprising because no optimization was done involving momentum deviation \(\delta \ne 0\).
The origin shift in Figs. 12, 13, 14, 15, 16, 17 is consistent with the displacement of the off-momentum closed orbit given by \(\eta \delta \) for a value of \(\eta =0.2039\) m, which is the dispersion value in the straight sections for our toy model synchrotron.
Structure of phase space \((x,p_x)\) for \(\delta =-1\%\) when various initial conditions are used for particle tracking simulation in OPA, using the set of sextupoles \(\{S_2\}\).
Structure of phase space \((x,p_x)\) for \(\delta =1\%\) when various initial conditions are used for particle tracking simulation in OPA, using the set of sextupoles \(\{S_2\}\).
Limit on the oscillation amplitude for the optimization of the quasi-invariant
In previous sections phase space area results have been obtained by optimizations using the quasi-invariant defined in Eq. (9), carried out at amplitude values \(x_0=7\) and 10 mm. What is being explored now is the possibility of further increasing the value of \(x_0\). A new optimization is performed at \(x_0=12\) mm and it is shown that the increase in dynamic aperture is insignificant, indicating that the non-linear dynamics of the considered synchrotron does not allow a greater increase in dynamic aperture, and/or the description of the quasi-invariant is less exact (the degree of the polynomial considered is not large enough) as the amplitudes \(x_0\) increase. In Fig. 18 this process is shown. Optimization at 10 mm of Fig. 11 is shown in faint blue, while a new calculation (dark blue, superimposed) is obtained by the optimization process using an amplitude of 12 mm. Figure 11 was obtained with the set of sextupoles \(\{S_2\}\), while this dark blue figure is made with the new set of sextupoles \(\{S_3\}\), shown in Eq. (28), both, for on-momentum particles (\(\delta =0\)). Comparing both results, it is observed that the outer high order islands chain (faint blue) is inhibited. New KAM tori appear, marginally expanding the stable area of the phase space. It is interesting to note that the new selection \(\{S_3\}\) differs less than \(<2\%\) from \(\{S_2\}\) which represents a fine adjustment of sextupole intensities. The effect is significantly smaller than that obtained going from 7 to 10 mm. This suggests that the optimization limit is being reached.
$$\begin{aligned} \{S_3\}=&\{-12.864277322994971, -14.050032324271484,\nonumber \\ {}&-9.3226604284599492, -13.099983409997680,\nonumber \\ {}&14.022145293554772, 10.011461482140758,\nonumber \\ {}&9.2036108097558724, -24.02863787948452,\nonumber \\ {}&12.21599140620257\}. \end{aligned}$$
Structure of the phase space \((x,p_x)\) according to OPA's particle tracking simulation (dark blue) done with the \(\{S_3\}\) set of sextupoles obtained for a 12 mm amplitude optimization. In faint blue (background) the optimization at 10 mm of Fig. 11 is shown. In the new calculation (dark blue) there is a slight increase in the phase space area. Both structures (blue dark and faint) in phase space are for on-momentum particles (\(\delta = 0\)).
For a synchrotron represented in one dimension, the results obtained are a clear indication of the robustness of the quasi-invariant concept and its use in increasing synchrotron dynamic aperture, requesting the topologies of the phase space of a nonlinear and a linear system to be similar in a bounded area of small oscillation amplitudes. The mathematical formalism of this idea can be extended to higher dimensions to achieve low emittance designs.
Although not analyzed in this paper, it is also noteworthy that studies using quasi-invariants report a large amplitude-dependent tune spread20. Therefore, it is expected that with our technique, resonance lines, probably of higher orders, can be crossed without noticeable effects due to their narrow stop bandwidths.
This work has been motivated by the increasing requirements in optimizing the magnetic lattices of synchrotron light sources to meet the growing needs of users of synchrotron radiation, such as increased brightness and coherence. Methods developed by other authors have been essential to deal with the nonlinear problem, some of them involve a large capacity of computational resources. In this work, it is shown that the scheme based on a polynomial quasi-invariant is a dynamic aperture optimization technique complementary to methods using resonant terms or many turns particle tracking simulation9. The precision achieved with the quasi-invariants is sufficient to accomplish a dynamic aperture compatible with numerical tracking results. With these ideas, we introduce and explore one more mechanism useful to find solutions that allows advancing in the understanding of dynamic processes in synchrotrons. The proposed algorithm requires the construction of a quasi-invariant of motion whose use is valid in a restricted phase space area including its origin, where the electron beam should be stable. A quasi-invariant mechanism, and an objective function that allows manipulating the onset of resonances and, thereby, increasing the dynamic aperture of a particular synchrotron design, were studied. In the stability region, the resemblance between a non-linear system phase space topology and the one of the linear system, seems to be a key to achieve good results in increasing the dynamic aperture. The results obtained were validated by comparison with particle tracking simulations, using available software in the field of accelerator physics. The numerical results indicate that the proposed method can be used as a suitable scheme to increase the dynamic aperture in the one-dimension studied model. The methodology can be easily extended to two dimensions by building a second quasi-invariant, allowing to increase the dynamic aperture of the two-dimensional nonlinear problem, a phenomenon that is quite restrictive in fourth generation light sources. Work is ongoing in this direction.
The data generated or analyzed during the current study are available from the corresponding author on reasonable request.
Einfeld, D., Schaper, J. & Plesko, M. A lattice design to reach the theoretical minimum emittance for a storage ring. in Proceedings of the EPAC96, 638 (1996).
Mobilio, S., Boscherini, F. & Meneghini, E. C. Synchrotron Radiation: Basics. Methods and Applications. Springer, New York. https://doi.org/10.1007/978-3-642-55315-8 (2015).
Sajaev, M. B. V., Emery, L. & Xiao, A. Multi-objective direct optimization of dynamic acceptance and lifetime for potential upgrades of the advanced photon source. ANL/APS/ vol. LS-319 (2010).
Bengtsson, J. The sextupole scheme for the swiss light source (sls): An analytical approach. https://ados.web.psi.ch/slsnotes/sls0997.pdf (1997).
Soutome, K. & Tanaka, H. Higher-order formulas of amplitude-dependent tune shift caused by a sextupole magnetic field distribution. Phys. Rev. Accel. Beams 20, 064001 (2017).
Brown, K. L., Belbeoch, R., & Bounin, P. First- and second- order magnetic optics matrix equations for the midplane of uniform- field wedge magnets. Rev. Sci. Instrum. 35, 481 https://doi.org/10.1063/1.171885.
Douglas, D. R. & Dragt, A. J. Lie algebraic methods for particle tracking calculations. Proc. 12th Int. Conf. High-Energy Accel. 32, 139 (1983).
Papaphilippou, Y. Detecting chaos in particle accelerators through the frequency map analysis method. Chaos: Interdiscipl. J. Nonlinear Sci. 24(2), 024412. https://doi.org/10.1063/1.4884495 (2014).
Borland, M. Elegant: A flexible sdds-compliant code for accelerator simulation. Adv. Photon Sourc. vol. LS-287, (2000).
Gao, W., Wang, L. & Li, W. Simultaneous optimization of beam emittance and dynamic aperture for electron storage ring using genetic algorithm. Phys. Rev. ST Accel. Beams 14, 094001 (2011).
Kranjčević, M., Riemann, B., Adelmann, A. & Streun, A. Multiobjective optimization of the dynamic aperture using surrogate models based on artificial neural networks. Phys. Rev. Accel. Beams 24, 014601. https://doi.org/10.1103/PhysRevAccelBeams.24.014601 (2021).
Sun, Y. & Borland, M. Comparison of nonlinear dynamics optimization methods for aps-u. in Proceedings of 2nd North American Particle Accelerator Conference, p. WEPOB15 (2016).
Li, Y. & Yu, L.-H. Using square matrix to realize phase space manipulation and dynamic aperture optimization. in Proceedings of 2nd North American Particle Accelerator Conference, p. TUPOB54, (2016).
Yang, L., Li, Y., Guo, W. & Krinsky, S. Multiobjective optimization of dynamic aperture. Phys. Rev. Spec. Top. Accel. Beams 14, 054001 (2011).
Antipov, S. et al. Iota (integrable optics test accelerator): facility and experimental beam physics program. J. Instrum. 12, T03002. https://doi.org/10.1088/1748-0221/12/03/t03002 (2017).
Danilov, V. & Nagaitsev, S. Nonlinear accelerator lattices with one and two analytic invariants. Phys. Rev. Spec. Top. Accel Beams 13, 084002 (2010).
Antipov, S., Nagaitsev, S. & Valishev, A. Single-particle dynamics in a nonlinear accelerator lattice: attaining a large tune spread with octupoles in iota. JINST 12, P04008 (2017).
Webb, S., Cook, N. & Eldred, J. Averaged invariants in storage rings with synchrotron motion. J. Instrum. 15, 12032. https://doi.org/10.1088/1748-0221/15/12/p12032 (2020).
Warnock, R. L., Berg, J. S. & Forest, E. Fast symplectic mapping and quasi-invariants for the large hadron collider. Proc. Particle Accel. Conf. 5, 2804–2806. https://doi.org/10.1109/PAC.1995.505699 (1995).
Li, Y. et al. Design of double-bend and multibend achromat lattices with large dynamic aperture and approximate invariants. Phys. Rev. Accel. Beams 24, (2021).
Gabella, W. E., Ruth, R. D. & Warnock, R. L. Iterative determination of invariant tori for a time-periodic hamiltonian with two degrees of freedom. Phys. Rev. A 46(6), 3493 (1992).
Li, Y., Cheng, W., Yu, L. H. & Rainer, R. Genetic algorithm enhanced by machine learning in dynamic aperture optimization. Phys. Rev. Accel. Beams 21, (2018).
Antillón, A. Emittance for a nonlinear machine: the one-dimensional problem. Part. Accel. 23, 187–195 (1988).
Antillón, A., Forest, E., Hoeneisen, B. & Leyvraz, F. Transport matrices for nonlinear lattice functions. Nucl. Instrum. Methods Phys. Res., Sect. A 305(2), 247–256. https://doi.org/10.1016/0168-9002(91)90544-Z (1991).
Antillón, A. & Hoeneisen, B. Emittance of a nonlinear machine: the two-dimensional problem. Nucl. Instrum. Methods Phys. Res., Sect. A 305(2), 239–246. https://doi.org/10.1016/0168-9002(91)90543-Y (1991).
Sánchez, E. A., Flores, A., Hernández-Cobos, J., Moreno, M. & Antillón, A. Onset of resonances by roots overlapping using quasi-invariants in nonlinear accelerator dynamics. Nonlinear Dyn. Accepted (2022).
Courant, E. & Snyder, H. Theory of the alternating-gradient synchrotron. Ann. Phys. 3(1), 1–48. https://doi.org/10.1016/0003-4916(58)90012-5 (1958).
Article ADS MATH Google Scholar
Dimper, R., Reichert, H., Raimondi, P., Ortiz, L.S., Sette, F. & Susini, J. ESRF upgrade programme phase II (2015 - 2022). https://www.esrf.fr/Apache_files/Upgrade/ESRF-orange-book.pdf (2014).
Aiba, M. et al. SLS-2 Conceptual Design Report. PSI-Bericht 17-03, Editor: Andreas Streun (2017).
Herr, W. & Forest, E. Non-linear dynamics in accelerators. in Particle Physics Reference Library : Volume 3: Accelerators and Colliders (S. Myers and H. Schopper, eds.), pp. 51–104, Springer International Publishing, 2020. https://doi.org/10.1007/978-3-030-34245-6_3.
Suzuki, T. Hamiltonian formulation for synchrotron oscillations and Sacherer's integral equation. Particle Accel. 12, 237–246 (1982). http://cds.cern.ch/record/139498/files/p237.pdf.
Stupakov, G. Lecture notes on classical mechanics and electromagnetism in accelerator physics. https://jseldredphysics.files.wordpress.com/2018/03/stupakov-notes-2011.pdf (2011).
Bengtsson, J. & Streun, A. Robust design strategy for sls-2. https://wiki.classe.cornell.edu/pub/CBB/RDTProject/SLS2-BJ84-001.pdf (2017).
Streun, A. Experimental methods of particle physics particle accelerators. https://ipnp.cz/~dolezal/teach/accel/talks/empp.pdf.
Cai, Y. Singularity and stability in a periodic system of particle accelerators. https://www.slac.stanford.edu/pubs/slacpubs/17250/slac-pub-17251.pdf (2018).
Fjellström, M. Particle tracking in circular accelerators using the exact hamiltonian in sixtrack. CERN-THESIS-2013-248 (2013). https://inspirehep.net/files/74c88492a8d6a790563d145198c4813d.
wxMaxima, https://wxmaxima.sourceforge.net/, wxMaxima, Version 22.03.0, (2022).
Sands, M. Physics of electron storage rings: An introduction. Tech. rep., Stanford Linear Accelerator Center, CA (1970).
Ohnuma, S. Effects of correction sextupoles in synchrotrons. AIP Conf. Proc. 123, 415–423. https://doi.org/10.1063/1.34886 (1984).
Cornacchia, M. Lattices for synchrotron radiation sources. SLAC-PUB-6459, p. 16 (1994).
Levichev, E. B. Low emittance electron storage rings. Phys. Usp. 61, 29. https://doi.org/10.3367/ufne.2016.12.038014 (2018).
Steinhagen, R. J. Tune and chromaticity diagnostics. CERN p. 324 (2009). https://cds.cern.ch/record/1213281/files/p317.pdf.
Antillón, A., et al., Laboratorio nacional de aceleradores y luz sincrotrón: Fase de diseňo y prototipos. https://www.fisica.unam.mx/sincrotron/fomix/PDFs/Vol1.pdf (2015).
Muňoz, M., & Einfeld, D. Optics for the alba light source. Proc. PAC 2005, Knoxville, Tennessee pp. 3777–3779 (2005).
Yang, X.-S. Nature-inspired optimization algorithms. Academic Press, Cambridge (2020).
This work was supported by UNAM-PAPIIT IN108522 and CONACYT CF-2023-I-119. E.A.S. is grateful to CONACYT for funding a postdoctoral fellowship.
Instituto de Ciencias Físicas, Universidad Nacional Autónoma de México, Av. Universidad 1001, Col. Chamilpa, Cuernavaca, Morelos, 62210, Mexico
Edgar Andrés Sánchez, Jorge Hernández-Cobos & Armando Antillón
Departamento de Bioingeniería y Ciencias, Tecnológico de Monterrey, Puebla, 72453, Mexico
Alain Flores
Departamento de Física Teórica, Instituto de Física, Universidad Nacional Autónoma de México, Cd. de México, 04510, Mexico
Matías Moreno
Edgar Andrés Sánchez
Jorge Hernández-Cobos
Armando Antillón
E.A.S., A.F., J.H-C., M.M and A.A. conceptualized this research, analyzed, and interpreted the results, wrote computer codes, prepared figures and wrote and revised the manuscript.
Correspondence to Armando Antillón.
Sánchez, E.A., Flores, A., Hernández-Cobos, J. et al. Increasing beam stability zone in synchrotron light sources using polynomial quasi-invariants. Sci Rep 13, 1335 (2023). https://doi.org/10.1038/s41598-023-27732-y | CommonCrawl |
Professor Ioannis Kontoyiannis
Professor Kontoyiannis works in information theory, applied probability, and statistics, including their applications in neuroscience, bioinformatics, and the development of machine learning algorithms. His research has been funded by the National Science Foundation, the European Union, Greek national funds, the European Research Council, and numerous other national and international bodies. He has also been involved in consulting work for companies in the financial, medical, and high-tech industries.
He has been with DPMMS since June 2020 as Churchill Professor of Mathematics.
Kontoyiannis was born in Athens, Greece, in 1972. He received the B.Sc. degree in mathematics in 1992 from Imperial College (U of London), and in 1993 he obtained a distinction in Part III of the Cambridge University Pure Mathematics Tripos. In 1997 he received the M.S. degree in statistics, and in 1998 the Ph.D. degree in electrical engineering, both from Stanford University. In 1995 he worked at IBM Research, on a NASA-IBM satellite image processing and compression project. From 1998 to 2001 he was with the Department of Statistics at Purdue University (and also, by courtesy, with the Department of Mathematics, and the School of Electrical and Computer Engineering). Between 2000 and 2005 he was with the Division of Applied Mathematics and with the Department of Computer Science at Brown University. Between 2005 and 2021 he was with the Department of Informatics of the Athens University of Economics and Business.
Between 2018 and 2020 he was Professor of Information and Communications with the Information Engineering Division of the Engineering Department at Cambridge, where he was also Head of the Signal Processing and Communications Laboratory, and where he remains as an affiliated member.
In 2002 he was awarded the Manning Endowed Assistant Professorship by Brown University; in 2004 he was awarded a Sloan Foundation Research Fellowship; in 2005 he was awarded an Honorary Master of Arts Degree Ad Eundem by Brown University; in 2009 he was awarded a two-year Marie Curie Fellowship; and in 2011 he was elevated to the grade of IEEE Fellow.
Research Fellow, Institute of Applied & Computational Mathematics, Foundation for Research and Technology – Hellas, Greece
Fellow, Darwin College, Cambridge
Affiliated Member, Signal Processing and Communications Group, Division of Information Engineering, Department of Engineering, Cambridge
Packet Speed and Cost in Mobile Wireless Delay-Tolerant Networks
R Cavallari, S Toumpis, R Verdone, I Kontoyiannis
– IEEE Transactions on Information Theory
(DOI: 10.1109/TIT.2020.3009690)
Nonasymptotic Gaussian Approximation for Inference With Stable Noise
M Riabiz, T Ardeshiri, I Kontoyiannis, S Godsill
Sharp Second-Order Pointwise Asymptotics for Lossless Compression with Side Information.
L Gavalakis, I Kontoyiannis
– Entropy (Basel)
(DOI: 10.3390/e22060705)
A Simple Network of Nodes Moving on the Circle
D Cheliotis, I Kontoyiannis, M Loulakis, S Toumpis
– Random Structures and Algorithms
(DOI: 10.1002/rsa.20932)
Geometric ergodicity in a weighted Sobolev space
A Devraj, I Kontoyiannis, S Meyn
– Annals of Probability
(DOI: 10.1214/19-aop1364)
The Lévy State Space Model
S Godsill, M Riabiz, I Kontoyiannis
– CONFERENCE RECORD OF THE 2019 FIFTY-THIRD ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS
(DOI: 10.1109/ieeeconf44664.2019.9048715)
Denoising line edge roughness measurements using Hidden Markov Models
G Papavieros, I Kontoyiannis, V Constantoudis, E Gogolides
– Proceedings of SPIE - The International Society for Optical Engineering
109592z-109592z-8
(DOI: 10.1117/12.2523422)
Deep Tree Models for 'Big' Biological Data
L Mertzanis, A Panotonoulou, M Skoularidou, I Kontoyiannis
– IEEE Workshop on Signal Processing Advances in Wireless Communications, SPAWC
(DOI: 10.1109/spawc.2018.8445994)
Analysis of a One-Dimensional Continuous Delay-Tolerant Network Model
– 2018 IEEE 19TH INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING ADVANCES IN WIRELESS COMMUNICATIONS (SPAWC)
Sharp Gaussian Approximation Bounds for Linear Systems with $\alpha$ -stable Noise
I Kontoyiannis, M Riabiz, T Ardeshiri, S Godsill
– IEEE International Symposium on Information Theory - Proceedings
2018-June,
(DOI: 10.1109/ISIT.2018.8437513)
[email protected]
D1.09
https://www.dpmms.cam.ac.uk/~ik355/ | CommonCrawl |
Doubly Robust Bayesian Inference for Non-Stationary Streaming Data with $\beta$-Divergences
The paper provided a doubly robust Bayesian inference method for online change-point detection, through the application of General Bayesian Inference with \beta-divergences. Theoretical analysis and empirical results demonstrate that the proposed method is robust to outliers and can adapt to non-stationary data via online parameter update. Overall, this is a paper rigorously written and a valuable contribution to the change-point detection community. Here are some additional comments about the paper: 1) In the algorithm description in Section 3.2, there is an operation called FullOpt, however I could not find the exact definition of this operator. 2) In the same section, the authors mentioned that "an MCMC implementation in Stan takes 10^5 times longer". I was wondering how this conclusion is obtained. It would be better if the authors give concrete running times for both approaches. 3) The main content of the paper is about change-point detection, but the title of the paper is about Robust Bayesian inference in general. I would recommend to have a title that more accurately reflects the content of the paper. Note: The authors agreed in the response to include more details of computational time in the final version, to make the comparison more clear.
Overview The paper introduces a robust online change point detection algorithm for non-stationary time-series data. Robustness comes as a by product of minimizing \beta-divergence between data and fitted model as opposed to using KL divergence as in standard Bayesian inference. \beta-Divergence uses Tsallis loss function which assigns less influence to samples in the tails and as a result the inference that relies on \beta-divergence becomes less sensitive to outliers and the model is less likely to call a random spike as a change point. In the generalized Bayesian inference the posteriors are intractable. The paper mitigate this problem by resorting to structural variational approximation, which is proved to be exact as \beta converges to zero. The paper also discusses systematic approaches to initialize \beta and refine it online. A heuristic stochactic gradient descent is proposed to make the algorithm scalable for streaming data. The main idea is to achieve a trade-off between accuracy and scalability by anchoring stochastic gradient near an optimum. Technical Quality The paper uses a similar idea as in Fearnhead and Rigaill to quantify robustness when studying the odds of r_{t+1} \in {0, r+1} but some motivation as to why these odds are important and why r+1 is selected for robustness would be useful. The paper argues that MCMC applications for online CP detection has been sparse mainly because MCMC are not very scalable. This argument is somewhat unjustified. Similar problems have been dealt with from a stochastic inference point of view using sequential Monte carlo (SMC) samplers and sequential importance resampling. For example Dundar et al. (Bayesian nonexhaustive learning for online discovery and modeling of emerging classes, ICML 2012) uses particle filters to classify non-stationary data, where emerging classes can be considered in a way similar to change points in the time series data. Also triggering kernels in self-exciting point processes such as Hawkes model can also be considered similar to change points. Some of the existing Hawkes model also uses SMC samplers. Given that MCMC samplers has been proved quite scalable and effective in a variety of similar problems it is not very convincing to think that they will not be scalable for BOCPD problem studied in the paper. Theorem 2 proves that the analytical form of the evidence lower bound can be obtained if three quantities have closed forms. Paper states that closed forms of these quantities can be obtained for many exponential models. The significance and relevance of this theorem is not very clear. For example, if one has to use Normal-Inverse-Gamma model the posterior predictive distribution can be obtained in a closed-form as student-t. In which case one is inclined to think that MCMC sampling based on this posterior can also achieve robustness. The paper offers three reasons as to why student-t is not ideal in this case but considering student-t based MCMC as one of the benchmark techniques and demonstrate its limitations on real-world datasets would offer a more compelling case to support the three arguments. Clarity The paper reads well. Most of the presented ideas are easy to follow. Originality Using \beta-divergence to achieve robustness in online change point detection can be considered an original idea. Significance The main contribution of the paper is somewhat incremental and limited. The key idea is to replace KL divergence with \beta-divergence for online CP detection to more effectively deal with outliers. 
Interesting additional work is done especially for initializing and refining \beta online and using SGD anchoring to achieve scalability but the significances of these different pieces of work are difficult to judge because their role on the overall performance of the proposed model are not discussed in greater detail. Although the comparison against KL divergence is well justified no comparison is offered against MCMC techniques. For example potential improvement over sequential Monte carlo samplers is not clear. The thoroughness of experimental discussion and analysis is below NIPS standards. Given these limitations the impact of the proposed work in the ML literature would likely be limited. Other comments: The experimental analysis compares performances of robust vs. standard Bayesian online change point detection on two data sets. Details are quite scarce especially with regard to the selection of hyperparameters in Bayesian priors, characteristics of data sets, additional benchmark techniques etc. For example one straightforward solution to eliminate outliers in univariate data would be to preprocess the signal by a 1-dimensional median filter. Would such a basic preprocessing technique be competitive against more complicated solutions? If yes then it could have been considered for benchmarking. If not, then arguments as to why it would not be effective would be useful to eliminate such naive ideas and get an intuitive understanding of the characteristics of the data sets being studied. In page 7 "Replacing expected ..." may read better if "expected" is replaced by "expectation". Note: I have reviewed author's feedback and glad to see that it addresses some of my concerns about MCMC not being used as a benchmark. Accordingly, I upgraded my score from a 5 to a 6.
This paper builds on recent work in the area of "General Bayesian Inference" (GBI). GBI has been proposed to deal with the "M-open world", where the class of models we choose does not capture the actual sampling distribution. A remarkable fact is that in this setting standard Bayesian updating can be seen as a method which learns a model by minimising the predictive KL-divergence from the model from which the data were sampled. The problem with the KL-divergence in the case where the model is mis-specified the penalties are much worse in the tails, meaning that the posteriors that are derived can be unsatisfactory for e.g. decision making. Hence, the use of "robust" divergence measures are proposed, which focus instead on the areas of greatest mass. $\beta$-divergences have been proposed to do exactly this. The $\beta$-divergence can be derived from the more general $\alpha,\beta$-divergence by setting $\alpha$ to 1, or from the Bregman divergence family. There are two nice properties of the $\beta$-divergence: posterior inference does not require estimating the data generating density; and it results in robust inference, since observations with low predicted probability under the model are automatically down weighted. This is not for free, and is at the cost of statistical efficiency. The paper's first contribution is to take GBI, and apply it to sequential Bayesian inference, where the prior at time t is the posterior from t - 1, to produce a general online inference method that is naturally robust to model misspecification. I feel that some of this could be made clearer. For example, M-open worlds are cited but not explained, and to the general reader it is probably not at all obvious why standard Bayesian inference corresponds to the KL between the fitted model and the data generating mechanism. I also think that the point that using $\beta$-divergence gives your robustness *irrespective of the model* is quite subtle. It is mentioned in the paper (e.g. at the end of 2.1) but I think this point could get missed. The technical contribution here is then to take a specific model: the Product Partition Model (aka Bayesian Changepoint Detection), and derive the inference procedure for the $\beta$-divergence. This essentially boils down to a regression model that is computed over the period since the last changepoint (the run length), with a corresponding model for the probability of the run continuing from one time step to the next. Note that the (extremely long and detailed) derivations are given in the appendix. The second contribution is to improve the efficiency of GBI in this model using stochastic variance reduced gradient descent (SVRG). This is actually general to GBI, rather than specific to BOCPD, although it's also a fairly trivial modification from SVRG with KL. There is an extra nuance where the gradient estimates are anchored every few steps, which presumably comes from an insight (failure?) whilst running vanilla SVRG. It would be interesting to know how much is lost (if anything) versus performing full optimisation of the variational parameters (as we know, sometimes SGD provides an additional regularisation effect). The final technical contribution is of optimisation of $\beta$, including doing so in an online fashion. There are a lot of moving parts in this, even for the static version. It would be interesting in the examples to see how $\beta$ ends up getting set, and how it evolves through the time course of the experiments. 
Without this is somewhat difficult to estimate the utility of this method of selection, and whether the online updating is necessary or not. The experiments themselves are two classical examples in CPD. Unsurprisingly the proposed method is more robust to outliers, so mission accomplished in this regard. However, I feel there are multiple ways that these could be improved. Firstly, both of these examples are 1-dimensional; whilst the methods are clearly applicable to higher dimensions, and it would be really interesting to see how they behave. Also, as described in the appendix, there are quite a few hyper parameters and preprocessing steps, and little intuition as to how these are reached. I fear that a lot of experimentation was performed to get to this stage, and that application to a new domain would remain a difficult task. Specific comments: Figure 1a is a bit unintuitive. It's clear that the KL is focusing more on the tails of the distribution. However, the influence has a strange profile for the $\beta$-divergences - going up as you move away before falling away again. It's hard to say what the practitioner should take from this: one might think to use as small a $\beta$ as possible. However when $\beta$->0 it reduces to the KL-Divergence (again, not obvious at all from the plot). I realise that section 4 discusses optimisation of $\beta$, but I think there is some intuition missing here In the definition of the product partition model, there is a $\Sigma_0$ which has no definition (it is multiplied by $\sigma^2$, which has a conjugate prior) In the algorithm, what is B vs b? What is Geom? Is the $\mathcal{I}$ sampled with or without replacement? In the for loop, it looks like the indexer i isn't used, and clashes with the $i \in \mathcal{I}$ [22] looks like it has been published under the title "Principles of Bayesian Inference Using General Divergence Criteria" references missing in appendices (latex build) Note: I didn't check all of the appendix, but the parts that I did check were all correct. Note: I have reviewed author's feedback, and glad to see that it addresses the concerns raised both by myself also by R2. I had a misconception of the data being used in the 2nd example. Accordingly I have raised my score from a 7 to an 8. | CommonCrawl |
A new Carleson measure adapted to multi-level ellipsoid covers
CPAA Home
Admissibility and generalized nonuniform dichotomies for discrete dynamics
October 2021, 20(10): 3445-3479. doi: 10.3934/cpaa.2021113
Choquard equations via nonlinear rayleigh quotient for concave-convex nonlinearities
M. L. M. Carvalho 1, , Edcarlos D. Silva 1,, and C. Goulart 2,
Universidade Federal de Goiás, IME, Goiânia-GO, Brazil
Universidade Federal de Jataí, Jataí-GO, Brazil
Received June 2020 Revised May 2021 Published October 2021 Early access June 2021
Fund Project: The second author was partially supported by CNPq and FAPDF with grants 309026/2020-2 and 16809.78.45403.25042017, respectively
It is established existence of ground and bound state solutions for Choquard equation considering concave-convex nonlinearities in the following form
$ \begin{equation*} \begin{cases} -\Delta u +V(x) u = (I_\alpha* |u|^p)|u|^{p-2}u+ \lambda |u|^{q-2}u \, {\rm{\;in\;}}\, \mathbb{R}^N, \\ \ u\in H^1( \mathbb{R}^N) \end{cases} \end{equation*} $
$ \lambda > 0, N \geq 3, \alpha \in (0, N) $
. The potential
$ V $
is a continuous function and
$ I_\alpha $
denotes the standard Riesz potential. Assume also that
$ 1 < q < 2 $
$ 2_\alpha < p < 2^*_\alpha $
$ 2_\alpha = (N+\alpha)/N $
$ 2_\alpha = (N+\alpha)/(N-2) $
. Our main contribution is to consider a specific condition on the parameter
$ \lambda > 0 $
taking into account the nonlinear Rayleigh quotient. More precisely, there exists
$ \lambda^* > 0 $
such that our main problem admits at least two positive solutions for each
$ \lambda \in (0, \lambda^*] $
. In order to do that we combine Nehari method with a fine analysis on the nonlinear Rayleigh quotient. The parameter
$ \lambda^*> 0 $
is optimal in some sense which allow us to apply the Nehari method.
Keywords: Choquard equation, concave-convex nonlinearities, Nehari method, nonlinear Rayleigh quotient, nonlocal elliptic problems.
Mathematics Subject Classification: Primary: 35A01, 35A15; Secondary: 35A23, 35A25.
Citation: M. L. M. Carvalho, Edcarlos D. Silva, C. Goulart. Choquard equations via nonlinear rayleigh quotient for concave-convex nonlinearities. Communications on Pure & Applied Analysis, 2021, 20 (10) : 3445-3479. doi: 10.3934/cpaa.2021113
C. O. Alves and Ji anfu Yang, Existence and regularity of solutions for a Choquard equation with zero mass, Milan J. Math. Vol., 86 (2018), 329-342. doi: 10.1007/s00032-018-0289-x. Google Scholar
A. Ambrosetti, H. Brezis and G. Cerami, Combined effects of concave and convex nonlinearities in some elliptic problems, J. Funct. Anal., 122 (1994), 519-543. doi: 10.1006/jfan.1994.1078. Google Scholar
M. Badiale and E. Serra, Semilinear Elliptic Equations for Beginners. Existence Results via the Variational Approach, Universitext. Springer, London, 2011. doi: 10.1007/978-0-85729-227-8. Google Scholar
T. Bartsch and Z. Q. Wang, Existence and multiplicity results for some superlinear elliptic problems on $\mathbb{R}^{N}$, Commun. Partial Differ. Equ., 20 (1995), 1725–1741. doi: 10.1080/03605309508821149. Google Scholar
K. J. Brown and T. F. Wu, A fibering map approach to a semilinear elliptic boundary value problem, Electr. J. Differ. Equ., 69 (2007), 1-9. Google Scholar
K. J. Brown and T. F. Wu, A fibering map approach to a potential operator equation and its applications, Differ. Int. Equ., 22 (2009), 1097-1114. Google Scholar
M. L. M. Carvalho, Y. Ilyasov and C. A. Santos, Separating of critical points on the Nehari manifold via the nonlinear generalized Rayleigh quotients, arXiv: 1906.07759. Google Scholar
Yi-Hsin Cheng and Tsung-Fang Wu, Multiplicity and concentration of positive solutions for semilinear elliptic equations with steep potential, Commun. Pure Appl. Anal., 15 (2016), 1534-0392. doi: 10.3934/cpaa.2016044. Google Scholar
S. Chen and X. Tang, Ground state solutions for general Choquard equations with a variable potential and a local nonlinearity, Rev. R. Acad. Cienc. Exactas Fìs. Nat. Ser. A Mat. RACSAM, 114 (2020), 14 pp. doi: 10.1007/s13398-019-00775-5. Google Scholar
P. Drábek and J. Milota, Methods of Nonlinear Analysis, Basler Lehrbücher, 2013. doi: 10.1007/978-3-0348-0387-8. Google Scholar
D. Gilbarg and N. S. Trudinger, Elliptic Partial Differential Equations of Second Order, Springer, 2015. Google Scholar
E. P. Gross, Physics of Many-Particle Systems, Gordon Breach, New York, 1996. Google Scholar
Y. Huang, Tsung-Fang Wu and Y. Wu, Multiple positive solutions for a class of concave-convex elliptic problems in $\mathbb{R}^N$ involving sign-changing weight, II, Commun. Contemp. Math., 17 (2015), 1450045. doi: 10.1142/S021919971450045X. Google Scholar
X. Li, X. Liu and S. Mab, Infinitely many bound states for Choquard equations with local nonlinearities, Nonlinear Anal., 189 (2019), 111583. doi: 10.1016/j.na.2019.111583. Google Scholar
V. Moroz and J. Van Schaftingen, A guide to the Choquard equation, J. Fixed Point Theory Appl., 19 (2017), 773-813. doi: 10.1007/s11784-016-0373-1. Google Scholar
V. Moroz and J. Van Schaftingen, Ground states of nonlinear Choquard equations: Existence, qualitative properties and decay asymptotics, J. Funct. Anal., 265 (2013), 153-184. doi: 10.1016/j.jfa.2013.04.007. Google Scholar
Z. Nehari, On a class of nonlinear second-order differential equations, Trans. Amer. Math. Soc., 95 (1960), 101-123. doi: 10.2307/1993333. Google Scholar
Z. Nehari, Characteristic values associated with a class of non-linear second-order differential equations, Acta Math., 105 (1961), 141-175. doi: 10.1007/BF02559588. Google Scholar
E. H. Lieb, Existence and uniqueness of the minimizing solution of Choquard's nonlinear equation, Stud. Appl. Math., 57 (1977), 93-105. doi: 10.1002/sapm197757293. Google Scholar
X. Li and S. Ma, Choquard equations with critical nonlinearities, Communications in Contemporary Mathematics, 22 (2020), 1950023. doi: 10.1142/S0219199719500238. Google Scholar
R. Penrose, On gravity's role in quantum state reduction, Gen. Relativity Gravitation, 28 (1996), 581-600. doi: 10.1007/BF02105068. Google Scholar
S. I. Pokhozhaev, The fibration method for solving nonlinear boundary value problems, Trudy Mat. Inst. Steklov., 192 (1990), 146-163. Google Scholar
P. Drabek and S. I. Pohozaev, Positive solutions for the p-Laplacian: application of the fibering method, Proc. Roy. Soc. Edinburgh Sect. A, 127 (1997), 703-726. doi: 10.1017/S0308210500023787. Google Scholar
S. Pekar, Untersuchung Ber Die Elektronentheorie Der Kristalle, Akademie Verlag, Berlin, 1954. Google Scholar
P. Pucci and J. Serrin, The maximum principle, in Nonlinear Differential Equations and their Applications, Birkhäuser Verlag, Basel, 2007. Google Scholar
P. Rabinowitz, Minimax methods in critical point theory with applications to differential equations, Conf. Board of Math. Sci. Reg. Conf. Ser. in Math., No. 65, Amer. Math. Soc., 1986. doi: 10.1090/cbms/065. Google Scholar
P. Rabinowitz, On a class of nonlinear Schrödinger equations, Z. Angew. Math. Phys., 43, (1992), 270–291. doi: 10.1007/BF00946631. Google Scholar
C. A. Santos, R. L. Alves and K. Silva, Multiplicity of negative-energy solutions for singular-superlinear Schrödinger equations with indefinite-sign potential, (To appear in Communications in Contemporary Mathematics). Google Scholar
M. Struwe, Variational methods Applications to Nonlinear Partial Differential Equations and Hamiltonian Systems, Ergebnisse der Mathematik und ihrer Grenzgebiete, Springer Verlag, Berlin, 2000. doi: 10.1007/978-3-662-04194-9. Google Scholar
Y. Il'yasov and K. Silva, On branches of positive solutions for p-Laplacian problems at the extreme value of Nehari manifold method, Proc. Amer. Math. Soc., 146 (2018), 2925-2935. doi: 10.1090/proc/13972. Google Scholar
Y. Il'yasov, On extreme values of Nehari manifold method via nonlinear Rayleigh's quotient, Topol. Methods Nonlinear Anal., 49 (2017), 683-714. doi: 10.12775/tmna.2017.005. Google Scholar
Y. Il'yasov, On nonlocal existence results for elliptic equations with convex-concave nonlinearities, Nonl. Anal.: Th., Meth. Appl., 61 (2005), 211-236. doi: 10.1016/j.na.2004.10.022. Google Scholar
M. Willem, Minimax Theorems, Birkhauser Boston, Basel, Berlin, 1996. doi: 10.1007/978-1-4612-4146-1. Google Scholar
Tsung-Fang Wu, Multiple positive solutions for a class of concave-convex elliptic problems in $\mathbb{R}^N$ involving sign-changing weight, J Funct. Anal., 258 (2010), 99-131. doi: 10.1016/j.jfa.2009.08.005. Google Scholar
Figure 1. $ \lambda\in (0,\lambda_*) $
Figure 2. $ \lambda = \lambda_* $
Figure 3. $ \lambda\in(\lambda_*,\lambda^*) $
Figure 4. The functions $ Q_n(t) $, $ Q_e(t) $
Figure 5. $ \lambda\in(0,\lambda_*) $
Figure 8. $ \lambda_1<\lambda_2 $
Jia-Feng Liao, Yang Pu, Xiao-Feng Ke, Chun-Lei Tang. Multiple positive solutions for Kirchhoff type problems involving concave-convex nonlinearities. Communications on Pure & Applied Analysis, 2017, 16 (6) : 2157-2175. doi: 10.3934/cpaa.2017107
Boumediene Abdellaoui, Abdelrazek Dieb, Enrico Valdinoci. A nonlocal concave-convex problem with nonlocal mixed boundary data. Communications on Pure & Applied Analysis, 2018, 17 (3) : 1103-1120. doi: 10.3934/cpaa.2018053
Junping Shi, Ratnasingham Shivaji. Exact multiplicity of solutions for classes of semipositone problems with concave-convex nonlinearity. Discrete & Continuous Dynamical Systems, 2001, 7 (3) : 559-571. doi: 10.3934/dcds.2001.7.559
Yaoping Chen, Jianqing Chen. Existence of multiple positive weak solutions and estimates for extremal values for a class of concave-convex elliptic problems with an inverse-square potential. Communications on Pure & Applied Analysis, 2017, 16 (5) : 1531-1552. doi: 10.3934/cpaa.2017073
Miao-Miao Li, Chun-Lei Tang. Multiple positive solutions for Schrödinger-Poisson system in $\mathbb{R}^{3}$ involving concave-convex nonlinearities with critical exponent. Communications on Pure & Applied Analysis, 2017, 16 (5) : 1587-1602. doi: 10.3934/cpaa.2017076
Jinguo Zhang, Dengyun Yang. Fractional $ p $-sub-Laplacian operator problem with concave-convex nonlinearities on homogeneous groups. Electronic Research Archive, 2021, 29 (5) : 3243-3260. doi: 10.3934/era.2021036
Lucas C. F. Ferreira, Elder J. Villamizar-Roa. On the heat equation with concave-convex nonlinearity and initial data in weak-$L^p$ spaces. Communications on Pure & Applied Analysis, 2011, 10 (6) : 1715-1732. doi: 10.3934/cpaa.2011.10.1715
Min Liu, Zhongwei Tang. Multiplicity and concentration of solutions for Choquard equation via Nehari method and pseudo-index theory. Discrete & Continuous Dynamical Systems, 2019, 39 (6) : 3365-3398. doi: 10.3934/dcds.2019139
Qingfang Wang. Multiple positive solutions of fractional elliptic equations involving concave and convex nonlinearities in $R^N$. Communications on Pure & Applied Analysis, 2016, 15 (5) : 1671-1688. doi: 10.3934/cpaa.2016008
Salvatore A. Marano, Nikolaos S. Papageorgiou. Positive solutions to a Dirichlet problem with $p$-Laplacian and concave-convex nonlinearity depending on a parameter. Communications on Pure & Applied Analysis, 2013, 12 (2) : 815-829. doi: 10.3934/cpaa.2013.12.815
João Marcos do Ó, Uberlandio Severo. Quasilinear Schrödinger equations involving concave and convex nonlinearities. Communications on Pure & Applied Analysis, 2009, 8 (2) : 621-644. doi: 10.3934/cpaa.2009.8.621
Qingfang Wang. The Nehari manifold for a fractional Laplacian equation involving critical nonlinearities. Communications on Pure & Applied Analysis, 2018, 17 (6) : 2261-2281. doi: 10.3934/cpaa.2018108
Asadollah Aghajani. Regularity of extremal solutions of semilinear elliptic problems with non-convex nonlinearities on general domains. Discrete & Continuous Dynamical Systems, 2017, 37 (7) : 3521-3530. doi: 10.3934/dcds.2017150
Kanishka Perera, Marco Squassina. On symmetry results for elliptic equations with convex nonlinearities. Communications on Pure & Applied Analysis, 2013, 12 (6) : 3013-3026. doi: 10.3934/cpaa.2013.12.3013
J. García-Melián, Julio D. Rossi, José Sabina de Lis. A convex-concave elliptic problem with a parameter on the boundary condition. Discrete & Continuous Dynamical Systems, 2012, 32 (4) : 1095-1124. doi: 10.3934/dcds.2012.32.1095
Darya V. Verveyko, Andrey Yu. Verisokin. Application of He's method to the modified Rayleigh equation. Conference Publications, 2011, 2011 (Special) : 1423-1431. doi: 10.3934/proc.2011.2011.1423
Shouchuan Hu, Nikolaos S. Papageorgiou. Nonlinear Neumann problems with indefinite potential and concave terms. Communications on Pure & Applied Analysis, 2015, 14 (6) : 2561-2616. doi: 10.3934/cpaa.2015.14.2561
Mingzheng Sun, Jiabao Su, Leiga Zhao. Infinitely many solutions for a Schrödinger-Poisson system with concave and convex nonlinearities. Discrete & Continuous Dynamical Systems, 2015, 35 (1) : 427-440. doi: 10.3934/dcds.2015.35.427
Bartosz Bieganowski, Simone Secchi. The semirelativistic Choquard equation with a local nonlinear term. Discrete & Continuous Dynamical Systems, 2019, 39 (7) : 4279-4302. doi: 10.3934/dcds.2019173
Zhijun Zhang. Boundary blow-up for elliptic problems involving exponential nonlinearities with nonlinear gradient terms and singular weights. Communications on Pure & Applied Analysis, 2007, 6 (2) : 521-529. doi: 10.3934/cpaa.2007.6.521
M. L. M. Carvalho Edcarlos D. Silva C. Goulart | CommonCrawl |
Chapter 1: Economics: The Study of Choice
1.1 Defining Economics
1.2 The Field of Economics
1.3 The Economists' Tool Kit
1.4 Review and Practice
Chapter 2: Confronting Scarcity: Choices in Production
2.1 Factors of Production
2.2 The Production Possibilities Curve
2.3 Applications of the Production Possibilities Model
Chapter 3: Demand and Supply
3.1 Demand
3.2 Supply
3.3 Demand, Supply, and Equilibrium
Chapter 4: Applications of Demand and Supply
4.1 Putting Demand and Supply to Work
4.2 Government Intervention in Market Prices: Price Floors and Price Ceilings
4.3 The Market for Health-Care Services
Chapter 5: Elasticity: A Measure of Response
5.1 The Price Elasticity of Demand
5.2 Responsiveness of Demand to Other Factors
5.3 Price Elasticity of Supply
Chapter 6: Markets, Maximizers, and Efficiency
6.1 The Logic of Maximizing Behavior
6.2 Maximizing in the Marketplace
6.3 Market Failure
Chapter 7: The Analysis of Consumer Choice
7.1 The Concept of Utility
7.2 Utility Maximization and Demand
7.3 Indifference Curve Analysis: An Alternative Approach to Understanding Consumer Choice
Chapter 8: Production and Cost
8.1 Production Choices and Costs: The Short Run
8.2 Production Choices and Costs: The Long Run
Chapter 9: Competitive Markets for Goods and Services
9.1 Perfect Competition: A Model
9.2 Output Determination in the Short Run
9.3 Perfect Competition in the Long Run
Chapter 10: Monopoly
10.1 The Nature of Monopoly
10.2 The Monopoly Model
10.3 Assessing Monopoly
10.4 Review and Practice
Chapter 11: The World of Imperfect Competition
11.1 Monopolistic Competition: Competition Among Many
11.2 Oligopoly: Competition Among the Few
11.3 Extensions of Imperfect Competition: Advertising and Price Discrimination
Chapter 12: Wages and Employment in Perfect Competition
12.1 The Demand for Labor
12.2 The Supply of Labor
12.3 Labor Markets at Work
Chapter 13: Interest Rates and the Markets for Capital and Natural Resources
13.1 Time and Interest Rates
13.2 Interest Rates and Capital
13.3 Natural Resources and Conservation
Chapter 14: Imperfectly Competitive Markets for Factors of Production
14.1 Price-Setting Buyers: The Case of Monopsony
14.2 Monopsony and the Minimum Wage
14.3 Price Setters on the Supply Side
Chapter 15: Public Finance and Public Choice
15.1 The Role of Government in a Market Economy
15.2 Financing Government
15.3 Choices in the Public Sector
Chapter 16: Antitrust Policy and Business Regulation
16.1 Antitrust Laws and Their Interpretation
16.2 Antitrust and Competitiveness in a Global Economy
16.3 Regulation: Protecting People from the Market
Chapter 17: International Table
17.1 The Gains from Trade
17.2 Two-Way Trade
17.3 Restrictions on International Trade
Chapter 18: The Economics of the Environment
18.1 Maximizing the Net Benefits of Pollution
18.2 Alternatives in Pollution Control
Chapter 19: Inequality, Poverty, and Discrimination
19.1 Income Inequality
19.2 The Economics of Poverty
19.3 The Economics of Discrimination
Chapter 20: Macroeconomics: The Big Picture
20.1 Growth of Real GDP and Business Cycles
20.2 Price-Level Changes
20.3 Unemployment
Chapter 21: Measuring Total Output and Income
21.1 Measuring Total Output
21.2 Measuring Total Income
21.3 GDP and Economic Well-Being
Chapter 22: Aggregate Demand and Aggregate Supply
22.1 Aggregate Demand
22.2 Aggregate Demand and Aggregate Supply: The Long Run and the Short Run
22.3 Recessionary and Inflationary Gaps and Long-Run Macroeconomic Equilibrium
Chapter 23: Economic Growth
23.1 The Significance of Economic Growth
23.2 Growth and the Long-Run Aggregate Supply Curve
23.3 Determinants of Economic Growth
Chapter 24: The Nature and Creation of Money
24.1 What Is Money?
24.2 The Banking System and Money Creation
24.3 The Federal Reserve System
Chapter 25: Financial Markets and the Economy
25.1 The Bond and Foreign Exchange Markets
25.2 Demand, Supply, and Equilibrium in the Money Market
Chapter 26: Monetary Policy and the Fed
26.1 Monetary Policy in the United States
26.2 Problems and Controversies of Monetary Policy
26.3 Monetary Policy and the Equation of Exchange
Chapter 27: Government and Fiscal Policy
27.1 Government and the Economy
27.2 The Use of Fiscal Policy to Stabilize the Economy
27.3 Issues in Fiscal Policy
Chapter 28: Consumption and the Aggregate Expenditures Model
28.1 Determining the Level of Consumption
28.2 The Aggregate Expenditures Model
28.3 Aggregate Expenditures and Aggregate Demand
Chapter 29: Investment and Economic Activity
29.1 The Role and Nature of Investment
29.2 Determinants of Investment
29.3 Investment and the Economy
Chapter 30: Net Exports and International Finance
30.1 The International Sector: An Introduction
30.2 International Finance
30.3 Exchange Rate Systems
Chapter 31: Inflation and Unemployment
31.1 Relating Inflation and Unemployment
31.2 Explaining Inflation–Unemployment Relationships
31.3 Inflation and Unemployment in the Long Run
Chapter 32: A Brief History of Macroeconomic Thought and Policy
32.1 The Great Depression and Keynesian Economics
32.2 Keynesian Economics in the 1960s and 1970s
32.3. An Emerging Consensus: Macroeconomics for the Twenty-First Century
Chapter 33: Economic Development
33.1 The Nature and Challenge of Economic Development
33.2 Population Growth and Economic Development
33.3 Keys to Economic Development
Chapter 34: Socialist Economies in Transition
34.1 The Theory and Practice of Socialism
34.2 Socialist Systems in Action
34.3 Economies in Transition: China and Russia
Appendix A: Graphs in Economics
Appendix A.1: How to Construct and Interpret Graphs
Appendix A.2: Nonlinear Relationships and Graphs without Numbers
Appendix A.3: Using Graphs and Charts to Show Values of Variables
Appendix B: Extensions of the Aggregate Expenditures Model
Appendix B.1: The Algebra of Equilibrium
Appendix B.2: The Aggregate Expenditures Model and Fiscal Policy
Appendix B.3: Review and Practice
Principles of Economics
Explain and graph the consumption function and the saving function, explain what the slopes of these curves represent, and explain how the two are related to each other.
Compare the current income hypothesis with the permanent income hypothesis, and use each to predict the effect that temporary versus permanent changes in income will have on consumption.
Discuss two factors that can cause the consumption function to shift upward or downward.
J. R. McCulloch, an economist of the early nineteenth century, wrote, "Consumption … is, in fact, the object of industry" (Mc Culloch, 1824). Goods and services are produced so that people can use them. The factors that determine consumption thus determine how successful an economy is in fulfilling its ultimate purpose: providing goods and services for people. So, consumption is not just important because it is such a large component of economic activity. It is important because, as McCulloch said, consumption is at the heart of the economy's fundamental purpose.
Consumption and Disposable Personal Income
It seems reasonable to expect that consumption spending by households will be closely related to their disposable personal income, which equals the income households receive less the taxes they pay. Note that disposable personal income and GDP are not the same thing. GDP is a measure of total income; disposable personal income is the income households have available to spend during a specified period.
Real values of disposable personal income and consumption per year from 1960 through 2010 are plotted in Figure 28.1 "The Relationship Between Consumption and Disposable Personal Income, 1960–2010". The data suggest that consumption generally changes in the same direction as does disposable personal income.
The relationship between consumption and disposable personal income is called the consumption function. It can be represented algebraically as an equation, as a schedule in a table, or as a curve on a graph.
Figure 28.1 The Relationship Between Consumption and Disposable Personal Income, 1960–2010
Plots of consumption and disposable personal income over time suggest that consumption increases as disposable personal income increases.
Source: U. S. Department of Commerce, Bureau of Economic Analysis, NIPA Tables 1.16 and 2.1 (November 23, 2010 revision; Data are through 3rd quarter 2010).
Figure 28.2 "Plotting a Consumption Function" illustrates the consumption function. The relationship between consumption and disposable personal income that we encountered in Figure 28.1 "The Relationship Between Consumption and Disposable Personal Income, 1960–2010" is evident in the table and in the curve: consumption in any period increases as disposable personal income increases in that period. The slope of the consumption function tells us by how much. Consider points C and D. When disposable personal income (Yd) rises by $500 billion, consumption rises by $400 billion. More generally, the slope equals the change in consumption divided by the change in disposable personal income. The ratio of the change in consumption (ΔC) to the change in disposable personal income (ΔYd) is the marginal propensity to consume (MPC). The Greek letter delta (Δ) is used to denote "change in."
Equation 28.1
[latex]MPC = \frac{ \Delta C}{ \Delta Y_d}[/latex]
In this case, the marginal propensity to consume equals $400/$500 = 0.8. It can be interpreted as the fraction of an extra $1 of disposable personal income that people spend on consumption. Thus, if a person with an MPC of 0.8 received an extra $1,000 of disposable personal income, that person's consumption would rise by $0.80 for each extra $1 of disposable personal income, or $800.
We can also express the consumption function as an equation
[latex]C = \$ 300 \: billion + 0.8Y_d[/latex]
Figure 28.2 Plotting a Consumption Function
The consumption function relates consumption C to disposable personal income Yd. The equation for the consumption function shown here in tabular and graphical form is C = $300 billion + 0.8Yd.
It is important to note carefully the definition of the marginal propensity to consume. It is the change in consumption divided by the change in disposable personal income. It is not the level of consumption divided by the level of disposable personal income. Using Equation 28.2, at a level of disposable personal income of $500 billion, for example, the level of consumption will be $700 billion so that the ratio of consumption to disposable personal income will be 1.4, while the marginal propensity to consume remains 0.8. The marginal propensity to consume is, as its name implies, a marginal concept. It tells us what will happen to an additional dollar of personal disposable income.
Notice from the curve in Figure 28.2 "Plotting a Consumption Function" that when disposable personal income equals 0, consumption is $300 billion. The vertical intercept of the consumption function is thus $300 billion. Then, for every $500 billion increase in disposable personal income, consumption rises by $400 billion. Because the consumption function in our example is linear, its slope is the same between any two points. In this case, the slope of the consumption function, which is the same as the marginal propensity to consume, is 0.8 all along its length.
We can use the consumption function to show the relationship between personal saving and disposable personal income. Personal saving is disposable personal income not spent on consumption during a particular period; the value of personal saving for any period is found by subtracting consumption from disposable personal income for that period:
[latex]Personal \ saving = disposable \ personal \ income \ - \ consumption[/latex]
The saving function relates personal saving in any period to disposable personal income in that period. Personal saving is not the only form of saving—firms and government agencies may save as well. In this chapter, however, our focus is on the choice households make between using disposable personal income for consumption or for personal saving.
Figure 28.3 "Consumption and Personal Saving" shows how the consumption function and the saving function are related. Personal saving is calculated by subtracting values for consumption from values for disposable personal income, as shown in the table. The values for personal saving are then plotted in the graph. Notice that a 45-degree line has been added to the graph. At every point on the 45-degree line, the value on the vertical axis equals that on the horizontal axis. The consumption function intersects the 45-degree line at an income of $1,500 billion (point D). At this point, consumption equals disposable personal income and personal saving equals 0 (point D′ on the graph of personal saving). Using the graph to find personal saving at other levels of disposable personal income, we subtract the value of consumption, given by the consumption function, from disposable personal income, given by the 45-degree line.
Figure 28.3 Consumption and Personal Saving
Personal saving equals disposable personal income minus consumption. The table gives hypothetical values for these variables. The consumption function is plotted in the upper part of the graph. At points along the 45-degree line, the values on the two axes are equal; we can measure personal saving as the distance between the 45-degree line and consumption. The curve of the saving function is in the lower portion of the graph.
At a disposable personal income of $2,000 billion, for example, consumption is $1,900 billion (point E). Personal saving equals $100 billion (point E′)—the vertical distance between the 45-degree line and the consumption function. At an income of $500 billion, consumption totals $700 billion (point B). The consumption function lies above the 45-degree line at this point; personal saving is −$200 billion (point B′). A negative value for saving means that consumption exceeds disposable personal income; it must have come from saving accumulated in the past, from selling assets, or from borrowing.
Notice that for every $500 billion increase in disposable personal income, personal saving rises by $100 billion. Consider points C′ and D′ in Figure 28.3 "Consumption and Personal Saving". When disposable personal income rises by $500 billion, personal saving rises by $100 billion. More generally, the slope of the saving function equals the change in personal saving divided by the change in disposable personal income. The ratio of the change in personal saving (ΔS) to the change in disposable personal income (ΔYd) is the marginal propensity to save (MPS).
[latex]MPS = \frac{ \Delta S}{ \Delta Y_d}[/latex]
In this case, the marginal propensity to save equals $100/$500 = 0.2. It can be interpreted as the fraction of an extra $1 of disposable personal income that people save. Thus, if a person with an MPS of 0.2 received an extra $1,000 of disposable personal income, that person's saving would rise by $0.20 for each extra $1 of disposable personal income, or $200. Since people have only two choices of what to do with additional disposable personal income—that is, they can use it either for consumption or for personal saving—the fraction of disposable personal income that people consume (MPC) plus the fraction of disposable personal income that people save (MPS) must add to 1:
[latex]MPC + MPS = 1[/latex]
Current versus Permanent Income
The discussion so far has related consumption in a particular period to income in that same period. The current income hypothesis holds that consumption in any one period depends on income during that period, or current income.
Although it seems obvious that consumption should be related to disposable personal income, it is not so obvious that consumers base their consumption in any one period on the income they receive during that period. In buying a new car, for example, consumers might base their decision not only on their current income but on the income they expect to receive during the three or four years they expect to be making payments on the car. Parents who purchase a college education for their children might base their decision on their own expected lifetime income.
Indeed, it seems likely that virtually all consumption choices could be affected by expectations of income over a very long period. One reason people save is to provide funds to live on during their retirement years. Another is to build an estate they can leave to their heirs through bequests. The amount people save for their retirement or for bequests depends on the income they expect to receive for the rest of their lives. For these and other reasons, then, personal saving (and thus consumption) in any one year is influenced by permanent income. Permanent income is the average annual income people expect to receive for the rest of their lives.
People who have the same current income but different permanent incomes might reach very different saving decisions. Someone with a relatively low current income but a high permanent income (a college student planning to go to medical school, for example) might save little or nothing now, expecting to save for retirement and for bequests later. A person with the same low income but no expectation of higher income later might try to save some money now to provide for retirement or bequests later. Because a decision to save a certain amount determines how much will be available for consumption, consumption decisions can also be affected by expected lifetime income. Thus, an alternative approach to explaining consumption behavior is the permanent income hypothesis, which assumes that consumption in any period depends on permanent income. An important implication of the permanent income hypothesis is that a change in income regarded as temporary will not affect consumption much, since it will have little effect on average lifetime income; a change regarded as permanent will have an effect. The current income hypothesis, though, predicts that it does not matter whether consumers view a change in disposable personal income as permanent or temporary; they will move along the consumption function and change consumption accordingly.
The question of whether permanent or current income is a determinant of consumption arose in 1992 when President George H. W. Bush ordered a change in the withholding rate for personal income taxes. Workers have a fraction of their paychecks withheld for taxes each pay period; Mr. Bush directed that this fraction be reduced in 1992. The change in the withholding rate did not change income tax rates; by withholding less in 1992, taxpayers would either receive smaller refund checks in 1993 or owe more taxes. The change thus left taxpayers' permanent income unaffected.
President Bush's measure was designed to increase aggregate demand and close the recessionary gap created by the 1990–1991 recession. Economists who subscribed to the permanent income hypothesis predicted that the change would not have any effect on consumption. Those who subscribed to the current income hypothesis predicted that the measure would boost consumption substantially in 1992. A survey of households taken during this period suggested that households planned to spend about 43% of the temporary increase in disposable personal income produced by the withholding experiment (Shapiro & Slemrod, 1995). That is considerably less than would be predicted by the current income hypothesis, but more than the zero change predicted by the permanent income hypothesis. This result, together with related evidence, suggests that temporary changes in income can affect consumption, but that changes regarded as permanent will have a much stronger impact.
Many of the tax cuts passed during the administration of President George W. Bush are set to expire in 2010. The proposal to make these tax cuts permanent is aimed toward having a stronger impact on consumption, since tax cuts regarded as permanent have larger effects than do changes regarded as temporary.
Other Determinants of Consumption
The consumption function graphed in Figure 28.2 "Plotting a Consumption Function" and Figure 28.3 "Consumption and Personal Saving" relates consumption spending to the level of disposable personal income. Changes in disposable personal income cause movements along this curve; they do not shift the curve. The curve shifts when other determinants of consumption change. Examples of changes that could shift the consumption function are changes in real wealth and changes in expectations. Figure 28.4 "Shifts in the Consumption Function" illustrates how these changes can cause shifts in the curve.
Figure 28.4 Shifts in the Consumption Function
An increase in the level of consumption at each level of disposable personal income shifts the consumption function upward in Panel (a). Among the events that would shift the curve upward are an increase in real wealth and an increase in consumer confidence. A reduction in the level of consumption at each level of disposable personal income shifts the curve downward in Panel (b). The events that could shift the curve downward include a reduction in real wealth and a decline in consumer confidence.
Changes in Real Wealth
An increase in stock and bond prices, for example, would make holders of these assets wealthier, and they would be likely to increase their consumption. An increase in real wealth shifts the consumption function upward, as illustrated in Panel (a) of Figure 28.4 "Shifts in the Consumption Function". A reduction in real wealth shifts it downward, as shown in Panel (b).
A change in the price level changes real wealth. We learned in an earlier chapter that the relationship among the price level, real wealth, and consumption is called the wealth effect. A reduction in the price level increases real wealth and shifts the consumption function upward, as shown in Panel (a). An increase in the price level shifts the curve downward, as shown in Panel (b).
Changes in Expectations
Consumers are likely to be more willing to spend money when they are optimistic about the future. Surveyors attempt to gauge this optimism using "consumer confidence" surveys that ask respondents to report whether they are optimistic or pessimistic about their own economic situation and about the prospects for the economy as a whole. An increase in consumer optimism tends to shift the consumption function upward as in Panel (a) of Figure 28.4 "Shifts in the Consumption Function"; an increase in pessimism tends to shift it downward as in Panel (b). The sharp reduction in consumer confidence in 2008 and early in 2009 contributed to a downward shift in the consumption function and thus to the severity of the recession.
The relationship between consumption and consumer expectations concerning future economic conditions tends to be a form of self-fulfilling prophecy. If consumers expect economic conditions to worsen, they will cut their consumption—and economic conditions will worsen! Political leaders often try to persuade people that economic prospects are good. In part, such efforts are an attempt to increase economic activity by boosting consumption.
Consumption is closely related to disposable personal income and is represented by the consumption function, which can be presented in a table, in a graph, or in an equation.
Personal saving is disposable personal income not spent on consumption.
The marginal propensity to consume is MPC = ΔC/ΔYd and the marginal propensity to save is MPS = ΔS/ΔYd. The sum of the MPC and MPS is 1.
The current income hypothesis holds that consumption is a function of current disposable personal income, whereas the permanent income hypothesis holds that consumption is a function of permanent income, which is the income households expect to receive annually during their lifetime. The permanent income hypothesis predicts that a temporary change in income will have a smaller effect on consumption than is predicted by the current income hypothesis.
Other factors that affect consumption include real wealth and expectations.
For each of the following events, draw a curve representing the consumption function and show how the event would affect the curve.
A sharp increase in stock prices increases the real wealth of most households.
Consumers decide that a recession is ahead and that their incomes are likely to fall.
The price level falls.
Case in Point: Consumption and the Tax Rebate of 2001
Figure 28.5
Max Wei – Cashier – CC BY-ND 2.0.
The first round of the Bush tax cuts was passed in 2001. Democrats in Congress insisted on a rebate aimed at stimulating consumption. In the summer of 2001, rebates of $300 per single taxpayer and of $600 for married couples were distributed. The Department of Treasury reported that 92 million people received the rebates. While the rebates were intended to stimulate consumption, the extent to which the tax rebates stimulated consumption, especially during the recession, is an empirical question.
It is difficult to analyze the impact of a tax rebate that is a single event experienced by all households at the same time. If spending does change at that moment, is it because of the tax rebate or because of some other event that occurred at that time?
Fortunately for researchers Sumit Agarwal, Chunlin Liu, and Nicholas Souleles, using data from credit card accounts, the 2001 tax rebate checks were distributed over 10 successive weeks from July to September of 2001. The timing of receipt was random, since it was based on the next-to-last digit of one's Social Security number, and taxpayers were informed well in advance that the checks were coming. The researchers found that consumers initially saved much of their rebates, by paying down their credit card debts, but over a nine-month period, spending increased to about 40% of the rebate. They also found that consumers who were most liquidity constrained (for example, close to their credit card debt limits) spent more than consumers who were less constrained.
The researchers thus conclude that their findings do not support the permanent income hypothesis, since consumers responded to spending based on when they received their checks and because the results indicate that consumers do respond to what they call "lumpy" changes in income, such as those generated by a tax rebate. In other words, current income does seem to matter.
Two other studies of the 2001 tax rebate reached somewhat different conclusions. Using survey data, researchers Matthew D. Shapiro and Joel Slemrod estimated an MPC of about one-third. They note that this low increased spending is particularly surprising, since the rebate was part of a general tax cut that was expected to last a long time. At the other end, David S. Johnson, Jonathan A. Parker, and Nicholas S. Souleles, using yet another data set, found that looking over a six-month period, the MPC was about two-thirds. So, while there is disagreement on the size of the MPC, all conclude that the impact was non-negligible.
Sources: Sumit Agarwal, Chunlin Liu, and Nicholas S. Souleles, "The Reaction of Consumer Spending and Debt to Tax Rebates—Evidence from Consumer Credit Data," NBER Working Paper No. 13694, December 2007; David S. Johnson, Jonathan A. Parker, and Nicholas S. Souleles, "Household Expenditure and the Income Tax Rebates of 2001," American Economic Review 96, no. 5 (December 2006): 1589–1610; Matthew D. Shapiro and Joel Slemrod, "Consumer Response to Tax Rebates," American Economic Review 93, no. 1 (March 2003): 381–96; and Matthew D. Shapiro and Joel Slemrod, "Did the 2001 Rebate Stimulate Spending? Evidence from Taxpayer Surveys," NBER Tax Policy & the Economy 17, no. 1 (2003): 83–109.
Answers to Try It! Problems
A sharp increase in stock prices makes people wealthier and shifts the consumption function upward, as in Panel (a) of Figure 28.4 "Shifts in the Consumption Function".
This would be reported as a reduction in consumer confidence. Consumers are likely to respond by reducing their purchases, particularly of durable items such as cars and washing machines. The consumption function will shift downward, as in Panel (b) of Figure 28.4 "Shifts in the Consumption Function".
A reduction in the price level increases real wealth and thus boosts consumption. The consumption function will shift upward, as in Panel (a) of Figure 28.4 "Shifts in the Consumption Function".
Mc Culloch, J. R., A Discourse on the Rise, Progress, Peculiar Objects, and Importance, of Political Economy: Containing the Outline of a Course of Lectures on the Principles and Doctrines of That Science (Edinburgh: Archibald Constable, 1824), 103.
Shapiro, M. D., and Joel Slemrod, "Consumer Response to the Timing of Income: Evidence from a Change in Tax Withholding," American Economic Review 85 (March 1995): 274–83.
Previous: Chapter 28: Consumption and the Aggregate Expenditures Model
Next: 28.2 The Aggregate Expenditures Model
Principles of Economics by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted. | CommonCrawl |
Search all SpringerOpen articles
Journal of Inequalities and Applications
Research | Open | Published: 14 April 2015
A new iterative algorithm for split solution problems of quasi-nonexpansive mappings
Rong Li1 &
Zhenhua He2
Journal of Inequalities and Applicationsvolume 2015, Article number: 131 (2015) | Download Citation
Some strong convergence algorithms are introduced to solve the split common fixed point problem for quasi-nonexpansive mappings. These results develop the related ones for fixed point iterative methods in the literature.
Introduction and preliminaries
Throughout this paper, let H be a real Hilbert space with zero vector θ, whose inner product and norm are denoted by $\langle \cdot,\cdot\rangle$ and $\Vert\cdot\Vert$, respectively. The symbols ℕ and ℝ are used to denote the sets of positive integers and real numbers, respectively. Let K be a nonempty closed convex subset of a Banach space E and T be a mapping from K into itself. In this paper, the set of fixed points of T is denoted by $F(T)$. The symbols → and ⇀ denote strong and weak convergence, respectively.
Let $T:K\rightarrow K$ be a mapping and K a subset of a Banach space E. T is called a nonexpansive mapping if, for all $x,y\in K$, $\Vert Tx-Ty\Vert\leq\Vert x-y\Vert$. T is called quasi-nonexpansive, if $F(T)\neq\emptyset$ and for all $x\in K$, $p\in F(T)$, $\Vert Tx-Tp\Vert\leq\Vert x-p\Vert$. For examples of quasi-nonexpansive mappings, see [1].
Let $H_{1}$ and $H_{2}$ be two real Hilbert spaces. $T_{1}:H_{1}\rightarrow H_{1}$, $T_{2}:H_{2}\rightarrow H_{2}$ are two nonlinear operators with $F(T_{1})\neq\emptyset$ and $F(T_{2})\neq\emptyset$. $A:H_{1}\rightarrow H_{2}$ is a bounded linear operator. The split fixed point problem for $T_{1}$ and $T_{2}$ is to
$$ \mbox{find an element } x\in F(T_{1}) \mbox{ such that } Ax\in F(T_{2}). $$
Let $\Gamma=\{ x\in F(T_{1}): Ax\in F(T_{2})\}$ denote the solution set of the problem (1.1). The problem was proposed by Censor and Segal [2] in a finite-dimensional space firstly. Next, Moudafi [3] studied the problem (1.1) in real Hilbert spaces; this generalized the problem (1.1) from a finite-dimensional space to infinite-dimensional Hilbert spaces. More precisely, the following result was obtained.
Theorem M
(see [3])
Let $H_{1}$ and $H_{2}$ be two real Hilbert spaces. Given a bounded linear operator $A:H_{1}\rightarrow H_{2}$, let $U:H_{1}\rightarrow H_{1}$ and $T:H_{2}\rightarrow H_{2}$ be two quasi-nonexpansive operators with $F(U)\neq\emptyset$ and $F(T)\neq\emptyset$. Assume that $U-I$ and $T-I$ are demiclosed at θ. Let $\{x_{n}\}$ be generated by
$$ \left \{ \begin{array}{l} x_{1}\in H_{1}, \\ u_{n}=x_{n}+\gamma\beta A^{*}(T-I)Ax_{n}, \\ x_{n+1}=(1-\alpha_{n})u_{n}+\alpha_{n}U(u_{n}),\quad \forall n\in \mathbb{N},\end{array} \right . $$
where $\beta\in(0,1)$, $\{\alpha_{n}\}\subset(\delta,1-\delta)$ for a small enough $\delta>0$, $\gamma\in(0,\frac{1}{\lambda\beta})$, and λ is the spectral radius of the operator $A^{*}A$. Then $\{x_{n}\}$ weakly converges to a split common fixed point $x^{*}\in\{x^{*}\in F(U): Ax^{*}\in F(T)\}$.
It is well known that the split feasibility problem and the convex feasibility problem are useful to some areas of applied mathematics such as image recovery, convex optimization, and so on. According to [2], the split common fixed point problem (1.1) is a generalization of both these; also see [3]. This shows the split common fixed point problem (1.1) is important. Recently, some convergence theorems for the split common solution problems were given in [4–9]. We notice that Theorem M is a weak convergence theorem, and it is well known that a strong convergence theorem is always more convenient to use. Hence, the purpose of this paper is to give some algorithms for the problem (1.1), and establishes some strong convergence theorems. At the same time, we generalize the problem (1.1) to two countable families of quasi-nonexpansive mappings.
A mapping T is said to be demiclosed if, for any sequence $\{x_{n}\}$ which weakly converges to y, and if the sequence $\{Tx_{n}\}$ strongly converges to z, we have $T(y)=z$; see [3].
Definition 1.1
(see [4, 5])
Let K be a nonempty closed convex subset of a real Hilbert space and T a mapping from K into K. The mapping T is called zero-demiclosed if $\{x_{n}\}$ in K satisfying $\Vert x_{n}-Tx_{n}\Vert\rightarrow0$ and $x_{n}\rightharpoonup z\in K$ implies $Tz=z$.
Proposition 1.1
Let K be a nonempty closed convex subset of a real Hilbert space with zero vector θ and T a mapping from K into K. Then the following statements hold.
T is zero-demiclosed if and only if $I-T$ is demiclosed at θ.
If T is a nonexpansive mapping and there is a bounded sequence $\{x_{n}\}\subset H$ such that $\Vert x_{n}-Tx_{n}\Vert\rightarrow0$ as $n\rightarrow0$, then T is zero-demiclosed.
Example 1.1
Let $H=\mathbb{R}$ with the inner product defined by $\langle x,y\rangle=xy$ for all $x,y\in\mathbb{R}$ and the standard norm $|\cdot|$. Let $C:=[0,+\infty)$ and $Tx=\frac{x^{2}+2}{1+x}$ for all $x\in C$. Then T is a continuous zero-demiclosed quasi-nonexpansive mapping but not nonexpansive.
Let $H=\mathbb{R}$ with the inner product defined by $\langle x,y\rangle=xy$ for all $x,y\in\mathbb{R}$ and the standard norm $|\cdot|$. Let $C:=[0,+\infty)$. Let T be a mapping from C into C defined by
$$ Tx=\left \{ \begin{array}{l@{\quad}l} \frac{2x}{x^{2}+1},&x\in(1,+\infty), \\ 0,& x\in[0,1]. \end{array} \right . $$
Then T is a discontinuous quasi-nonexpansive mapping but not zero-demiclosed.
The following results are important in this paper.
Let C be a closed convex subset of a real Hilbert space H. $P_{C}$ denotes a metric projection of H onto C, it is well known that $P_{C}(x)$ has the properties: for $x\in H$, and $z\in C$,
$$ z=P_{C}(x)\quad \Leftrightarrow\quad \langle x-z,z-y\rangle\geq0, \quad \forall y \in C $$
$$ \bigl\Vert y-P_{C}(x)\bigr\Vert ^{2}+\bigl\Vert x-P_{C}(x)\bigr\Vert ^{2}\leq\Vert x-y\Vert^{2}, \quad \forall y\in C, \forall x\in H. $$
In a real Hilbert space H, it is also well known that
$$ \bigl\Vert \lambda x+(1-\lambda)y\bigr\Vert ^{2}= \lambda \Vert x\Vert^{2}+(1-\lambda)\Vert y\Vert^{2}- \lambda(1-\lambda)\Vert x-y\Vert^{2},\quad \forall x,y\in H, \forall \lambda\in\mathbb{R} $$
$$ 2\langle x,y\rangle=\Vert x\Vert^{2}+\Vert y\Vert^{2}-\Vert x- y\Vert^{2},\quad \forall x,y\in H. $$
Strong convergence theorems
In this section, we construct some algorithms to solve the split common fixed point problem (1.1) for quasi-nonexpansive mappings.
Theorem 2.1
Let $H_{1}$ and $H_{2}$ be two real Hilbert spaces. C is a nonempty closed convex subset of $H_{1}$ and K a nonempty closed convex subset of $H_{2}$. $T_{1}: C\rightarrow H_{1}$ and $T_{2}:H_{2}\rightarrow H_{2}$ are two quasi-nonexpansive mappings with $F(T_{1})\neq\emptyset$ and $F(T_{2})\neq\emptyset$. $A: H_{1}\rightarrow H_{2}$ is a bounded linear operator. Assume that $T_{1} -I$ and $T_{2}-I$ are demiclosed at θ. Let $x_{0}\in C$, $C_{0}=C$, and $\{x_{n}\}$ be a sequence generated in the following manner:
$$ \left \{ \begin{array}{l} y_{n}=\alpha_{n}z_{n}+(1-\alpha_{n})T_{1}z_{n}, \\ z_{n}=P_{C}(x_{n}+\lambda A^{*}(T_{2}-I)Ax_{n}), \\ C_{n+1}=\{x\in C_{n}:\Vert y_{n}-x\Vert\leq\Vert z_{n}-x\Vert\leq\Vert x_{n}-x\Vert\}, \\ x_{n+1}=P_{C_{n+1}}(x_{0}),\quad \forall n\in \mathbb{N} \cup\{0\}, \end{array} \right . $$
where P is a projection operator and $A^{*}$ denotes the adjoint of A. $\{\alpha_{n}\}\subset(0,\eta]\subset(0,1)$, $\lambda\in(0,\frac{1}{\Vert A^{*}\Vert^{2}})$. Assume that $\Gamma=\{p\in F(T_{1}): Ap\in F(T_{2})\}\neq\emptyset$, then $x_{n} \rightarrow x^{*}\in \Gamma$ and $Ax_{n} \rightarrow Ax^{*}\in F(T_{2})$.
It is easy to verify that $C_{n}$ is closed for $n\in\mathbb{N}\cup\{0\}$. We verify $C_{n}$ is convex for $n\in\mathbb{N}\cup\{0\}$. In fact, let $v_{1},v_{2}\in C_{n+1}$, for each $\lambda\in(0,1)$, we have
$$\begin{aligned} \bigl\Vert y_{n}-\bigl(\lambda v_{1}+(1- \lambda)v_{2}\bigr)\bigr\Vert ^{2} =&\bigl\Vert \lambda(y_{n}-v_{1})-(1-\lambda) (y_{n}-v_{2}) \bigr\Vert ^{2} \\ =& \lambda\Vert y_{n}-v_{1}\Vert^{2}+(1-\lambda) \Vert y_{n}-v_{2}\Vert ^{2}-\lambda(1-\lambda)\Vert v_{1}-v_{2}\Vert^{2} \\ \leq&\lambda\Vert z_{n}-v_{1}\Vert^{2}+(1- \lambda)\Vert z_{n}-v_{2}\Vert^{2}-\lambda(1- \lambda)\Vert v_{1}-v_{2}\Vert^{2} \\ =&\bigl\Vert z_{n}-\bigl(\lambda v_{1}+(1- \lambda)v_{2}\bigr)\bigr\Vert ^{2}, \end{aligned}$$
namely, $\Vert y_{n}-(\lambda v_{1}+(1-\lambda)v_{2})\Vert\leq\Vert z_{n}-(\lambda v_{1}+(1-\lambda)v_{2})\Vert$. Similarly, we have $\Vert z_{n}-(\lambda v_{1}+(1-\lambda)v_{2})\Vert\leq\Vert x_{n}-(\lambda v_{1}+(1-\lambda)v_{2})\Vert$; this shows $\lambda v_{1}+(1-\lambda)v_{2}\in C_{n+1}$ and $C_{n+1}$ is a convex set for $n\in\mathbb{N}\cup\{0\}$. Now we prove $\Gamma\subset C_{n}$ for $n\in\mathbb{N}\cup\{0\}$. Let $p\in\Gamma$, then
$$\begin{aligned}& 2\lambda\bigl\langle x_{n}-p, A^{*}(T_{2}Ax_{n}-Ax_{n}) \bigr\rangle \\& \quad = 2\lambda\bigl\langle A(x_{n}-p)+(T_{2}Ax_{n}-Ax_{n})-(T_{2}Ax_{n}-Ax_{n}), T_{2}Ax_{n}-Ax_{n}\bigr\rangle \\& \quad = 2\lambda\bigl(\bigl\langle T_{2}Ax_{n}-Ap, (T_{2}Ax_{n}-Ax_{n})\bigr\rangle -\Vert T_{2}Ax_{n}-Ax_{n}\Vert^{2}\bigr) \\& \quad = 2\lambda\biggl(\frac{1}{2}\Vert T_{2}Ax_{n}-Ap \Vert^{2}+\frac{1}{2}\Vert T_{2}Ax_{n}-Ax_{n} \Vert^{2} \\& \qquad {}-\frac {1}{2}\Vert Ax_{n}-Ap\Vert^{2} -\Vert T_{2}Ax_{n}-Ax_{n}\Vert^{2}\biggr) \quad \text{by (1.6)} \\& \quad \leq 2\lambda\biggl(\frac{1}{2}\Vert T_{2}Ax_{n}-Ax_{n} \Vert^{2} -\Vert T_{2}Ax_{n}-Ax_{n} \Vert^{2}\biggr) \\& \quad = -\lambda\Vert T_{2}Ax_{n}-Ax_{n} \Vert^{2}. \end{aligned}$$
From (2.1) and (2.2) we have
$$\begin{aligned} \Vert z_{n}-p\Vert ^{2} =&\bigl\Vert P_{C} \bigl(x_{n}+\lambda A^{*}(T_{2}Ax_{n}-Ax_{n}) \bigr)-P_{C}(p)\bigr\Vert ^{2} \\ \leq&\bigl\Vert x_{n}+\lambda A^{*}(T_{2}Ax_{n}-Ax_{n})-p \bigr\Vert ^{2} \\ =&\Vert x_{n}-p\Vert ^{2}+\bigl\Vert \lambda A^{*}(T_{2}Ax_{n}-Ax_{n})\bigr\Vert ^{2}+ 2\lambda \bigl\langle x_{n}-p, A^{*}(T_{2}Ax_{n}-Ax_{n}) \bigr\rangle \\ \leq&\Vert x_{n}-p\Vert ^{2}+ \lambda^{2} \bigl\Vert A^{*}\bigr\Vert ^{2}\Vert T_{2}Ax_{n}-Ax_{n} \Vert ^{2}-\lambda \Vert T_{2}Ax_{n}-Ax_{n} \Vert ^{2} \\ =& \Vert x_{n}-p\Vert ^{2}-\lambda\bigl(1-\lambda\bigl\Vert A^{*}\bigr\Vert ^{2} \bigr)\Vert T_{2}Ax_{n}-Ax_{n} \Vert ^{2}. \end{aligned}$$
Again from $p\in\Gamma$, (2.1), and (2.3), it follows that
$$ \Vert y_{n}-p\Vert\leq\Vert z_{n}-p\Vert\leq\Vert x_{n}-p\Vert. $$
Hence, $p\in C_{n}$ and $\Gamma\subset C_{n}$ for $n\in \mathbb{N}\cup\{0\}$.
Notice that $\Gamma\subset C_{n+1}\subset C_{n}$ and $x_{n+1}=P_{C_{n+1}}(x_{0})\subset C_{n}$, then
$$ \Vert x_{n+1}-x_{0}\Vert\leq\Vert p-x_{0}\Vert \quad \text{for } n\in\mathbb {N} \text{ and } p\in\Gamma. $$
By (2.5), $\{x_{n}\}$ is bounded. For $n\in\mathbb{N}$, by (1.4), we have
$$ \Vert x_{n+1}-x_{n}\Vert ^{2}+\Vert x_{0}-x_{n}\Vert ^{2}=\bigl\Vert x_{n+1}-P_{C_{n}}(x_{0})\bigr\Vert ^{2}+ \bigl\Vert x_{0}-P_{C_{n}}(x_{0})\bigr\Vert ^{2}\leq \Vert x_{n+1}-x_{0}\Vert ^{2}, $$
which implies that $0\leq\Vert x_{n}-x_{n+1}\Vert^{2}\leq\Vert x_{n+1}-x_{0}\Vert^{2}-\Vert x_{0}-x_{n}\Vert^{2}$. Thus $\{\Vert x_{n}-x_{0}\Vert\}$ is non-decreasing. Therefore, by the boundedness of $\{ x_{n}\}$, $\lim_{n\rightarrow\infty}\Vert x_{n}-x_{0}\Vert$ exists. For $m, n\in\mathbb{N}$ with $m>n$, from $x_{m}=P_{C_{m}}(x_{0})\subset C_{n}$ and (1.4), we have
$$ \Vert x_{m}-x_{n}\Vert ^{2}+\Vert x_{0}-x_{n}\Vert ^{2}=\bigl\Vert x_{m}-P_{C_{n}}(x_{0})\bigr\Vert ^{2}+ \bigl\Vert x_{0}-P_{C_{n}}(x_{0})\bigr\Vert ^{2}\leq \Vert x_{m}-x_{0}\Vert ^{2}. $$
By (2.5) and (2.6), $\lim_{n\rightarrow\infty}\Vert x_{n}-x_{m}\Vert=0$. So, $\{x_{n}\}$ is a Cauchy sequence.
Let $x_{n}\rightarrow x^{*}$. Since $x_{n+1}=P_{C_{n+1}}(x_{0})\in C_{n+1}\subset C_{n}$, we have
$$\begin{aligned}& \Vert z_{n}-x_{n}\Vert\leq\Vert z_{n}-x_{n+1}\Vert+\Vert x_{n+1}-x_{n}\Vert \leq2\Vert x_{n+1}-x_{n}\Vert\rightarrow0, \\& \Vert y_{n}-x_{n}\Vert\leq\Vert y_{n}-x_{n+1} \Vert+\Vert x_{n+1}-x_{n}\Vert\leq2\Vert x_{n+1}-x_{n} \Vert\rightarrow0, \\& \Vert y_{n}-z_{n}\Vert\leq\Vert y_{n}-x_{n} \Vert+\Vert x_{n}-z_{n}\Vert \rightarrow0. \end{aligned}$$
Notice that $\lambda(1- \lambda\Vert A^{*}\Vert^{2})>0$, from (2.3) and (2.7),
$$\begin{aligned} \Vert T_{2}Ax_{n}-Ax_{n}\Vert^{2} \leq& \frac{ \Vert x_{n}-p\Vert^{2}-\Vert z_{n}-p\Vert^{2} }{\lambda(1- \lambda\Vert A^{*}\Vert^{2})} \\ \leq& \frac{1}{\lambda(1- \lambda\Vert A^{*}\Vert^{2})}\Vert x_{n}-z_{n}\Vert \bigl\{ \Vert x_{n}-p\Vert +\Vert z_{n}-p\Vert \bigr\} \rightarrow0. \end{aligned}$$
Again from (2.1) and (2.7), we have
$$ \Vert T_{1}z_{n}-z_{n}\Vert=\bigl\Vert (T_{1}-I)z_{n}\bigr\Vert \rightarrow0. $$
Since $x_{n}\rightarrow x^{*}$, from (2.7) we have $z_{n}\rightarrow x^{*}$, which implies that $z_{n}\rightharpoonup x^{*}$. By Proposition 1.1, we obtain $x^{*}\in F(T_{1})$.
Next, we want to show $Ax^{*}\in F(T_{2})$. Since A is a bounded linear operator, we know that $\Vert Ax_{n}-Ax^{*}\Vert\rightarrow0$ by $x_{n}\rightarrow x^{*}$. Together with $\Vert T_{2}Ax_{n}-Ax_{n}\Vert\rightarrow0$ and $T_{2}-I$ being demiclosed at θ, we have $Ax^{*}\in F(T_{2})$. Thus, $x^{*}\in \Gamma$ and $\{x_{n}\}$ converges strongly to $x^{*}\in\Gamma$. The proof is completed. □
Remark 2.1
If the quasi-nonexpansive mappings $T_{1}$ and $T_{2}$ are continuous, then the demiclosed property can be removed for the quasi-nonexpansive mappings $T_{1}$ and $T_{2}$ in Theorem 2.1.
Now, we consider the split fixed point problem for a finite family of quasi-nonexpansive mappings.
Lemma 2.1
Let $T:H\rightarrow H$ be a quasi-nonexpansive mapping, and set $T_{\alpha}:=(1-\alpha)I+\alpha T$ for $\alpha\in(0,1]$. Then $\Vert T_{\alpha}x-p\Vert\leq\Vert x-p\Vert-\alpha(1-\alpha)\Vert T x-x\Vert$, $p\in F(T)$ and $x\in H$. Moreover, $F(T_{\alpha})=F(T)$.
Let $T_{1}, T_{2}:H\rightarrow H$ be two quasi-nonexpansive mappings and set $S_{\xi_{1}}:=(1-\xi_{1})I+\xi_{1}T_{1}$ and $S_{\xi_{2}}:=(1-\xi_{2})I+\xi_{2}T_{2}$ for $\xi_{1}, \xi_{2}\in(0,1)$. Again let $S=\tau S_{\xi_{1}}+(1-\tau)S_{\xi_{2}}$ for $\tau\in(0,1)$. Then S is a quasi-nonexpansive mapping, and $F(S)=\bigcap_{i=1}^{2}F(S_{\xi_{i}})=\bigcap_{i=1}^{2}F(T_{i})$.
(1) It is easy to verify that $\bigcap_{i=1}^{2}F(S_{\xi_{i}})=\bigcap_{i=1}^{2}F(T_{i})$. We only need to prove $F(S)=\bigcap_{i=1}^{2}F(S_{\xi_{i}})$. Clearly, $\bigcap_{i=1}^{2}F(S_{\xi_{i}})\subset F(S)$. On the other hand, for $p\in F(S)$ and $p_{1}\in\bigcap_{i=1}^{2}F(S_{\xi_{i}})$, we have
$$\begin{aligned} \Vert p-p_{1}\Vert ^{2} =&\bigl\Vert \tau S_{\xi_{1}}p+(1-\tau)S_{\xi_{2}} p-p_{1}\bigr\Vert ^{2}=\bigl\Vert \tau( S_{\xi_{1}}p-p_{1})+(1-\tau) (S_{\xi_{2}} p-p_{1})\bigr\Vert ^{2} \\ =&\tau \Vert S_{\xi_{1}}p-p_{1}\Vert ^{2}+(1-\tau) \Vert S_{\xi_{2}} p-p_{1}\Vert ^{2}-\tau(1-\tau) \Vert S_{\xi_{1}}p-S_{\xi_{2}} p\Vert ^{2} \\ \leq&\tau \Vert p-p_{1}\Vert ^{2}-\tau \xi_{1}(1-\xi_{1})\Vert T_{1}p-p\Vert ^{2}+(1-\tau)\Vert p-p_{1}\Vert ^{2} \\ &{}-(1-\tau) \xi_{2}(1-\xi _{2})\Vert T_{2} p-p \Vert ^{2}\quad \text{(by Lemma 2.1)} \\ =& \Vert p-p_{1}\Vert ^{2}-\tau\xi_{1}(1- \xi_{1})\Vert T_{1}p-p\Vert ^{2}-(1-\tau) \xi_{2}(1-\xi_{2})\Vert T_{2} p-p \Vert ^{2}, \end{aligned}$$
which yields $\Vert T_{1}p-p\Vert=\Vert T_{2} p-p \Vert=0$, namely, $p\in\bigcap_{i=1}^{2}F(T_{i})= \bigcap_{i=1}^{2}F(S_{\xi_{i}})$. So, $F(S)=\bigcap_{i=1}^{2}F(S_{\xi_{i}})$.
(2) Let $x\in H$ and $p\in F(S)$. Then
$$\begin{aligned} \Vert Sx-p\Vert =&\bigl\Vert \tau S_{\xi_{1}}x+(1- \tau)S_{\xi_{2}} x-p\bigr\Vert =\bigl\Vert \tau( S_{\xi_{1}}x-p)+(1- \tau) (S_{\xi_{2}} x-p)\bigr\Vert \\ \leq&\tau \Vert x-p\Vert +(1-\tau)\Vert x-p \Vert =\Vert x-p \Vert \quad \text{(by Lemma 2.1)}. \end{aligned}$$
So, S is a quasi-nonexpansive mapping. The proof is completed. □
Let $T_{1}, T_{2},\ldots, T_{k}:H\rightarrow H$ be k quasi-nonexpansive mappings and set $S=\sum_{i=1}^{k}\tau_{i} S_{\xi_{i}}$, where $\tau_{i}\in(0,1)$ satisfies $\sum_{i=1}^{k}\tau_{i} =1$, $S_{\xi_{i}}:=(1-\xi_{i})I+\xi_{i}T_{i}$ for $\xi_{i}\in(0,1)$, $i=1,2,\ldots,k$. Then S is a quasi-nonexpansive mapping, and $F(S)=\bigcap_{i=1}^{k}F(S_{\xi_{i}})=\bigcap_{i=1}^{k}F(T_{i})$.
Using mathematical induction, Lemma 2.3 is obtained by Lemma 2.2. □
Let $H_{1}$ and $H_{2}$ be two real Hilbert spaces. C is a nonempty closed convex subset of $H_{1}$ and K a nonempty closed convex subset of $H_{2}$. $T_{1},\ldots, T_{k}: C\rightarrow H_{1}$ are k quasi-nonexpansive mappings with $\bigcap_{i=1}^{k}F(T_{i})\neq\emptyset$. $G_{1},\ldots, G_{l}:H_{2}\rightarrow H_{2}$ are l quasi-nonexpansive mappings with $\bigcap_{j=1}^{l}F(G_{j})\neq\emptyset$. $A: H_{1}\rightarrow H_{2}$ is a bounded linear operator. Assume that $T_{i}-I$ ($i=1,2,\ldots,k$) and $G_{j}-I$ ($j=1,2,\ldots, l$) are demiclosed at θ. Let $x_{0}\in C$, $C_{0}=C$, and $\{x_{n}\}$ be a sequence generated in the following manner:
$$ \textstyle \left \{ \begin{array}{l} y_{n}=\alpha_{n}z_{n}+(1-\alpha_{n})\sum_{i=1}^{k}\tau_{i} T_{\xi _{i}}z_{n}, \\ z_{n}=P_{C}(x_{n}+\lambda A^{*}(\sum_{i=1}^{l}\varepsilon_{j} G_{\theta_{j}} -I)Ax_{n}), \\ C_{n+1}=\{v\in C_{n}:\Vert y_{n}-v\Vert\leq\Vert z_{n}-v\Vert\leq\Vert x_{n}-v\Vert\}, \\ x_{n+1}=P_{C_{n+1}}(x_{0}),\quad \forall n\in \mathbb{N}\cup\{0\},\end{array} \right . $$
where P is a projection operator and $A^{*}$ denotes the adjoint of A, $\{\alpha_{n}\}\subset(0,\eta]\subset(0,1)$, $\lambda\in(0,\frac{1}{\|A^{*}\|^{2}})$. $\tau_{i}\in(0,1)$ and $\varepsilon_{j}\in(0,1)$ satisfy $\sum_{i=1}^{k}\tau_{i} =1$ and $\sum_{j=1}^{l}\varepsilon_{j} =1$, $T_{\xi _{i}}:=(1-\xi_{i})I+\xi_{i}T_{i}$ for $\xi_{i}\in(0,1)$, $i=1,2,\ldots,k$, $G_{\theta_{j}}:=(1-\theta_{j})I+\theta_{j}G_{j}$ for $\theta_{j}\in(0,1)$, $j=1,2,\ldots,l$. Assume that $\Gamma=\{p\in\bigcap_{i=1}^{k}F(T_{i}): Ap\in\bigcap_{j=1}^{l}F(G_{j})\}\neq\emptyset$, then the sequence $\{x_{n}\}$ converges strongly to an element $q\in\Gamma$.
Let $T=\sum_{i=1}^{k}\tau_{i} T_{\xi_{i}} $, $S=\sum_{i=1}^{l}\varepsilon_{j} G_{\theta_{j}}$, by Lemma 2.3, $F(T)=\bigcap_{i=1}^{k}F(T_{i})\neq\emptyset$, and $F(S)=\bigcap_{j=1}^{l}F(G_{j})\neq\emptyset$. Moreover, T and S are quasi-nonexpansive mappings.
Next, we want to prove $T-I$ and $S-I$ are demiclosed at θ. By the hypothesis, $T_{i}-I$ ($i=1,2,\ldots,k$) and $G_{j}-I$ ($j=1,2,\ldots, l$) are demiclosed at θ. So, $T_{\xi_{i}}-I= \xi_{i}(T_{i}-I)$ and $G_{\theta_{j}}-I=\theta_{j}(G_{j}-I)$ are demiclosed at θ, and that $T-I=\sum_{i=1}^{k}\tau_{i} (T_{\xi_{i}}-I)$ and $S-I=\sum_{i=1}^{l}\varepsilon_{j} (G_{\theta_{j}}-I)$ are demiclosed at θ.
Thus, by Theorem 2.1, we obtain the desired result. The proof is completed. □
If $C=H_{1}$ in Theorem 2.1 and Theorem 2.2, then we have the following corollaries.
Corollary 2.1
Let $H_{1}$ and $H_{2}$ be two real Hilbert spaces. $T_{1}: H_{1}\rightarrow H_{1}$ and $T_{2}:H_{2}\rightarrow H_{2}$ are two quasi-nonexpansive mappings with $F(T_{1})\neq\emptyset$ and $F(T_{2})\neq\emptyset$. $A: H_{1}\rightarrow H_{2}$ is a bounded linear operator. Assume that $T_{1} -I$ and $T_{2}-I$ are demiclosed at θ. Let $x_{0}\in H_{1}$, $C_{0}=H_{1}$, and $\{x_{n}\}$ be a sequence generated in the following manner:
$$ \left \{ \begin{array}{l} y_{n}=\alpha_{n}z_{n}+(1-\alpha_{n})T_{1}z_{n}, \\ z_{n}= x_{n}+\lambda A^{*}(T_{2}Ax_{n}-Ax_{n}) , \\ C_{n+1}=\{v\in C_{n}:\Vert y_{n}-v\Vert\leq\Vert z_{n}-v\Vert\leq\Vert x_{n}-v\Vert\}, \\ x_{n+1}=P_{C_{n+1}}(x_{0}),\quad \forall n\in \mathbb{N}\cup\{0\},\end{array} \right . $$
where P is a projection operator and $A^{*}$ denotes the adjoint of A, $\{\alpha_{n}\}\subset (0,\eta]\subset(0,1)$, $\lambda\in(0,\frac{1}{\|A^{*}\|^{2}})$. Assume that $\Gamma=\{p\in F(T_{1}): Ap\in F(T_{2})\}\neq\emptyset$, then the sequence $\{x_{n}\}$ converges strongly to an element $x^{*}\in\Gamma$.
Let $H_{1}$ and $H_{2}$ be two real Hilbert spaces. $T_{1},\ldots, T_{k}: H_{1}\rightarrow H_{1}$ are k quasi-nonexpansive mappings with $\bigcap_{i=1}^{k}F(T_{i})\neq\emptyset$. $G_{1},\ldots, G_{l}: H_{2}\rightarrow H_{2}$ are l quasi-nonexpansive mappings with $\bigcap_{j=1}^{l}F(G_{j})\neq\emptyset$. $A: H_{1}\rightarrow H_{2}$ is a bounded linear operator. Assume that $T_{i}-I$ ($i=1,2,\ldots,k$) and $G_{j}-I$ ($j=1,2,\ldots, l$) are demiclosed at θ. Let $x_{0}\in H_{1}$, $C_{0}=H_{1}$, and $\{x_{n}\}$ be a sequence generated in the following manner:
$$ \textstyle \left \{ \begin{array}{l} y_{n}=\alpha_{n}z_{n}+(1-\alpha_{n})\sum_{i=1}^{k}\tau_{i} T_{\xi _{i}}z_{n}, \\ z_{n}= x_{n}+\lambda A^{*}(\sum_{i=1}^{k}\tau_{i} G_{\theta_{j}}Ax_{n}-Ax_{n}) , \\ C_{n+1}=\{v\in C_{n}:\Vert y_{n}-v\Vert\leq\Vert z_{n}-v\Vert\leq\Vert x_{n}-v\Vert\}, \\ x_{n+1}=P_{C_{n+1}}(x_{0}),\quad \forall n\in \mathbb{N}\cup\{0\},\end{array} \right . $$
where P is a projection operator and $A^{*}$ denotes the adjoint of A, $\{\alpha_{n}\}\subset (0,\eta]\subset(0,1)$, $\lambda\in(0,\frac{1}{\|A^{*}\|^{2}})$. Here $\tau_{i}\in(0,1)$ and $\varepsilon_{j}\in(0,1)$ satisfy $\sum_{i=1}^{k}\tau_{i} =1$ and $\sum_{j=1}^{l}\varepsilon_{j} =1$, $T_{\xi _{i}}:=(1-\xi_{i})I+\xi_{i}T_{i}$ for $\xi_{i}\in(0,1)$, $i=1,2,\ldots,k$, $G_{\theta_{j}}:=(1-\theta_{j})I+\theta_{j}G_{j}$ for $\theta_{j}\in(0,1)$, $j=1,2,\ldots,l$. Assume that $\Gamma=\{p\in\bigcap_{i=1}^{k}F(T_{i}): Ap\in\bigcap_{j=1}^{l}F(G_{j})\}\neq\emptyset$, then the sequence $\{x_{n}\}$ converges strongly to an element $q\in\Gamma$.
If $H_{1}=H_{2}:=H$ and A is an identity operator, then we have the following results by Theorems 2.1 and 2.2, respectively.
Let H be a real Hilbert space. C is a nonempty closed convex subset of H. $T_{1}: C\rightarrow H$ and $T_{2}:H\rightarrow H$ are two quasi-nonexpansive mappings with $\Gamma:=F(T_{1})\cap F(T_{2})\neq \emptyset$. Assume that $T_{1} -I$ and $T_{2}-I$ are demiclosed at θ. Let $x_{0}\in C$, $C_{0}=C$, and $\{x_{n}\}$ be a sequence generated in the following manner:
$$ \left \{ \begin{array}{l} y_{n}=\alpha_{n}z_{n}+(1-\alpha_{n})T_{1}z_{n}, \\ z_{n}=P_{C}((1-\lambda)x_{n}+\lambda T_{2}x_{n}), \\ C_{n+1}=\{x\in C_{n}:\Vert y_{n}-x\Vert\leq\Vert z_{n}-x\Vert\leq\Vert x_{n}-x\Vert\}, \\ x_{n+1}=P_{C_{n+1}}(x_{0}),\quad \forall n\in \mathbb{N} \cup\{0\},\end{array} \right . $$
where P is a projection operator. $\{\alpha_{n}\}\subset(0,\eta ]\subset(0,1)$, $\lambda\in(0,1)$. Then $x_{n} \rightarrow x^{*}\in \Gamma$.
Let H be a real Hilbert space. C is a nonempty closed convex subset of H. $T_{1},\ldots, T_{k}: C\rightarrow H$ are k quasi-nonexpansive mappings with $\bigcap_{i=1}^{k}F(T_{i})\neq\emptyset$. $G_{1},\ldots, G_{l}:H\rightarrow H$ are l quasi-nonexpansive mappings with $\bigcap_{j=1}^{l}F(G_{j})\neq\emptyset$. Assume that $T_{i}-I$ ($i=1,2,\ldots,k$) and $G_{j}-I$ ($j=1,2,\ldots, l$) are demiclosed at θ. Let $x_{0}\in C$, $C_{0}=C$, and $\{x_{n}\}$ be a sequence generated in the following manner:
$$ \textstyle \left \{ \begin{array}{l} y_{n}=\alpha_{n}z_{n}+(1-\alpha_{n})\sum_{i=1}^{k}\tau_{i} T_{\xi _{i}}z_{n}, \\ z_{n}=P_{C}((1-\lambda)x_{n}+\lambda \sum_{i=1}^{l}\varepsilon_{j} G_{\theta _{j}} x_{n}), \\ C_{n+1}=\{v\in C_{n}:\Vert y_{n}-v\Vert\leq\Vert z_{n}-v\Vert\leq\Vert x_{n}-v\Vert\}, \\ x_{n+1}=P_{C_{n+1}}(x_{0}),\quad \forall n\in \mathbb{N}\cup\{0\} ,\end{array} \right . $$
where P is a projection operator. $\{\alpha_{n}\}\subset(0,\eta ]\subset(0,1)$, $\lambda\in(0,1)$. $\tau_{i}\in(0,1)$ and $\varepsilon_{j}\in(0,1)$ satisfy $\sum_{i=1}^{k}\tau_{i} =1$ and $\sum_{j=1}^{l}\varepsilon_{j} =1$, $T_{\xi _{i}}:=(1-\xi_{i})I+\xi_{i}T_{i}$ for $\xi_{i}\in(0,1)$, $i=1,2,\ldots,k$, $G_{\theta_{j}}:=(1-\theta_{j})I+\theta_{j}G_{j}$ for $\theta_{j}\in(0,1)$, $j=1,2,\ldots,l$. Assume that $\Gamma:=(\bigcap_{i=1}^{k}F(T_{i}))\cap( \bigcap_{j=1}^{l}F(G_{j}))\neq\emptyset$, then the sequence $\{x_{n}\}$ converges strongly to an element $q\in\Gamma$.
If $C=H:=H_{1}=H_{2}$ and A is an identity operator, then we have the following results by Corollaries 2.3 and 2.4, respectively.
Let H be a real Hilbert space. $T_{1}, T_{2}: H\rightarrow H$ are two quasi-nonexpansive mappings with $\Gamma:=F(T_{1})\cap F(T_{2})\neq \emptyset$. Assume that $T_{1} -I$ and $T_{2}-I$ are demiclosed at θ. Let $x_{0}\in C$, $C_{0}=C$, and $\{x_{n}\}$ be a sequence generated in the following manner:
$$ \left \{ \begin{array}{l} y_{n}=\alpha_{n}z_{n}+(1-\alpha_{n})T_{1}z_{n}, \\ z_{n}=(1-\lambda)x_{n}+\lambda T_{2}x_{n}, \\ C_{n+1}=\{x\in C_{n}:\Vert y_{n}-x\Vert\leq\Vert z_{n}-x\Vert\leq\Vert x_{n}-x\Vert\}, \\ x_{n+1}=P_{C_{n+1}}(x_{0}),\quad \forall n\in \mathbb{N} \cup\{0\},\end{array} \right . $$
Let H be a real Hilbert space. $T_{1},\ldots, T_{k}: H\rightarrow H$ are k quasi-nonexpansive mappings with $\bigcap_{i=1}^{k}F(T_{i})\neq\emptyset$. $G_{1},\ldots, G_{l}:H\rightarrow H$ are l quasi-nonexpansive mappings with $\bigcap_{j=1}^{l}F(G_{j})\neq\emptyset$. Assume that $T_{i}-I$ ($i=1,2,\ldots,k$) and $G_{j}-I$ ($j=1,2,\ldots, l$) are demiclosed at θ. Let $x_{0}\in C$, $C_{0}=C$, and $\{x_{n}\}$ be a sequence generated in the following manner:
$$ \textstyle \left \{ \begin{array}{l} y_{n}=\alpha_{n}z_{n}+(1-\alpha_{n})\sum_{i=1}^{k}\tau_{i} T_{\xi _{i}}z_{n}, \\ z_{n}=(1-\lambda)x_{n}+\lambda \sum_{i=1}^{l}\varepsilon_{j} G_{\theta_{j}} x_{n}, \\ C_{n+1}=\{v\in C_{n}:\Vert y_{n}-v\Vert\leq\Vert z_{n}-v\Vert\leq\Vert x_{n}-v\Vert\}, \\ x_{n+1}=P_{C_{n+1}}(x_{0}),\quad \forall n\in \mathbb{N}\cup\{0\} ,\end{array} \right . $$
The coefficient condition that $\{\alpha_{n}\}\subset(\delta,1-\delta)$ for a small enough $\delta>0$ in Theorem M is replaced with $\{\alpha_{n}\}\subset(0,\eta]\subset (0,1)$. This shows we can let $\alpha_{n}=\frac{1}{n+1}$ in this paper, which is a natural choice.
Further generalization of the problem (1.1)
In Section 2, we gave a strong convergence algorithm for the problem (1.1). By the algorithm, we also considered the split solution problem for two finite families of quasi-nonexpansive mappings; see the algorithm (2.10). However, the algorithm (2.10) has an obvious drawback, in that the algorithm (2.10) will be invalid for two countable families of quasi-nonexpansive mappings. So, in this section, we introduce an algorithm for the split solution problem of two countable families of quasi-nonexpansive mappings. The following lemma can be found in [10].
The unique solutions to the positive integer equation
$$ n=i+\frac{(m-1)m}{2},\quad m \geq i, n=1,2,3,\ldots $$
$$ i=n-\frac{(m-1)m}{2},\quad m=- \biggl[\frac{1}{2}-\sqrt {2n+ \frac{1}{2}} \biggr]\geq i, n=1,2,3,\ldots, $$
where $[x]$ denotes the maximal integer that is not larger than x.
Let $H_{1}$ and $H_{2}$ be two real Hilbert spaces. C is a nonempty closed convex subset of $H_{1}$. $A: H_{1}\rightarrow H_{2}$ is a bounded linear operator. $\{T_{i}\}_{i=1}^{\infty}: C\rightarrow H_{1}$ and $\{G_{i}\}_{i=1}^{\infty}: H_{2}\rightarrow H_{2}$ are two countable families of quasi-nonexpansive mappings with $\Gamma=\{p\in\bigcap_{i=1}^{\infty}F(T_{i}): Ap\in\bigcap_{j=1}^{\infty}F(G_{j})\}\neq\emptyset$. Assume that $T_{i}-I$ ($i=1,2,\ldots $) and $G_{j}-I$ ($j=1,2,\ldots $) are demiclosed at θ. Let $x_{0}\in C$, $C_{0}=C$, and $\{x_{n}\}$ be a sequence generated in the following manner:
$$ \left \{ \begin{array}{l} y_{n}=\alpha_{n}z_{n}+(1-\alpha_{n})T_{i_{n}}z_{n}, \\ z_{n}=P_{C}(x_{n}+\lambda A^{*}( G_{i_{n}}-I)Ax_{n}), \\ C_{n+1}=\{v\in C_{n}:\Vert y_{n}-v\Vert\leq\Vert z_{n}-v\Vert\leq\Vert x_{n}-v\Vert\}, \\ x_{n+1}=P_{C_{n+1}}(x_{0}),\quad \forall n\in \mathbb{N}\cup\{0\} ,\end{array} \right . $$
where P is a projection operator and $A^{*}$ denotes the adjoint of A, $\{\alpha_{n}\}\subset(0,\eta]\subset(0,1)$, $\lambda\in(0,\frac{1}{\|A^{*}\|^{2}})$. $i_{n}$ satisfies (3.1), i.e. $i_{n}=n-\frac{(m-1)m}{2}$ and $m\geq i_{n}$ for $n=1,2,\ldots $ . Then the sequence $\{x_{n}\}$ converges strongly to an element $q\in\Gamma$.
Just like the proof in Theorem 2.1, we can obtain the following facts (I)-(IV):
(I) For $p\in\Gamma$,
$$\begin{aligned}& 2\lambda\bigl\langle x_{n}-p, A^{*}(G_{i_{n}}-I)Ax_{n} \bigr\rangle \leq-\lambda\bigl\Vert (G_{i_{n}}-I)Ax_{n} \bigr\Vert ^{2}, \end{aligned}$$
$$\begin{aligned}& \Vert z_{n}-p\Vert^{2} \leq\Vert x_{n}-p \Vert^{2}-\lambda\bigl(1-\lambda\bigl\Vert A^{*}\bigr\Vert ^{2} \bigr)\bigl\Vert (G_{i_{n}}-I)Ax_{n} \bigr\Vert ^{2} \end{aligned}$$
(II) We have $\Gamma\subset C_{n}$ for $n\in \mathbb{N}\cup\{0\}$. $C_{n}$ is also closed and convex for $n\in\mathbb{N}\cup\{0\}$.
(III) $\{x_{n}\}$ is a Cauchy sequence and
$$ \lim_{n\rightarrow\infty}\Vert z_{n}-x_{n}\Vert=\lim _{n\rightarrow \infty}\Vert y_{n}-x_{n}\Vert=\lim _{n\rightarrow\infty}\Vert y_{n}-z_{n}\Vert=0. $$
$$ \lim_{n\rightarrow\infty}\bigl\Vert (T_{i_{n}}-I)z_{n} \bigr\Vert =0, \qquad \lim_{n\rightarrow\infty}\bigl\Vert (G_{i_{n}}-I)Ax_{n} \bigr\Vert =0. $$
Now, for each $i\in\mathbb{N}$, set $K_{i}=\{k\geq1: k=i+\frac {(m-1)m}{2}, m\geq i,m\in \mathbb{N}\}$. Since $n=i_{n}+\frac {(m-1)m}{2}$, $m\geq i_{n}$, and $m\in\mathbb{N} $ for $n=1,2,\ldots $ , and the definition of $K_{i}$, we have $i_{k}\equiv i$ for $k\in K_{i}$. Obviously, $\{k\}$ is a subsequence of $\{n\}$. Thus, for $k\in K_{i}$ and $i\in\mathbb{N}$, it follows from (3.8) that
$$ \begin{aligned} &\lim_{k\rightarrow\infty}\bigl\Vert (T_{i}-I)z_{k} \bigr\Vert =\lim_{k\rightarrow\infty}\bigl\Vert (T_{i_{k}}-I)z_{k} \bigr\Vert =0, \\ &\lim_{k\rightarrow\infty}\bigl\Vert (G_{i}-I)Ax_{k} \bigr\Vert =\lim_{k\rightarrow\infty} \bigl\Vert (G_{i_{k}}-I)Ax_{k} \bigr\Vert =0. \end{aligned} $$
Let $x_{n}\rightarrow x^{*}$. From (3.7) we have $z_{n}\rightarrow x^{*}$. By (3.9), we obtain $x^{*}\in F(T_{i})$.
Next, we want to prove $Ax^{*}\in F(G_{i})$. Since A is a bounded linear operator, $\Vert Ax_{n}-Ax^{*}\Vert\rightarrow0$ by $x_{n}\rightarrow x^{*}$. Together with $\Vert(G_{i}-I)Ax_{k} \Vert\rightarrow0$, we have $Ax_{n}\rightarrow Ax^{*}\in F(G_{i})$. Thus, $x^{*}\in \Gamma$ and $\{x_{n}\}$ converges strongly to $x^{*}\in\Gamma$. The proof is completed. □
If $C=H_{1}$, then we have the following result by Theorem 3.1.
Let $H_{1}$ and $H_{2}$ be two real Hilbert spaces. $A: H_{1}\rightarrow H_{2}$ is a bounded linear operator. $\{T_{i}\}_{i=1}^{\infty}: H_{1}\rightarrow H_{1}$ and $\{G_{i}\}_{i=1}^{\infty}: H_{2}\rightarrow H_{2}$ are two countable families of quasi-nonexpansive mappings with $\Gamma=\{p\in\bigcap_{i=1}^{\infty}F(T_{i}): Ap\in\bigcap_{j=1}^{\infty}F(G_{j})\}\neq\emptyset$. Assume that $T_{i}-I$ ($i=1,2,\ldots $) and $G_{j}-I$ ($j=1,2,\ldots $) are demiclosed at θ. Let $x_{0}\in C$, $C_{0}=H_{1}$, and $\{x_{n}\}$ be a sequence generated in the following manner:
$$ \left \{ \begin{array}{l} y_{n}=\alpha_{n}z_{n}+(1-\alpha_{n})T_{i_{n}}z_{n}, \\ z_{n}=x_{n}+\lambda A^{*}( G_{i_{n}}-I)Ax_{n}, \\ C_{n+1}=\{v\in C_{n}:\Vert y_{n}-v\Vert\leq\Vert z_{n}-v\Vert\leq\Vert x_{n}-v\Vert\}, \\ x_{n+1}=P_{C_{n+1}}(x_{0}),\quad \forall n\in \mathbb{N}\cup\{0\} ,\end{array} \right . $$
If $H_{1}=H_{2}:=H$ and A is an identity operator, then we have the following results by Theorem 3.1 and Corollary 3.1, respectively.
Let H be a real Hilbert space. C is a nonempty closed convex subset of H. $\{T_{i}\}_{i=1}^{\infty}: C\rightarrow H$ and $\{G_{i}\}_{i=1}^{\infty}: H\rightarrow H$ are two countable families of quasi-nonexpansive mappings with $\Gamma:= (\bigcap_{i=1}^{\infty}F(T_{i}))\cap( \bigcap_{j=1}^{\infty}F(G_{j}))\neq\emptyset$. Assume that $T_{i}-I$ ($i=1,2,\ldots $) and $G_{j}-I$ ($j=1,2,\ldots $) are demiclosed at θ. Let $x_{0}\in C$, $C_{0}=C$, and $\{x_{n}\}$ be a sequence generated in the following manner:
$$ \left \{ \begin{array}{l} y_{n}=\alpha_{n}z_{n}+(1-\alpha_{n})T_{i_{n}}z_{n}, \\ z_{n}=P_{C}((1-\lambda)x_{n}+\lambda G_{i_{n}}x_{n}), \\ C_{n+1}=\{v\in C_{n}:\Vert y_{n}-v\Vert\leq\Vert z_{n}-v\Vert\leq\Vert x_{n}-v\Vert\}, \\ x_{n+1}=P_{C_{n+1}}(x_{0}),\quad \forall n\in \mathbb{N}\cup\{0\} ,\end{array} \right . $$
where P is a projection operator. $\{\alpha_{n}\}\subset(0,\eta ]\subset(0,1)$, $\lambda\in(0,1)$. $i_{n}$ satisfies (3.1), i.e. $i_{n}=n-\frac{(m-1)m}{2}$ and $m\geq i_{n}$ for $n=1,2,\ldots $ . Then the sequence $\{x_{n}\}$ converges strongly to an element $q\in\Gamma$.
Let H be a real Hilbert space. $\{T_{i}\}_{i=1}^{\infty}: H\rightarrow H$ and $\{G_{i}\}_{i=1}^{\infty}: H\rightarrow H$ are two countable families of quasi-nonexpansive mappings with $\Gamma=\{p\in(\bigcap_{i=1}^{\infty}F(T_{i}))\cap( \bigcap_{j=1}^{\infty}F(G_{j}))\}\neq\emptyset$. Assume that $T_{i}-I$ ($i=1,2,\ldots $) and $G_{j}-I$ ($j=1,2,\ldots $) are demiclosed at θ. Let $x_{0}\in H$, $C_{0}=H$, and $\{x_{n}\}$ be a sequence generated in the following manner:
$$ \left \{ \begin{array}{l} y_{n}=\alpha_{n}z_{n}+(1-\alpha_{n})T_{i_{n}}z_{n}, \\ z_{n}=(1-\lambda)x_{n}+\lambda G_{i_{n}}x_{n}, \\ C_{n+1}=\{v\in C_{n}:\Vert y_{n}-v\Vert\leq\Vert z_{n}-v\Vert\leq\Vert x_{n}-v\Vert\}, \\ x_{n+1}=P_{C_{n+1}}(x_{0}),\quad \forall n\in \mathbb{N}\cup\{0\} ,\end{array} \right . $$
We give strong convergence algorithms for the split common fixed point problem of quasi-nonexpansive mappings. Our results improve and generalize some well-known results in [3, 11] and so on.
Although Theorem 3.1 gives a strong convergence algorithm for two countable families of quasi-nonexpansive mappings, the condition that each mapping must be demiclosed at θ is very strong. In addition, we guess the speed of convergence is not too fast for the algorithm (3.3). Therefore, the algorithm (3.3) should be improved further in the future.
The split common solution problem is a very interesting topic. It has received attention by many scholars. Many research articles have been published, for example, [12–21] and references therein.
Lin, L-J, Chuang, C-S, Yu, Z-T: Fixed point theorems for some new nonlinear mappings in Hilbert spaces. Fixed Point Theory Appl. 2011, 51 (2011)
Censor, Y, Segal, A: The split common fixed point problem for directed operators. J. Convex Anal. 16, 587-600 (2009)
Moudafi, A: A note on the split common fixed-point problem for quasi-nonexpansive operators. Nonlinear Anal. 74, 4083-4087 (2011)
He, Z, Du, W-S: Nonlinear algorithms approach to split common solution problems. Fixed Point Theory Appl. 2012, 130 (2012)
Du, W-S, He, Z: Feasible iterative algorithms for split common solution problems. J. Nonlinear Convex Anal. (in press)
He, Z: The split equilibrium problem and its convergence algorithms. J. Inequal. Appl. 2012, 162 (2012)
He, Z, Du, W-S: Viscosity iterative schemes for finding split common solutions of variational inequalities and fixed point problems. Abstr. Appl. Anal. 2012, Article ID 470354 (2012)
He, Z, Du, W-S: On hybrid split problem and its nonlinear algorithms. Fixed Point Theory Appl. 2013, 47 (2013)
Byrne, C, Censor, Y, Gibali, A, Reich, S: The split common null point problem. J. Nonlinear Convex Anal. 13, 759-775 (2012)
Deng, W-Q: A new approach to the approximation of common fixed points of an infinite family of relatively quasinonexpansive mappings with applications. Abstr. Appl. Anal. 2012, Article ID 437430 (2012)
Zhao, J, He, S: Strong convergence of the viscosity approximation process for the split common fixed-point problem of quasi-nonexpansive mappings. J. Appl. Math. 2012, Article ID 438023 (2012)
Lin, L-J, Chen, Y-D, Chuang, C-S: Solutions for a variational inclusion problem with applications to multiple sets split feasibility problems. Fixed Point Theory Appl. 2013, 333 (2013)
Ansari, QH, Rehan, A: Split feasibility and fixed point problems. In: Nonlinear Analysis: Approximation Theory, Optimization and Applications, pp. 281-322. Birkhäuser, New Delhi (2014)
Yang, Q: The relaxed CQ algorithm for solving the split feasibility problem. Inverse Probl. 20, 1261-1266 (2004)
Yu, X, Shahzad, N, Yao, Y: Implicit and explicit algorithms for solving the split feasibility problem. Optim. Lett. 6, 1447-1462 (2012)
López, G, Martín-Márquez, V, Wang, F, Xu, H: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 28(8), 085004 (2012)
Tseng, P: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431-446 (2000)
Bauschke, HH: A note on the paper by Eckstein and Svaiteron on 'General projective splitting methods for sums of maximal monotone operators'. SIAM J. Control Optim. 48, 2513-2515 (2009)
Qin, X, Cho, SY, Wang, L: Convergence of splitting algorithms for the sum of two accretive operators with applications. Fixed Point Theory Appl. 2014, 166 (2014)
Xu, HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 26, 105018 (2010)
Wirojana, N, Jitpeera, T, Kumam, P: The hybrid steepest descent method for solving variational inequality over triple hierarchical problems. J. Inequal. Appl. 2012, 280 (2012)
The Candidate Foundation of Youth Academic Experts at Honghe University (2014HB0206) is acknowledged.
Department of Mathematics and Computer Science, Yunnan University of Nationalities, Kunming, Yunnan, 650500, China
Rong Li
Department of Mathematics, Honghe University, Mengzi, Yunnan, 661199, China
Zhenhua He
Search for Rong Li in:
Search for Zhenhua He in:
Correspondence to Zhenhua He.
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
49M37
split common fixed point
iterative method
strong convergence
quasi-nonexpansive mapping
Thematic series on Nonlinear Analysis and Optimization
Follow SpringerOpen
SpringerOpen Twitter page
SpringerOpen Facebook page | CommonCrawl |
Journal of The Korean Astronomical Society (천문학회지)
The Korean Astronomical Society (한국천문학회)
2288-890X(eISSN)
Earth Science(Earth/Atmosphere/Marine/Astronomy) > Astronomy
The Journal of the Korean Astronomical Society, JKAS, is an international scientific journal publishing papers in all the fields of astronomy and astrophysics (theoretical, observational, and instrumental) with a view to the advancement of the frontier knowledge. Manuscripts are classified into original contributions, proceedings, and reviews.
http://www.kas.org/view/submitpaper.jsp?lang=kor KSCI KCI SCOPUS SCIE
Volume 36 Issue spc1
LONG-TERM SOFT X-RAY VARIABILITY OF ACTIVE GALAXY MRK 841
Kim, Chul-Hee 17
https://doi.org/10.5303/JKAS.2008.41.2.017 PDF KSCI
We present an analysis of the soft X-ray emission of MRK 841 to investigate its long-term variation. The light variation of MRK 841 for three different energy bands of soft, medium, and hard values were studied. The maximum variability with a factor of 5 for about two years was confirmed at all three different bands. The light curves exhibit a gradual variation of brightness. In addition to a gradual variation, the short- term or micro variation was also confirmed with a factor of about two for all three different bands. The light variation of each band did not exhibit a correlation between them, but the flare event is strongest in the soft band. The hardness ratio for hard and soft bands shows irregular variation but there was no correlation between them. It was confirmed that there is a gradual decrease of the photon index. Results of our analysis are discussed within the framework of the accretion disk phenomenon.
CHEMICAL ABUNDANCES OF THE SYMBIOTIC NOVA AG PEGASI
Kim, Hyouk;Hyung, Siek 23
The high-resolution optical region spectroscopic data of the symbiotic nova AG Peg secured with the Hamilton Echelle Spectrograph at the Lick Observatory, have been analyzed along with the International Ultraviolet Explorer UV archive data. We measure about 700 line intensities in the wavelengths of 3859 to $9230{\AA}$ and identify about 300 lines. We construct pure photoionization models that represent the observed lines and the physical condition for this symbiotic nova. The spectral energy distribution of the ionizing radiation is adopted from stellar model atmospheres. Based on photoionization models, we derive the elemental abundances; C & N appear to be similar to be smaller than the Galactic planetary nebular value while O is enhanced. Our result is compared with the Contini (1997, 2003) who analyzed the UV region spectral data with the shock + ionization model. The Fe abundance appears to be enhanced than that of normal planetary nebulae, which suggests that AG Peg may have formed in the Galactic disk. The models indicate that the temperature of the central star which excite the shell gas may have fluctuated to an unexpected extent during the years 1998 - 2002.
CAPABILITY OF THE FAST IMAGING SOLAR SPECTROGRAPH ON NST/BBSO FOR OBSERVING FILAMENTS/PROMINENCES AT THE SPECTRAL LINES Hα, Ca II 8542, AND Ca II K
Ahn, Kwang-Su;Chae, Jong-Chul;Park, Hyung-Min;Nah, Jak-Young;Park, Young-Deuk;Jang, Bi-Ho;Moon, Yong-Jae 39
Spectral line profiles of filaments/prominences to be observed by the Fast Imaging Solar Spectrograph (FISS) are studied. The main spectral lines of interests are $H{\alpha}$, Ca II 8542, and Ca II K. FISS has a high spectral resolving power of $2{\times}10^5$, and supports simultaneous dual-band recording. This instrument will be installed at the 1.6m New Solar Telescope (NST) of Big Bear Solar Observatory, which has a high spatial resolution of 0.065" at 500nm. Adopting the cloud model of radiative transfer and using the model parameters inferred from pre-existing observations, we have simulated a set of spectral profiles of the lines that are emitted by a filament on the disk or a prominence at the limb. Taking into account the parameters of the instrument, we have estimated the photon count to be recorded by the CCD cameras, the signal-to-noise ratios, and so on. We have also found that FISS is suitable for the study of multi-velocity threads in filaments if the spectral profiles of Ca II lines are recorded together with $H{\alpha}$ lines. | CommonCrawl |
$C^{3}$ : A Command-line Catalogue Cross-matching tool for modern astrophysical survey data (1703.02300)
Giuseppe Riccio, Massimo Brescia, Stefano Cavuoti, Amata Mercurio, Anna Maria Di Giorgio, Sergio Molinari
March 7, 2017 astro-ph.IM
In the current data-driven science era, it is needed that data analysis techniques has to quickly evolve to face with data whose dimensions has increased up to the Petabyte scale. In particular, being modern astrophysics based on multi-wavelength data organized into large catalogues, it is crucial that the astronomical catalog cross-matching methods, strongly dependant from the catalogues size, must ensure efficiency, reliability and scalability. Furthermore, multi-band data are archived and reduced in different ways, so that the resulting catalogues may differ each other in formats, resolution, data structure, etc, thus requiring the highest generality of cross-matching features. We present $C^{3}$ (Command-line Catalogue Cross-match), a multi-platform application designed to efficiently cross-match massive catalogues from modern surveys. Conceived as a stand-alone command-line process or a module within generic data reduction/analysis pipeline, it provides the maximum flexibility, in terms of portability, configuration, coordinates and cross-matching types, ensuring high performance capabilities by using a multi-core parallel processing paradigm and a sky partitioning algorithm.
Distance biases in the estimation of the physical properties of Hi-GAL compact sources-I. Clump properties and the identification of high-mass star forming candidates (1701.08035)
Adriano Baldeschi, Davide Elia, Sergio Molinari, Stefano Pezzuto, Eugenio Schisano, Marco Gatti, Andrea Serra, Milena Benedettini, Anna Maria Di Giorgio, John Scige Liu, Manuel Merello
Jan. 27, 2017 astro-ph.GA
The degradation of spatial resolution in star-forming regions observed at large distances ($d\gtrsim1$ kpc) with Herschel,can lead to estimates of the physical parameters of the detected compact sources (clumps) which do not necessarily mirror the properties of the original population of cores. This paper aims at quantifying the bias introduced in the estimation of these parameters by the distance effect. To do so, we consider Herschel maps of nearby star-forming regions taken from the Herschel-Gould-Belt survey, and simulate the effect of increased distance to understand what amount of information is lost when a distant star-forming region is observed with Herschel resolution. In the maps displaced to different distances we extract compact sources, and we derive their physical parameters as if they were original Hi-GAL maps of the extracted source samples. In this way, we are able to discuss how the main physical properties change with distance. In particular, we discuss the ability of clumps to form massive stars: we estimate the fraction of distant sources that are classified as high-mass stars-forming objects due to their position in the mass vs radius diagram, that are only "false positives". We give also a threshold for high-mass star-formation $M>1282 \ \left(\frac{r}{[\mathrm{pc}]}\right)^{1.42} M_{\odot}$. In conclusion, this paper provides the astronomer dealing with Herschel maps of distant star-forming regions with a set of prescriptions to partially recover the character of the core population in unresolved clumps.
C3, A Command-line Catalogue Cross-match tool for large astrophysical catalogues (1611.04431)
Nov. 30, 2016 astro-ph.IM
Modern Astrophysics is based on multi-wavelength data organized into large and heterogeneous catalogues. Hence, the need for efficient, reliable and scalable catalogue cross-matching methods plays a crucial role in the era of the petabyte scale. Furthermore, multi-band data have often very different angular resolution, requiring the highest generality of cross-matching features, mainly in terms of region shape and resolution. In this work we present $C^{3}$ (Command-line Catalogue Cross-match), a multi-platform application designed to efficiently cross-match massive catalogues. It is based on a multi-core parallel processing paradigm and conceived to be executed as a stand-alone command-line process or integrated within any generic data reduction/analysis pipeline, providing the maximum flexibility to the end-user, in terms of portability, parameter configuration, catalogue formats, angular resolution, region shapes, coordinate units and cross-matching types. Using real data, extracted from public surveys, we discuss the cross-matching capabilities and computing time efficiency also through a direct comparison with some publicly available tools, chosen among the most used within the community, and representative of different interface paradigms. We verified that the $C^{3}$ tool has excellent capabilities to perform an efficient and reliable cross-matching between large datasets. Although the elliptical cross-match and the parametric handling of angular orientation and offset are known concepts in the astrophysical context, their availability in the presented command-line tool makes $C^{3}$ competitive in the context of public astronomical tools.
A Command-line Cross-matching tool for modern astrophysical pipelines (1611.08494)
The emerging need for efficient, reliable and scalable astronomical catalog cross-matching is becoming more pressing in the current data-driven science era, where the size of data has rapidly increased up to the Petabyte scale. C3 (Command-line Catalogue Cross-matching) is a multi-platform tool designed to efficiently cross-match massive catalogues from modern astronomical surveys, ensuring high-performance capabilities through the use of a multi-core parallel processing paradigm. The tool has been conceived to be executed as a stand-alone command-line process or integrated within any generic data reduction/analysis pipeline, providing the maximum flexibility to the end user in terms of parameter configuration, coordinates and cross-matching types. In this work we present the architecture and the features of the tool. Moreover, since the modular design of the tool enables an easy customization to specific use cases and requirements, we also present an example of a customized C3 version designed and used in the FP7 project ViaLactea, dedicated to cross-correlating Hi-GAL clumps with multi-band compact sources.
Source clustering in the Hi-GAL survey determined using a minimum spanning tree method (1611.00799)
Maxime Beuret, Nicolas Billot, Laurent Cambrésy, David J. Eden, Davide Elia, Sergio Molinari, Stefano Pezzuto, Eugenio Schisano
Nov. 2, 2016 astro-ph.GA, astro-ph.SR
The aim is to investigate the clustering of the far-infrared sources from the Herschel infrared Galactic Plane Survey (Hi-GAL) in the Galactic longitude range of -71 to 67 deg. These clumps, and their spatial distribution, are an imprint of the original conditions within a molecular cloud, and the analysis produces a catalogue of over-densities. The minimum spanning tree (MST) method was used to identify the over-densities in two dimensions. The catalogue was further refined by folding in heliocentric distances, resulting in more reliable over-densities, which are cluster candidates. We found 1,633 over-densities with more than ten members. Of these, 496 are defined as cluster candidates because of the reliability of the distances, with a further 1,137 potential cluster candidates. The spatial distributions of the cluster candidates are different in the first and fourth quadrants, with all clusters following the spiral structure of the Milky Way. The cluster candidates are fractal. The clump mass functions of the clustered and isolated populations are statistically indistinguishable from each other and are consistent with Kroupa's initial mass function.
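The MST grouping step can be sketched in a few lines with SciPy: build the tree over source positions, prune edges longer than a chosen cut length, and read off the connected components as over-density candidates. The >10-member filter follows the abstract; the cut length, names and units are illustrative, not the paper's values:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

def mst_overdensities(points, cut_length, min_members=10):
    """Group 2-D source positions into over-densities by building the
    minimum spanning tree and pruning edges longer than `cut_length`."""
    dist = csr_matrix(squareform(pdist(points)))   # pairwise separations
    mst = minimum_spanning_tree(dist)
    mst.data[mst.data >= cut_length] = 0           # cut the long edges
    mst.eliminate_zeros()
    n_comp, labels = connected_components(mst, directed=False)
    sizes = np.bincount(labels, minlength=n_comp)
    # keep only groups with more than `min_members` sources, as in the paper
    return [np.flatnonzero(labels == c) for c in range(n_comp)
            if sizes[c] > min_members]
```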
Machine learning based data mining for Milky Way filamentary structures reconstruction (1505.06621)
Giuseppe Riccio, Stefano Cavuoti, Eugenio Schisano, Massimo Brescia, Amata Mercurio, Davide Elia, Milena Benedettini, Stefano Pezzuto, Sergio Molinari, Anna Maria Di Giorgio
Oct. 11, 2016 cs.CV, astro-ph.IM
We present an innovative method called FilExSeC (Filaments Extraction, Selection and Classification), a data mining tool developed to investigate the possibility of refining and optimizing the shape reconstruction of filamentary structures detected with a consolidated method based on flux-derivative analysis, through the column-density maps computed from Herschel infrared Galactic Plane Survey (Hi-GAL) observations of the Galactic plane. The methodology is based on a feature extraction module followed by a machine learning model (Random Forest) dedicated to selecting features and classifying the pixels of the input images. From tests on both simulations and real observations the method appears reliable and robust with respect to the variability of shape and distribution of filaments. In the case of highly defined filament structures, the method is able to bridge the gaps among the detected fragments, thus improving their shape reconstruction. From a preliminary "a posteriori" analysis of derived filament physical parameters, the method appears able to contribute to completing and refining the filament reconstruction.
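As a sketch of the pixel-classification stage (not the FilExSeC code itself, whose feature set is defined in the paper), a Random Forest over per-pixel features looks like this in scikit-learn:

```python
from sklearn.ensemble import RandomForestClassifier

def train_pixel_classifier(X_train, y_train, n_trees=200):
    """X_train: one row of image-derived features per pixel (gradients,
    local statistics, filter responses, ...); the feature names are up
    to the user.  y_train: 1 = filament pixel, 0 = background."""
    clf = RandomForestClassifier(n_estimators=n_trees, n_jobs=-1,
                                 random_state=0)
    clf.fit(X_train, y_train)
    return clf

# refined_mask = train_pixel_classifier(X, y).predict(X_all).reshape(shape)
```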
VIALACTEA knowledge base homogenizing access to Milky Way data (1608.04526)
Marco Molinaro, Robert Butora, Marilena Bandieramonte, Ugo Becciani, Massimo Brescia, Stefano Cavuoti, Alessandro Costa, Anna M. Di Giorgio, Davide Elia, Akos Hajnal, Hermann Gabor, Peter Kacsuk, Scige J. Liu, Sergio Molinari, Giuseppe Riccio, Eugenio Schisano, Eva Sciacca, Riccardo Smareglia, Fabio Vitello
Aug. 16, 2016 astro-ph.IM
The VIALACTEA project has a work package dedicated to Tools and Infrastructure and, inside it, a task for the Database and Virtual Observatory Infrastructure. This task aims at providing an infrastructure to store all the resources needed by the, more purposely, scientific work packages of the project itself. This infrastructure includes a combination of: storage facilities, relational databases and web services on top of them, and has taken, as a whole, the name of VIALACTEA Knowledge Base (VLKB). This contribution illustrates the current status of this VLKB. It details the set of data resources put together; describes the database that allows data discovery through VO inspired metadata maintenance; illustrates the discovery, cutout and access services built on top of the former two for the users to exploit the data content.
Calibration of evolutionary diagnostics in high-mass star formation (1604.06192)
Sergio Molinari (INAF-Istituto di Astrofisica e Planetologia Spaziali, Roma), Riccardo Cesaroni (INAF-Osservatorio Astrofisico di Arcetri, Firenze)
April 21, 2016 astro-ph.GA, astro-ph.SR
The evolutionary classification of massive clumps that are candidate progenitors of high-mass young stars and clusters relies on a variety of independent diagnostics based on observables from the near-infrared to the radio. A promising evolutionary indicator for massive and dense cluster-progenitor clumps is the ratio L/M between the bolometric luminosity and the mass of the clumps. With the aim of providing a quantitative calibration for this indicator we used SEPIA/APEX to obtain observations of CH3C2H(12-11), an excellent thermometer molecule probing densities > 10^5 cm^-3, toward 51 dense clumps with M > 1000 solar masses, uniformly spanning -2 < Log(L/M) < 2.3. We identify three distinct ranges of L/M that can be associated with three distinct phases of star formation in massive clumps. For L/M < 1 no clump is detected in CH3C2H, suggesting an inner envelope temperature below 30 K. For 1 < L/M < 10 we detect 58% of the clumps, with a temperature between 30 and 35 K independently of the exact value of L/M; such clumps are building up luminosity due to the formation of stars, but no star is yet able to significantly heat the inner clump regions. For L/M > 10 we detect all the clumps, with a gas temperature rising with Log(L/M), marking the appearance of a qualitatively different heating source within the clumps; such values are found towards clumps with UCHII counterparts, suggesting that the change in the T vs. L/M behaviour above L/M > 10 is due to the first appearance of ZAMS stars in the clumps.
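The three L/M regimes identified above translate directly into a classification rule; the snippet below simply restates the abstract's thresholds (the labels are ours):

```python
def lm_phase(L, M):
    """Map a clump's L/M ratio (solar units) to the three phases
    identified in the abstract; labels are ours."""
    lm = L / M
    if lm < 1.0:
        return "quiescent: T < 30 K, undetected in CH3C2H"
    if lm <= 10.0:
        return "star-forming: T ~ 30-35 K, stars not yet heating the clump"
    return "evolved: ZAMS stars present, T rises with log(L/M)"
```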
Interactions of the Infrared bubble N4 with the surroundings (1601.01114)
Hong-Li Liu, Jin-Zeng Li, Yuefang Wu, Jing-Hua Yuan, Tie Liu, G. Dubner, S. Paron, M. E. Ortega, Sergio Molinari, Maohai Huang, Annie Zavagno, Manash R. Samal, Ya-Fang Huang, Si-Ju Zhang
The physical mechanisms that induce the transformation of a certain mass of gas into new stars are far from being well understood. Infrared bubbles associated with HII regions have been considered good laboratories for investigating triggered star formation. In this paper we report on the investigation of the dust properties of the infrared bubble N4 around the HII region G11.898+0.747, analyzing its interaction with its surroundings and the star formation histories therein, with the aim of determining the possibility of star formation triggered by the expansion of the bubble. Using Herschel PACS and SPIRE images with a wide wavelength coverage, we reveal the dust properties over the entire bubble. Meanwhile, we are able to identify six dust clumps surrounding the bubble, with a mean size of 0.50 pc, temperature of about 22 K, mean column density of 1.7 $\times10^{22}$ cm$^{-2}$, mean volume density of about 4.4 $\times10^{4}$ cm$^{-3}$, and a mean mass of 320 $M_{\odot}$. In addition, from PAH emission seen at 8 $\mu$m, free-free emission detected at 20 cm, and a probability density function in selected regions, we identify clear signatures of the influence of the HII region on the surroundings. There are hints of star formation, though further investigation is required to demonstrate that N4 is the triggering source.
Advanced Environment for Knowledge Discovery in the VIALACTEA Project (1511.08619)
Ugo Becciani, Marilena Bandieramonte, Massimo Brescia, Robert Butora, Stefano Cavuoti, Alessandro Costa, Anna Maria di Giorgio, Davide Elia, Akos Hajnal, Peter Kacsuk, Scige John Liu, Amata Mercurio, Sergio Molinari, Marco Molinaro, Giuseppe Riccio, Eugenio Schisano, Eva Sciacca, Riccardo Smareglia, Fabio Vitello
Dec. 1, 2015 astro-ph.IM
The VIALACTEA project aims at building a predictive model of star formation in our galaxy. We present the innovative integrated framework and the main technologies and methodologies to reach this ambitious goal.
The Carina Nebula and Gum 31 molecular complex: I. Molecular gas distribution, column densities and dust temperatures (1511.07513)
David Rebolledo, Michael Burton, Anne Green, Catherine Braiding, Sergio Molinari, Graeme Wong, Rebecca Blackwell, Davide Elia, Eugenio Schisano
Nov. 24, 2015 astro-ph.GA
We report high resolution observations of the $^{12}$CO$(1\rightarrow0)$ and $^{13}$CO$(1\rightarrow0)$ molecular lines in the Carina Nebula and the Gum 31 region obtained with the 22-m Mopra telescope as part of The Mopra Southern Galactic Plane CO Survey. We cover 8 deg$^2$ from $l = 285^{\circ}$ to 290$^{\circ}$, and from $b = -1.5^{\circ}$ to +0.5$^{\circ}$. The molecular gas column density distributions from both tracers have a similar range of values. By fitting a grey-body function to the observed infrared spectral energy distribution from Herschel maps, we derive gas column densities and dust temperatures. The gas column density ranges from $6.3\times 10^{20}$ to $1.4\times 10^{23}$ cm$^{-2}$, while the dust temperature ranges from 17 to 43 K. The gas column density derived from the dust emission is approximately described by a log-normal function over a limited range of column densities. A high-column-density tail is clearly evident in the gas column density distribution, which appears to be a common feature in regions with active star formation. There are regional variations in the fraction of the mass recovered by the CO emission lines with respect to the total mass traced by the dust emission. These variations may be related to changes in the radiation field strength, variation of the atomic-to-molecular gas fraction across the observed region, differences in the CO molecule abundance with respect to H$_{2}$, and differences in evolutionary stage of the molecular clouds that compose the Carina Nebula-Gum 31 complex.
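The grey-body fit mentioned above amounts to fitting a modified blackbody to the far-infrared SED pixel by pixel. A minimal optically-thin version is sketched below; the opacity normalization (0.1 cm^2 g^-1 of gas at 1 THz) and mean molecular weight mu = 2.8 are common placeholder choices, not necessarily the values adopted by the authors:

```python
import numpy as np

H, C, K_B = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck_nu(nu, T):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (K_B * T))

def greybody(nu, N_H2, T_dust, beta=2.0):
    """Optically thin grey-body surface brightness for a gas column
    density N_H2 in cm^-2 (placeholder opacity law, see lead-in)."""
    m_h = 1.674e-27                              # hydrogen mass, kg
    sigma = N_H2 * 1.0e4 * 2.8 * m_h             # mass column, kg m^-2
    kappa = 0.1 * 0.1 * (nu / 1.0e12)**beta      # cm^2 g^-1 -> m^2 kg^-1
    return sigma * kappa * planck_nu(nu, T_dust)

# T_dust and N_H2 would then be fit per pixel, e.g. with
# scipy.optimize.curve_fit against the multi-band Herschel fluxes.
```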
Large-scale latitude distortions of the inner Milky Way Disk from the Herschel/Hi-GAL Survey (1511.06300)
Sergio Molinari, Toby Moore, Bruce Swinyard, Milena Benedettini (affiliations include STScI, Baltimore, and Univ. of Calgary)
We use the Herschel Hi-GAL survey data to study the spatial distribution in Galactic longitude and latitude of the interstellar medium and of dense, star-forming clumps in the inner Galaxy. The peak position and width of the latitude distribution of the dust column density, as well as of the number density of compact sources from the band-merged Hi-GAL photometric catalogues, are analysed as a function of longitude. The width of the diffuse dust column density traced by the Hi-GAL 500 um emission varies across the inner Galaxy, with a mean value of 1.2°-1.3°, similar to that of the 250 um Hi-GAL sources. The 70 um Hi-GAL sources define a much thinner disk, with a mean FWHM of 0.75° and an average latitude of b = 0.06°, coincident with the results from ATLASGAL. The GLAT distribution as a function of GLON shows modulations, both for the diffuse emission and for the compact sources, with ~0.2° displacements mostly toward negative latitudes at l ~ +40°, +12°, -25° and -40°. No such modulations are found in the MIPSGAL 24 um or WISE 22 um data when the entire source samples are considered. The distortions revealed by Herschel are interpreted as large-scale bending modes of the Plane. The lack of similar distortions in tracers of more evolved YSOs or stars rules out gravitational instabilities or satellite-induced perturbations, as these should act on both the diffuse and stellar disk components. We propose that the observed bends are caused by incoming flows of extra-planar gas interacting with the gaseous disk. Stars decouple from the gaseous ISM and relax into the stellar disk potential. The time required for the distortions to disappear between the diffuse-ISM and the relatively evolved YSO stages is compatible with star-formation timescales.
Large-scale filaments associated with Milky Way spiral arms (1504.00647)
Ke Wang, Leonardo Testi (ESO, Excellence Cluster Universe, INAF), Adam Ginsburg, C. Malcolm Walmsley (INAF, Dublin Institute of Advanced Studies), Sergio Molinari, Eugenio Schisano
July 8, 2015 astro-ph.GA
The ubiquity of filamentary structure at various scales throughout the Galaxy has triggered a renewed interest in their formation, evolution, and role in star formation. The largest filaments can reach up to Galactic scale as part of the spiral arm structure. However, such large-scale filaments are hard to identify systematically due to limitations of the identification methodology (i.e., as extinction features). We present a new approach to directly search for the largest, coldest, and densest filaments in the Galaxy, making use of sensitive Herschel Hi-GAL data complemented by spectral line cubes. We present a sample of the 9 most prominent Herschel filaments, including 6 identified from a pilot search field plus 3 from outside the field. These filaments measure 37-99 pc long and 0.6-3.0 pc wide with masses (0.5-8.3)$\times10^4 \, M_\odot$, and beam-averaged ($28"$, or 0.4-0.7 pc) peak H$_2$ column densities of (1.7-9.3)$\times 10^{22} \, \rm{cm^{-2}}$. The bulk of the filaments are relatively cold (17-21 K), while some local clumps have a dust temperature up to 25-47 K. All the filaments are located within <~60 pc of the Galactic mid-plane. Comparing the filaments to a recent spiral arm model incorporating the latest parallax measurements, we find that 7/9 of them reside within arms, but most are close to arm edges. These filaments are comparable in length to the Galactic scale height and therefore are not simply part of a grander turbulent cascade.
The impact of the SKA on Galactic Radioastronomy: continuum observations (1412.5833)
Grazia Umana, Corrado Trigilio, Luciano Cerrigone, Riccardo Cesaroni, Albert A. Zijlstra, Melvin Hoare, Kerstin Weis, Anthony J. Beasley, Dominik Bomans, Greg Hallinan, Sergio Molinari, Russ Taylor, Leonardo Testi, Mark Thompson
Dec. 18, 2014 astro-ph.SR
The SKA will be a state-of-the-art radio telescope optimized both for large-area surveys and for deep pointed observations. In this paper we analyze the impact that the SKA will have on Galactic studies, starting from the immense legacy value of the all-sky survey proposed by the continuum SWG, and also presenting some areas of Galactic science that would particularly benefit from SKA observations, both surveyed and pointed. The planned all-sky survey will be characterized by unique spatial resolution, sensitivity and survey speed, providing us with a wide-field atlas of the Galactic continuum emission. Synergies with existing, current and planned radio Galactic Plane surveys are discussed. The SKA will give the opportunity to create a sensitive catalog of discrete Galactic radio sources, most of them representing the interaction of stars at various stages of their evolution with the environment: a complete census of all stages of HII region evolution; a complete census of late stages of stellar evolution such as PNe and SNRs; detection of stellar winds, thermal jets, Symbiotic systems, Chemically Peculiar and dMe stars, and active binary systems in both flaring and quiescent states. Coherent emission events like Cyclotron Maser in the magnetospheres of different classes of stars can be detected. Pointed, deep observations will allow new insights into the physics of the coronae and plasma processes in active stellar systems and single stars, enabling the detection of flaring activity in larger stellar populations for a better comprehension of the mechanisms of energy release in the atmospheres of stars with different masses and ages.
GAMMA-LIGHT: High-Energy Astrophysics above 10 MeV (1406.1071)
Aldo Morselli, Andrea Argan, Guido Barbiellini, Walter Bonvicini, Andrea Bulgarelli, Martina Cardillo, Andrew Chen, Paolo Coppi, Anna Maria Di Giorgio, Immacolata Donnarumma, Ettore Del Monte, Valentina Fioretti, Marcello Galli, Manuela Giusti, Attilio Ferrari, Fabio Fuschino, Paolo Giommi, Andrea Giuliani, Claudio Labanti, Paolo Lipari, Francesco Longo, Martino Marisaldi, Sergio Molinari, Carlos Muñoz, Torsten Neubert, Piotr Orleanski, Josep M. Paredes, M. Ángeles Pérez-García, Giovanni Piano, Piergiorgio Picozza, Maura Pilia, Carlotta Pittori, Gianluca Pucella, Sabina Sabatini, Edoardo Striani, Marco Tavani, Alessio Trois, Andrea Vacchi, Stefano Vercellone, Francesco Verrecchia, Valerio Vittorini, Andrzej Zdziarski
June 4, 2014 astro-ph.IM, astro-ph.HE
High-energy phenomena in the cosmos, and in particular processes leading to the emission of gamma-rays in the energy range 10 MeV - 100 GeV, play a very special role in the understanding of our Universe. This energy range is indeed associated with non-thermal phenomena and challenging particle acceleration processes. The technology involved in detecting gamma-rays is challenging and drives our ability to develop improved instruments for a large variety of applications. GAMMA-LIGHT is a Small Mission which aims at an unprecedented advance of our knowledge in many sectors of astrophysical and Earth-studies research. The Mission will open a new observational window in the low-energy gamma-ray range 10-50 MeV, and is configured to make substantial advances compared with the previous and current gamma-ray experiments (AGILE and Fermi). The improvement is based on an exquisite angular resolution achieved by GAMMA-LIGHT using state-of-the-art Silicon technology with innovative data acquisition. GAMMA-LIGHT will address all astrophysics issues left open by the current generation of instruments. In particular, the breakthrough angular resolution in the energy range 100 MeV - 1 GeV is crucial to resolve patchy and complex features of diffuse sources in the Galaxy as well as to increase the point source sensitivity. This proposal addresses scientific topics of great interest to the community, with particular emphasis on multifrequency correlation studies involving radio, optical, IR, X-ray, soft gamma-ray and TeV emission. At the end of this decade several new observatories will be operational, including LOFAR, SKA, ALMA, HAWC, CTA. GAMMA-LIGHT will "fill the vacuum" in the 10 MeV - 10 GeV band, and will provide invaluable data for the understanding of cosmic and terrestrial high-energy sources.
The Milky Way as a Star Formation Engine (1402.6196)
Sergio Molinari, Alberto Noriega-Crespo, Enrique Vázquez-Semadeni, Peter Martin (affiliations include University of Colorado, Boulder, USA; Liverpool John Moores U., UK; ESO-HW, Garching, Germany; AMU-LAM, Marseille, France)
May 16, 2014 astro-ph.GA
The cycling of material from the interstellar medium (ISM) into stars and the return of stellar ejecta into the ISM is the engine that drives the "galactic ecology" in normal spirals, a cornerstone in the formation and evolution of galaxies through cosmic time. Major observational and theoretical challenges need to be addressed in determining the processes responsible for converting the low-density ISM into dense molecular clouds, forming dense filaments and clumps, fragmenting them into stars, OB associations and bound clusters, and characterizing the feedback that limits the rate and efficiency of star formation. This formidable task can now be effectively attacked thanks to the combination of new global-scale surveys of the Milky Way Galactic Plane from infrared to radio wavelengths, offering the possibility of bridging the gap between local and extragalactic star formation studies. The Herschel, Spitzer and WISE mid-to-far-infrared continuum surveys, complemented by analogous surveys from ground-based facilities at millimetre and radio wavelengths, enable us to measure the Galactic distribution and physical properties of dust on all scales and in all components of the ISM, from diffuse clouds to filamentary complexes and tens of thousands of dense clumps. A complementary suite of spectroscopic surveys in various atomic and molecular tracers is providing the chemical fingerprinting of dense clumps and filaments, as well as essential kinematic information to derive distances and thus transform panoramic data into a 3D representation. The latest results emerging from these Galaxy-scale surveys are reviewed. New insights into cloud formation and evolution, filaments and their relationship to channeling gas onto gravitationally-bound clumps, the properties of these clumps, density thresholds for gravitational collapse, and star and cluster formation rates are discussed.
The pros and cons of the inversion method approach to derive 3D dust emission properties of the ISM: the Hi-GAL field centred on (l,b)=(30$^{\circ}$,0$^{\circ}$) (1403.3327)
Alessio Traficante, Roberta Paladini, Mathieu Compiegne, Marta I.R. Alves, Laurent Cambresy, Steven J. Gibson, Christopher T. Tibbs, Alberto Noriega-Crespo, Sergio Molinari, Sean J. Carey, Jim G. Ingalls, Paolo Natoli, Rod D. Davies, Richard J. Davis, Clive Dickinson, Gary A. Fuller
March 13, 2014 astro-ph.GA
Herschel FIR continuum data obtained as part of the Hi-GAL survey have been used, together with the GLIMPSE 8 $\mu$m and MIPSGAL 24 $\mu$m data, to attempt the first 3D-decomposition of dust emission associated with atomic, molecular and ionized gas at 15 arcmin angular resolution. Our initial test case is a 2$\times$2 square degrees region centred on (l,b)=(30$^{\circ}$,0$^{\circ}$), a direction that encompasses the origin point of the Scutum-Crux Arm at the tip of the Galactic Bar. Coupling the IR maps with velocity maps specific for different gas phases (HI 21cm, $^{12}$CO and $^{13}$CO, and RRLs), we estimate the properties of dust blended with each of the gas components and at different Galactocentric distances along the LOS. A statistical Pearson's coefficients analysis is used to study the correlation between the column densities and the intensity of the IR emission. This analysis provides evidence that the 2$\times$2 square degree field under consideration is characterized by the presence of a gas component not accounted for by the standard tracers, possibly associated with warm H$_{2}$ and cold HI. We demonstrate that the IR radiation in the range 8 $\mu$m $<$ $\lambda$ $<$ 500 $\mu$m is systematically dominated by emission originating within the Scutum-Crux Arm. By applying an inversion method, we recover the dust emissivities associated with atomic, molecular and ionized gas. Using the DustEM model we obtain an indication for PAHs depletion in the diffuse ionized gas. However, the main goal of this work is to discuss the impact of the missing column density associated with the dark gas component on the accurate evaluation of the dust properties, and to shed light on the limitations of the inversion method approach when this is applied to a small section of the Galactic Plane and when the working resolution allows sufficient de-blending of the gas components along the LOS.
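At its core, the inversion method solves a linear system per wavelength: the observed IR map is modeled as a linear combination of the gas column-density templates, with the coefficients being the dust emissivities. A minimal unregularized least-squares sketch (array names ours):

```python
import numpy as np

def invert_emissivities(I_pix, N):
    """Least-squares core of the inversion method: model the flattened
    IR map I_pix (n_pixels,) as N @ eps, where the columns of N are the
    column-density templates of each gas phase / Galactocentric bin.
    Returns the per-component dust emissivities eps."""
    eps, *_ = np.linalg.lstsq(N, I_pix, rcond=None)
    return eps
```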
G0.253+0.016: a molecular cloud progenitor of an Arches-like cluster (1111.3199)
Steven N. Longmore, Jill Rathborne, Nate Bastian, Joao Alves, Joana Ascenso, John Bally, Leonardo Testi, Andy Longmore, Cara Battersby, Eli Bressert, Cormac Purcell, Andrew Walsh, James Jackson, Jonathan Foster, Sergio Molinari, Stefan Meingast, A. Amorim, J. Lima, R. Marques, A. Moitinho, J. Pinhao, J. Rebordao, F. D. Santos
Nov. 14, 2011 astro-ph.CO, astro-ph.GA
Young massive clusters (YMCs) with stellar masses of 10^4 - 10^5 Msun and core stellar densities of 10^4 - 10^5 stars per cubic pc are thought to be the `missing link' between open clusters and extreme extragalactic super star clusters and globular clusters. As such, studying the initial conditions of YMCs offers an opportunity to test cluster formation models across the full cluster mass range. G0.253+0.016 is an excellent candidate YMC progenitor. We make use of existing multi-wavelength data, including recently available far-IR continuum (Herschel/Hi-GAL) and mm spectral line (HOPS and MALT90) data, and present new, deep, multiple-filter, near-IR (VLT/NACO) observations to study G0.253+0.016. These data show G0.253+0.016 is a high-mass (1.3x10^5 Msun), low-temperature (T_dust ~ 20 K), high volume- and column-density (n ~ 8x10^4 cm^-3; N_{H_2} ~ 4x10^23 cm^-2) molecular clump which is close to virial equilibrium (M_dust ~ M_virial), so it is likely to be gravitationally bound. It is almost devoid of star formation and, thus, has exactly the properties expected for the initial conditions of a clump that may form an Arches-like massive cluster. We compare the properties of G0.253+0.016 to typical Galactic cluster-forming molecular clumps and find it is extreme, and possibly unique in the Galaxy. This uniqueness makes detailed studies of G0.253+0.016 extremely important for testing massive cluster formation models.
Multi-line spectral imaging of dense cores in the Lupus molecular cloud (1108.4809)
Milena Benedettini, Stefano Pezzuto, Micheal G. Burton, Serena Viti, Sergio Molinari, Paola Caselli, Leonardo Testi
Aug. 24, 2011 astro-ph.GA
The molecular clouds Lupus 1, 3 and 4 were mapped with the Mopra telescope at 3 and 12 mm. Emission lines from high-density molecular tracers were detected, i.e. NH$_3$ (1,1), NH$_3$ (2,2), N$_2$H$^+$ (1-0), HC$_3$N (3-2), HC$_3$N (10-9), CS (2-1), CH$_3$OH (2$_0-1_0$)A$^+$ and CH$_3$OH (2$_{-1}-1_{-1}$)E. Velocity gradients of more than 1 km s$^{-1}$ are present in Lupus 1 and 3, and multiple gas components are present in these clouds along some lines of sight. Lupus 1 is the cloud richest in high-density cores: 8 cores were detected in it, 5 in Lupus 3 and only 2 in Lupus 4. The intensity of the three species HC$_3$N, NH$_3$ and N$_2$H$^+$ changes significantly among the various cores: cores that are brighter in HC$_3$N are fainter or undetected in NH$_3$ and N$_2$H$^+$, and vice versa. We found that the column density ratios HC$_3$N/N$_2$H$^+$ and HC$_3$N/NH$_3$ change by one order of magnitude between the cores, indicating that the chemical abundances of these species also differ. The time-dependent chemical code that we used to model our cores shows that the HC$_3$N/N$_2$H$^+$ and HC$_3$N/NH$_3$ ratios decrease with time, therefore the observed column densities of these species can be used as an indicator of the chemical evolution of dense cores. On this basis we classified 5 out of 8 cores in Lupus 1 and 1 out of 5 cores in Lupus 3 as very young protostars or prestellar cores. Comparing the millimetre core population with the population of the more evolved young stellar objects identified in the Spitzer surveys, we conclude that in Lupus 3 the bulk of the star formation activity has already passed and only a moderate number of stars are still forming. On the contrary, in Lupus 1 star formation is ongoing and several dense cores are still in the pre-/proto-stellar phase. Lupus 4 is at an intermediate stage, with a smaller number of individual objects.
Filaments and ridges in Vela C revealed by Herschel: from low-mass to high-mass star-forming sites (1108.0941)
Tracey Hill, Frederique Motte, Pierre Didelon, Sylvain Bontemps, Vincent Minier, Martin Hennemann, Nicola Schneider, Philippe Andre, Alexander Men'shchikov, Loren D. Anderson, Doris Arzoumanian, Jean-Philippe Bernard, James di Francesco, Davide Elia, Teresa Giannini, Matt J. Griffin, Jason Kirk, Vera Konyves, Anthony P. Marston, Peter Martin, Sergio Molinari, Quang Nguyen Luong, Nicolas Peretto, Stefano Pezzuto, Helene Roussel, Marc Sauvage, Thierry Sousbie, Leonardo Testi, Derek Ward-Thompson, Glenn J. White, Christine D. Wilson, Annie Zavagno
Aug. 3, 2011 astro-ph.GA
We present the first Herschel PACS and SPIRE results of the Vela C molecular complex in the far-infrared and submillimetre regimes at 70, 160, 250, 350, and 500 um, spanning the peak of emission of cold prestellar or protostellar cores. Column density and multi-resolution analysis (MRA) differentiates the Vela C complex into five distinct sub-regions. Each sub-region displays differences in their column density and temperature probability distribution functions (PDFs), in particular, the PDFs of the `Centre-Ridge' and `South-Nest' sub-regions appear in stark contrast to each other. The Centre-Ridge displays a bimodal temperature PDF representative of hot gas surrounding the HII region RCW 36 and the cold neighbouring filaments, whilst the South-Nest is dominated by cold filamentary structure. The column density PDF of the Centre-Ridge is flatter than the South-Nest, with a high column density tail, consistent with formation through large-scale flows, and regulation by self-gravity. At small to intermediate scales MRA indicates the Centre-Ridge to be twice as concentrated as the South-Nest, whilst on larger scales, a greater portion of the gas in the South-Nest is dominated by turbulence than in the Centre-Ridge. In Vela C, high-mass stars appear to be preferentially forming in ridges, i.e., dominant high column density filaments.
Source extraction and photometry for the far-infrared and sub-millimeter continuum in the presence of complex backgrounds (1011.3946)
Sergio Molinari, Eugenio Schisano, Fabiana Faustini, Michele Pestalozzi, Anna Maria DiGiorgio, Scige John Liu
(Abridged) We present a new method for detecting and measuring compact sources in conditions of intense, and highly variable, fore/background. While most commonly used packages carry out the source detection over the signal image, our proposed method builds from the measured image a "curvature" image by double-differentiation in four different directions. In this way point-like as well as resolved, yet relatively compact, objects are easily revealed, while the slower-varying fore/background is greatly diminished. Candidate sources are then identified by looking for pixels where the curvature exceeds, in absolute terms, a given threshold; the methodology easily allows us to pinpoint breakpoints in the source brightness profile and then derive reliable guesses for the sources' extent. Identified peaks are fit with 2D elliptical Gaussians plus an underlying inclined planar plateau, with mild constraints on size and orientation. Mutually contaminating sources are fit with multiple Gaussians simultaneously using flexible constraints. We ran our method on simulated large-scale fields with 1000 sources of different peak flux overlaid on a realistic realization of diffuse background. We find detection rates in excess of 90% for sources with peak fluxes above the 3-sigma signal noise limit; for about 80% of the sources the recovered peak fluxes are within 30% of their input values.
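The double-differentiation idea is simple to prototype: take second differences of the map along rows, columns and the two diagonals, and flag pixels where the negative curvature exceeds a threshold. The sketch below illustrates the principle; it is not the paper's implementation:

```python
import numpy as np

def curvature_image(img):
    """Most negative second difference over four directions (rows,
    columns, both diagonals)."""
    d2 = [np.gradient(np.gradient(img, axis=a), axis=a) for a in (0, 1)]
    pad = np.pad(img, 1, mode="edge")
    d2.append(pad[2:, 2:] - 2.0 * img + pad[:-2, :-2])   # main diagonal
    d2.append(pad[2:, :-2] - 2.0 * img + pad[:-2, 2:])   # anti-diagonal
    return np.min(np.stack(d2), axis=0)

# candidate sources: pixels whose curvature is strongly negative,
# e.g.  mask = curvature_image(image) < -threshold
```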
Properties of Stellar Clusters around High-Mass Young Stars (0904.3342)
Fabiana Faustini, Sergio Molinari, Leonardo Testi, Jan Brand
[Abridged] Twenty-six high-luminosity IRAS sources, believed to be collections of stars in the early phases of high-mass star formation, have been observed in the NIR (J, H, K) to characterize the clustering properties of their young stellar population and gain insight into the initial conditions of star formation in these clusters (Initial Mass Function [IMF], Star Formation History [SFH]), and to deduce mean values for cluster ages. K luminosity functions (KLFs) are compared with simulated ones from a model that generates populations of synthetic clusters starting from assumptions on the IMF, the SFH, and the Pre-MS evolution, and using the average properties of the observed clusters as boundary conditions. Twenty-two sources show evidence of clustering, from a few up to several tens of objects, with a median cluster radius of 0.7 pc. A considerable number of cluster members present an infrared excess characteristic of young Pre-Main-Sequence objects. We find that the median stellar age ranges between 2.5 10^5 and 5 10^6 years, with evidence of an age spread of comparable magnitude within each cluster. We also find evidence that older clusters tend to be smaller in size, in line with the fact that our clusters are on average larger than those around relatively older Herbig Ae/Be stars. The relationship of the mass of the most massive star in the cluster with both the clusters' richness and their total stellar mass suggests that our modeled clusters may not be consistent with random sampling of the IMF. Our results are consistent with star formation taking place continuously over a period of time longer than a typical crossing time.
Search for massive protostar candidates in the southern hemisphere: II. Dust continuum emission (astro-ph/0510422)
Maria T. Beltran, Jan Brand, Riccardo Cesaroni, Francesco Fontani, Stefano Pezzuto, Leonardo Testi, Sergio Molinari
Oct. 14, 2005 astro-ph
In an ongoing effort to identify and study high-mass protostellar candidates, we have observed in various tracers a sample of 235 sources selected from the IRAS Point Source Catalog, mostly with dec < -30 deg, with the SEST antenna at millimeter wavelengths. The sample contains 142 Low and 93 High sources, which are believed to be in different evolutionary stages. Both sub-samples have been studied in detail by comparing their physical properties and morphologies. Massive dust clumps have been detected in all but 8 regions, usually with more than one clump per region. The dust emission shows a variety of complex morphologies, sometimes with multiple clumps forming filaments or clusters. The mean clump has a linear size of ~0.5 pc, a mass of ~320 Msolar for a dust temperature Td = 30 K, an H_2 density of 9.5E5 cm-3, and a surface density of 0.4 g cm-2. The median values are 0.4 pc, 102 Msolar, 4E4 cm-3, and 0.14 g cm-2, respectively. The mean value of the luminosity-to-mass ratio, L/M ~ 99 Lsolar/Msolar, suggests that the sources are in a young, pre-ultracompact HII phase. We have compared the millimeter continuum maps with images of the mid-IR MSX emission, and have discovered 95 massive millimeter clumps, either diffuse or point-like, that are not MSX emitters and are potential prestellar or precluster cores. The physical properties of these clumps are similar to those of the others, apart from the mass, which is ~3 times lower than for clumps with an MSX counterpart. Such a difference could be due to the potential prestellar clumps having a lower dust temperature. The mass spectrum of the clumps with masses above M ~ 100 Msolar is best fitted with a power law dN/dM proportional to M^-alpha with alpha = 2.1, consistent with the Salpeter (1955) stellar IMF, for which alpha = 2.35.
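A slope like the quoted alpha = 2.1 can be recovered from a clump-mass sample without binning using the standard continuous maximum-likelihood estimator (Clauset, Shalizi & Newman 2009); a minimal version:

```python
import numpy as np

def powerlaw_slope(masses, m_min=100.0):
    """Maximum-likelihood estimate of alpha in dN/dM ~ M^-alpha for
    masses above m_min, with its approximate standard error."""
    m = np.asarray(masses, dtype=float)
    m = m[m >= m_min]
    alpha = 1.0 + m.size / np.sum(np.log(m / m_min))
    return alpha, (alpha - 1.0) / np.sqrt(m.size)
```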
A Shock-Induced PDR in the HH 80/81 Flow. Far Infrared Spectroscopy (astro-ph/0008391)
Sergio Molinari, Alberto Noriega-Crespo, Luigi Spinoglio
Aug. 24, 2000 astro-ph
The two spectrometers on board the Infrared Space Observatory were used to observe the Herbig-Haro objects HH 80, 81 and 80N, as well as their candidate exciting source IRAS 18162-2048. The fine structure lines of [OI]63um, [OI]145um and [CII]158um are detected everywhere, while [NII]122um and [OIII]88um are only detected toward the HH objects; line ratios confirm for the first time the collisionally excited HH nature of HH 80N. No molecular line is detected in any of the observed positions. We use a full shock code to diagnose shock velocities v_s ~ 100 km/s toward the HH objects, as expected from the optical spectroscopy. Since proper motions suggest velocities in excess of 600 km/s, the HH objects probably represent the interface between two flow components with velocities differing by ~v_s. Aside from the flow exciting source, the [CII]158um line is everywhere brighter than the [OI]63um line, indicating the presence of a Photo-Dissociation Region (PDR) all along the flow. Continuum emission from the HH objects and from other positions along the flow is only detected longward of ~50 micron, and its proportionality to the [CII]158um line flux suggests it is PDR in origin. We propose that the FUV continuum radiated by the HH objects and the jet is responsible for the generation of a PDR at the walls of the flow cavity. We develop a very simple model which strengthens the plausibility of this hypothesis.
A search for precursors of Ultracompact HII Regions in a sample of luminous IRAS sources. III: Circumstellar Dust Properties (astro-ph/0001231)
Sergio Molinari, Jan Brand, Riccardo Cesaroni, Francesco Palla
Jan. 13, 2000 astro-ph
The James Clerk Maxwell Telescope has been used to obtain submillimeter and millimeter continuum photometry of a sample of 30 IRAS sources previously studied in molecular lines and centimeter radio continuum. All the sources have IRAS colours typical of very young stellar objects (YSOs) and are associated with dense gas. In spite of their high luminosities (L > 10000 solar units), only ten of these sources are also associated with a radio counterpart. In 17 cases we could identify a clear peak of millimeter emission associated with the IRAS source, while in 9 sources the millimeter emission was extended or faint and a clear peak could not be identified; upper limits were found in 4 cases only. Using a simple greybody fitting model for the observed SED, we derive global properties of the circumstellar dust. The dust temperature varies from 24 K to 45 K, while the exponent of the dust emissivity vs. frequency power law spans the range 1.56 < beta < 2.38, characteristic of silicate dust; total circumstellar masses range up to more than 500 solar masses. We find that for sources with comparable luminosities, the total column densities derived from the dust masses do not distinguish between sources with and without a radio counterpart. We interpret this result as an indication that dust does not play a dominant role in inhibiting the formation of the HII region. We examine several scenarios for their origin in terms of newborn ZAMS stars and, although most of these fail to explain the observations, we cannot exclude that these sources are young stars already on the ZAMS with modest residual accretion that quenches the expansion of the HII region. Finally, we consider the possibility that the IRAS sources are high-mass pre-ZAMS (or pre-H-burning) objects deriving most of the emitted luminosity from accretion.
Polynomial detection of matrix subalgebras
by Daniel Birmajer
The double Capelli polynomial of total degree $2t$ is \begin{equation*} \sum \left\{ (\operatorname{sgn}\sigma\tau)\, x_{\sigma(1)}y_{\tau(1)}x_{\sigma(2)}y_{\tau(2)}\cdots x_{\sigma(t)}y_{\tau(t)} \;\middle|\; \sigma, \tau \in S_t \right\}. \end{equation*} It was proved by Giambruno-Sehgal and Chang that the double Capelli polynomial of total degree $4n$ is a polynomial identity for $M_n(F)$. (Here, $F$ is a field and $M_n(F)$ is the algebra of $n \times n$ matrices over $F$.) Using a strengthened version of this result obtained by Domokos, we show that the double Capelli polynomial of total degree $4n-2$ is a polynomial identity for any proper $F$-subalgebra of $M_n(F)$. Subsequently, we present a similar result for nonsplit inequivalent extensions of full matrix algebras.
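The Giambruno-Sehgal/Chang identity is easy to spot-check numerically (this sketch is ours, not part of the paper): for $n = 2$ and $t = 2n = 4$, i.e. total degree $4n = 8$, the sum of all $4! \times 4!$ signed products over random $2 \times 2$ matrices should be the zero matrix up to rounding.

```python
import itertools
import numpy as np

def perm_sign(p):
    """Sign of a permutation, computed via the inversion count."""
    return (-1) ** sum(p[i] > p[j]
                       for i in range(len(p)) for j in range(i + 1, len(p)))

def double_capelli(xs, ys):
    """Evaluate the double Capelli polynomial on matrix lists xs, ys."""
    t, n = len(xs), xs[0].shape[0]
    total = np.zeros((n, n))
    for sigma in itertools.permutations(range(t)):
        for tau in itertools.permutations(range(t)):
            term = np.eye(n)
            for k in range(t):
                term = term @ xs[sigma[k]] @ ys[tau[k]]
            total += perm_sign(sigma) * perm_sign(tau) * term
    return total

rng = np.random.default_rng(0)
n, t = 2, 4                       # total degree 2t = 4n = 8
xs = [rng.standard_normal((n, n)) for _ in range(t)]
ys = [rng.standard_normal((n, n)) for _ in range(t)]
print(np.allclose(double_capelli(xs, ys), 0.0))   # True, up to rounding
```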
A. S. Amitsur and J. Levitzki, Minimal identities for algebras, Proc. Amer. Math. Soc. 1 (1950), 449–463. MR 36751, DOI 10.1090/S0002-9939-1950-0036751-9
Daniel Birmajer, On subalgebras of $n\times n$ matrices not satisfying identities of degree $2n-2$. Submitted to Linear Algebra and its Applications (2003).
Qing Chang, Some consequences of the standard polynomial, Proc. Amer. Math. Soc. 104 (1988), no. 3, 707–710. MR 964846, DOI 10.1090/S0002-9939-1988-0964846-8
M. Domokos, Eulerian polynomial identities and algebras satisfying a standard identity, J. Algebra 169 (1994), no. 3, 913–928. MR 1302125, DOI 10.1006/jabr.1994.1317
M. Domokos, A generalization of a theorem of Chang, Comm. Algebra 23 (1995), no. 12, 4333–4342. MR 1352536, DOI 10.1080/00927879508825467
Edward Formanek, Central polynomials for matrix rings, J. Algebra 23 (1972), 129–132. MR 302689, DOI 10.1016/0021-8693(72)90050-6
A. Giambruno and S. K. Sehgal, On a polynomial identity for $n\times n$ matrices, J. Algebra 126 (1989), no. 2, 451–453. MR 1024999, DOI 10.1016/0021-8693(89)90312-8
Edward Letzter, Effective detection of nonsplit module extensions. E-print ArXiv http://arxiv.org/math.RA/0206141 (2002).
Ju. P. Razmyslov, The Jacobson Radical in PI-algebras, Algebra i Logika 13 (1974), 337–360, 365 (Russian). MR 0419515
Shmuel Rosset, A new proof of the Amitsur-Levitski identity, Israel J. Math. 23 (1976), no. 2, 187–188. MR 401804, DOI 10.1007/BF02756797
Louis Halle Rowen, Polynomial identities in ring theory, Pure and Applied Mathematics, vol. 84, Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers], New York-London, 1980. MR 576061
Daniel Birmajer
Affiliation: Department of Mathematics and Computer Science, Nazareth College, 4245 East Avenue, Rochester, New York 14618
Email: [email protected]
Received by editor(s) in revised form: December 22, 2003
Published electronically: October 18, 2004
Communicated by: Martin Lorenz
MSC (2000): Primary 15A24, 15A99, 16R99
A Picture for the Mind: the Bloch Ball
Sunday, 14 August 2022 Blake Stacey
Now and then, stories will pop up in the news about the latest hot new thing in quantum computers. If the story makes any attempt to explain why quantum computing is special or interesting, it often recycles a remark along the lines of, "A quantum bit can be both 0 and 1 simultaneously." This, well, ehhhhh… It's rather like saying that Boston is at both the North Pole and the South Pole simultaneously. Something important has been lost. I figured I should take a stab at explaining what. Our goal today is to develop a mental picture for a qubit, the basic unit that quantum computers are typically regarded as built out of. To be more precise, we will develop a mental picture for the mathematics of a qubit, not for how to implement one in the lab. There are many ways to do so, and getting into the details of any one method would, for our purposes today, be a distraction. Instead, we will be brave and face the issue on a more abstract level.
A qubit is a thing that one prepares and that one measures. The mathematics of quantum theory tells us how to represent these actions algebraically. That is, it describes the set of all possible preparations, the set of all possible measurements, and how to compute the probability of getting a particular result from a chosen measurement given a particular preparation. To do something interesting, one would typically work with multiple qubits together, but we will start with a single one. And we will begin with the simplest kind of measurement, the binary ones. A binary test has two possible outcomes, which we can represent as 0 or 1, "plus" or "minus", "ping" and "pong", et cetera. In the lab, this could be sending an ion through a magnetic field and registering whether it swerved up or down; or, it could be sending a blip of light through a polarizing filter turned at a certain angle and registering whether there is or is not a flash. Or any of many other possibilities! The important thing is that there are two outcomes that we can clearly distinguish from each other.
For any physical implementation of a qubit, there are three binary measurements of special interest, which we can call the $X$ test, the $Y$ test and the $Z$ test. Let us denote the possible outcomes of each test by $+1$ and $-1$, which turns out to be a convenient choice. The expected value of the $X$ test is the average of these two possibilities, weighted by the probability of each. If we write $P(+1|X)$ for the probability of getting the $+1$ outcome given that we do the $X$ test, and likewise for $P(-1|X)$, then this expected value is $$ x = P(+1|X) \cdot (+1) + P(-1|X) \cdot (-1). $$ Because this is a weighted average of $+1$ and $-1$, it will always be somewhere in that interval. If for example we are completely confident that an $X$ test will return the outcome $+1$, then $x = 1$. If instead we lay even odds on the two possible outcomes, then $x = 0$. Likewise, $$ y = P(+1|Y) \cdot (+1) + P(-1|Y) \cdot (-1), $$ and $$ z = P(+1|Z) \cdot (+1) + P(-1|Z) \cdot (-1). $$
To specify the preparation of a single qubit, all we have to do is pick a value for $x$, a value for $y$ and a value for $z$. But not all combinations $(x,y,z)$ are physically allowed. The valid preparations are those for which the point $(x,y,z)$ lies on or inside the ball of radius 1 centered at the origin: $$ x^2 + y^2 + z^2 \leq 1. $$ We call this the Bloch ball, after the physicist Felix Bloch (1905–1983). The surface of the Bloch ball, at the distance exactly 1 from the origin, is the Bloch sphere. The points where the axes intersect the Bloch sphere — $(1,0,0)$, $(-1,0,0)$, $(0,1,0)$ and so forth — are the preparations where we are perfectly confident in the outcome of one of our three tests. Points in the interior of the ball, not on the surface, imply uncertainty about the outcomes of all three tests. But look what happens: If I am perfectly confident of what will happen should I choose to do an $X$ test, then my expected values $y$ and $z$ must both be zero, meaning that I am completely uncertain about what might happen should I choose to do either a $Y$ test or a $Z$ test. There is an inevitable tradeoff between levels of uncertainty, baked into the shape of the theory itself. One might even call that a matter… of principle.
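In code, the physicality condition is just a norm check; here is a tiny helper (with a tolerance for floating-point edge cases):

```python
def is_valid_preparation(x, y, z, tol=1e-12):
    """A triple of expected values is a physical qubit preparation
    exactly when it lies in the closed unit (Bloch) ball."""
    return x * x + y * y + z * z <= 1.0 + tol
```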
We are now well-poised to improve upon the language in the news stories. The point that specifies the preparation of a qubit can be at the North Pole $(0,0,1)$, the South Pole $(0,0,-1)$, or anywhere in the ball between them. We have a whole continuum of ways to be intermediate between completely confident that the $Z$ test will yield $+1$ (all the way north) and completely confident that it will yield $-1$ (all the way south).
Now, there are other things one can do to a qubit. For starters, there are other binary measurements beyond just the $X$, $Y$ and $Z$ tests. Any pair of points exactly opposite each other on the Bloch sphere define a test, with each point standing for an outcome. The closer the preparation point is to an outcome point, the more probable that outcome. To be more specific, let's write the preparation point as $(x,y,z)$ and the outcome point as $(x',y',z')$. Then the probability of getting that outcome given that preparation is $$ P = \frac{1}{2}(1 + x x' + y y' + z z'). $$
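That rule is one line of code. As a sanity check: a preparation at the north pole measured along the $Z$ axis yields the $+1$ outcome with certainty, while the center of the ball is maximally noncommittal about every test.

```python
def outcome_probability(prep, outcome):
    """Probability of `outcome` (a point on the Bloch sphere) given a
    preparation point `prep`: P = (1 + x x' + y y' + z z') / 2."""
    x, y, z = prep
    xp, yp, zp = outcome
    return 0.5 * (1 + x * xp + y * yp + z * zp)

assert outcome_probability((0, 0, 1), (0, 0, 1)) == 1.0   # certain
assert outcome_probability((0, 0, 0), (0, 0, 1)) == 0.5   # maximally unsure
```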
An interesting conceptual thing has happened here. We have encoded the preparation of a qubit by a set of expected values, i.e., a set of probabilities. Consequently, all those late-night jazz-cigarette arguments over what probability means will spill over into the arguments about what quantum mechanics means. Moreover, and not unrelatedly, we can ask, "Why three probabilities? Why is it the Bloch sphere, instead of the Bloch disc or the Bloch hypersphere?" It would be perfectly legitimate, mathematically, to require probabilities for only two tests in order to specify a preparation point, or to require more than three. That would not be quantum mechanics; the fact that three coordinates are needed to nail down the preparation of the simplest possible system is a structural fact of quantum theory. But is there a deeper truth from which that can be deduced?
One could go in multiple directions from here: What about tests with more than two outcomes? Systems composed of more than one qubit? Very quickly, the structures involved become more difficult to visualize, and familiarity with linear algebra — eigenvectors, eigenvalues and their friends — becomes a prerequisite. People have also tried a variety of approaches to understand what quantum theory might be derivable from. Any of those topics could justify something in between a blog post and a lifetime of study.
SUGGESTED READINGS:
E. Rieffel and W. Polak, Quantum Computing: A Gentle Introduction (MIT Press, 2011), chapter 2
J. Rau, Quantum Theory: An Information Processing Approach (Oxford University Press, 2021), section 3.3
M. Weiss, "Python tools for the budding quantum bettabilitarian" (2022) | CommonCrawl |
July 2016, 12(3): 931-947. doi: 10.3934/jimo.2016.12.931
The risk-averse newsvendor game with competition on demand
Yuwei Shen 1, Jinxing Xie 1, and Tingting Li 2
Department of Mathematical Sciences, Tsinghua University, Beijing, 100084, China
School of Management Science and Engineering, Dongbei University of Finance and Economics, Dalian, 116025, China
Received: July 2014. Revised: April 2015. Published: September 2015.
This paper studies the effect of risk-aversion in the competitive newsvendor game. Multiple newsvendors with risk-averse preferences face a random demand and the demand is allocated proportionally to their inventory levels. Each newsvendor aims to maximize his expected utility instead of his expected profit. Assuming a general form of risk-averse utility function, we prove that there exists a pure Nash equilibrium in this game, and it is also unique under certain conditions. We find that the order quantity of each newsvendor is decreasing in the degree of risk-aversion and increasing in the initial wealth. Newsvendors with moderate preferences of risk-aversion make more profits compared with the risk-neutral situation. We also discuss the joint effect of risk-aversion and competition. If the effect of risk-aversion is strong enough to dominate the effect of competition, the total inventory level under competition will be lower than that under centralized decision-making.
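As a toy illustration of the expected-utility objective (a single risk-averse newsvendor with CARA utility facing uniform demand — not the paper's multi-player game with proportional demand allocation), the sketch below shows numerically that the optimal order falls below the risk-neutral benchmark; all parameter values are ours:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def optimal_order(p=10.0, c=6.0, a=0.05, w0=0.0, d_max=100.0, n=200_000):
    """Order quantity maximizing expected CARA utility
    u(w) = 1 - exp(-a w) against Uniform(0, d_max) demand."""
    rng = np.random.default_rng(1)
    d = rng.uniform(0.0, d_max, n)                    # demand scenarios
    def neg_eu(q):
        profit = p * np.minimum(q, d) - c * q
        return -np.mean(1.0 - np.exp(-a * (w0 + profit)))
    return minimize_scalar(neg_eu, bounds=(0.0, d_max), method="bounded").x

# risk-neutral benchmark: q* = d_max * (p - c) / p = 40; with a > 0 the
# optimizer returns less.  (CARA makes q independent of w0; the paper's
# wealth effect corresponds to decreasing absolute risk aversion.)
print(optimal_order())
```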
Keywords: competition, inventory, newsvendor model, risk-averse, Nash equilibrium.
Mathematics Subject Classification: Primary: 90B05; Secondary: 91A8.
Citation: Yuwei Shen, Jinxing Xie, Tingting Li. The risk-averse newsvendor game with competition on demand. Journal of Industrial & Management Optimization, 2016, 12 (3) : 931-947. doi: 10.3934/jimo.2016.12.931
K. J. Arrow, The theory of risk aversion, in Essays in the Theory of Risk-Bearing (ed. K. J. Arrow), Markham Publishing Company, 1971, 90-120.
V. Agrawal and S. Seshadri, Impact of uncertainty and risk aversion on price and order quantity in the newsvendor problem, Manufacturing & Service Operations Management, 2 (2000), 410-423. doi: 10.1287/msom.2.4.410.12339.
A. O. Brown and C. S. Tang, The impact of alternative performance measures on single-period inventory policy, Journal of Industrial and Management Optimization, 2 (2006), 297-318. doi: 10.3934/jimo.2006.2.297.
P. L. Brockett and L. L. Golden, A class of utility functions containing all the common utility functions, Management Science, 33 (1987), 955-964. doi: 10.1287/mnsc.33.8.955.
G. P. Cachon, Supply chain coordination with contracts, in Handbooks in Operations Research and Management Science, Vol. 11 (eds. S. C. Graves and A. G. de Kok), Elsevier, 2003, 227-339. doi: 10.1016/S0927-0507(03)11006-7.
X. Chen, M. Sim, D. Simchi-Levi and P. Sun, Risk aversion in inventory management, Operations Research, 55 (2007), 828-842. doi: 10.1287/opre.1070.0429.
L. Eeckhoudt, C. Gollier and H. Schlesinger, The risk-averse (and prudent) newsboy, Management Science, 41 (1995), 786-794. doi: 10.1287/mnsc.41.5.786.
D. Fudenberg and J. Tirole, Game Theory, MIT Press, London, 1991.
I. Friend and M. E. Blume, The demand for risky assets, The American Economic Review, 65 (1975), 900-922.
A. Gasparro and J. Beckerman, Whole foods again lowers sales projections; specialty supermarket reports 6, Wall Street Journal (Online), 30 Jul 2014. Available from: http://search.proquest.com/docview/1549531450?accountid=14426.
K. Girotra and S. Netessine, How to build risk into your business model, Harvard Business Review, 89 (2011), 100-105.
C. A. Holt and S. K. Laury, Risk aversion and incentive effects, American Economic Review, 92 (2002), 1644-1655. doi: 10.1257/000282802762024700.
J. R. Hagerty, 3M begins untangling its 'hairballs' - making plastic hooks is harder than it seems; streamlining a four-state, 1,300-mile supply chain, Wall Street Journal, 17 May 2012. Available from: http://search.proquest.com/docview/1013910634?accountid=14426.
K. B. Hamal and J. R. Anderson, A note on decreasing absolute risk aversion among farmers in Nepal, Australian Journal of Agricultural Economics, 26 (1982), 220-225. doi: 10.1111/j.1467-8489.1982.tb00414.x.
B. Keren and J. S. Pliskin, A benchmark solution for the risk-averse newsvendor problem, European Journal of Operational Research, 174 (2006), 1643-1650. doi: 10.1016/j.ejor.2005.03.047.
M. Khouja, The single-period (news-vendor) problem: literature review and suggestions for future research, Omega-The International Journal of Management Science, 27 (1999), 537-553. doi: 10.1016/S0305-0483(99)00017-1.
S. A. Lippman and K. F. McCardle, The competitive newsboy, Operations Research, 45 (1997), 54-65. doi: 10.1287/opre.45.1.54.
W. Liu, S. J. Song and C. Wu, Impact of loss aversion on the newsvendor game with product substitution, International Journal of Production Economics, 141 (2013), 352-359. doi: 10.1016/j.ijpe.2012.08.017.
J. W. Pratt, Risk aversion in the small and in the large, Econometrica, 32 (1964), 122-136.
M. Parlar, Game theoretic analysis of the substitutable product inventory problem with random demand, Naval Research Logistics, 35 (1988), 397-409. doi: 10.1002/1520-6750(198806)35:3<397::AID-NAV3220350308>3.0.CO;2-Z.
Y. Qin, R. X. Wang, A. J. Vakhria, Y. W. Chen and M. M. H. Seref, The newsvendor problem: review and directions for future research, European Journal of Operational Research, 213 (2011), 361-374. doi: 10.1016/j.ejor.2010.11.024.
A. Saha, C. R. Shumway and H. Talpaz, Joint estimation of risk preference structure and technology using expo-power utility, American Journal of Agricultural Economics, 76 (1994), 173-184. doi: 10.2307/1243619.
M. E. Schweitzer and G. P. Cachon, Decision bias in the newsvendor problem with a known demand distribution: Experimental evidence, Management Science, 46 (2000), 404-420. doi: 10.1287/mnsc.46.3.404.12070.
F. W. Siegel and J. P. Hoban, Relative risk aversion revisited, The Review of Economics and Statistics, 64 (1982), 481-487. doi: 10.2307/1925947.
T. L. Urban, Inventory models with inventory-level-dependent demand: A comprehensive review and unifying theory, European Journal of Operational Research, 162 (2005), 792-804. doi: 10.1016/j.ejor.2003.08.065.
H. B. Wolfe, A model for control of style merchandise, Industrial Management Review, 9 (1968), 69-82.
C. X. Wang, The loss-averse newsvendor game, International Journal of Production Economics, 124 (2010), 448-452. doi: 10.1016/j.ijpe.2009.12.007.
C. X. Wang, S. Webster and N. C. Suresh, Would a risk-averse newsvendor order less at a higher selling price?, European Journal of Operational Research, 196 (2009), 544-553. doi: 10.1016/j.ejor.2008.04.002.
M. Wik, T. A. Kebede, O. Bergland and S. T. Holden, On the measurement of risk aversion from experimental data, Applied Economics, 36 (2004), 2443-2451. doi: 10.1080/0003684042000280580.
M. Wu, S. X. Zhu and R. H. Teunter, A risk-averse competitive newsvendor problem under the CVaR criterion, International Journal of Production Economics, 156 (2014), 13-23. doi: 10.1016/j.ijpe.2014.05.009.
Y. Z. Wang and Y. Gerchak, Supply chain coordination when demand is shelf-space dependent, Manufacturing & Service Operations Management, 3 (2001), 82-87. doi: 10.1287/msom.3.1.82.9998.
G. Xie, W. Y. Yue and S. Y. Wang, Optimal selection of cleaner products in a green supply chain with risk aversion, Journal of Industrial and Management Optimization, 11 (2015), 515-528. doi: 10.3934/jimo.2015.11.515.
T. J. Xiao and D. Q. Yang, Price and service competition of supply chains with risk-averse retailers under demand uncertainty, International Journal of Production Economics, 114 (2008), 187-200. doi: 10.1016/j.ijpe.2008.01.006.
Bin Zhou, Hailin Sun. Two-stage stochastic variational inequalities for Cournot-Nash equilibrium with risk-averse players under uncertainty. Numerical Algebra, Control & Optimization, 2020, 10 (4) : 521-535. doi: 10.3934/naco.2020049
Kegui Chen, Xinyu Wang, Min Huang, Wai-Ki Ching. Compensation plan, pricing and production decisions with inventory-dependent salvage value, and asymmetric risk-averse sales agent. Journal of Industrial & Management Optimization, 2018, 14 (4) : 1397-1422. doi: 10.3934/jimo.2018013
Bin Chen, Wenying Xie, Fuyou Huang, Juan He. Quality competition and coordination in a VMI supply chain with two risk-averse manufacturers. Journal of Industrial & Management Optimization, 2021, 17 (5) : 2903-2924. doi: 10.3934/jimo.2020100
Xiulan Wang, Yanfei Lan, Wansheng Tang. An uncertain wage contract model for risk-averse worker under bilateral moral hazard. Journal of Industrial & Management Optimization, 2017, 13 (4) : 1815-1840. doi: 10.3934/jimo.2017020
Min Li, Jiahua Zhang, Yifan Xu, Wei Wang. Effects of disruption risk on a supply chain with a risk-averse retailer. Journal of Industrial & Management Optimization, 2021 doi: 10.3934/jimo.2021024
Zhiyuan Zhen, Honglin Yang, Wenyan Zhuo. Financing and ordering decisions in a capital-constrained and risk-averse supply chain for the monopolist and non-monopolist supplier. Journal of Industrial & Management Optimization, 2021 doi: 10.3934/jimo.2021104
Jie Jiang, Zhiping Chen, He Hu. Stability of a class of risk-averse multistage stochastic programs and their distributionally robust counterparts. Journal of Industrial & Management Optimization, 2021, 17 (5) : 2415-2440. doi: 10.3934/jimo.2020075
Xiaolin Xu, Xiaoqiang Cai. Price and delivery-time competition of perishable products: Existence and uniqueness of Nash equilibrium. Journal of Industrial & Management Optimization, 2008, 4 (4) : 843-859. doi: 10.3934/jimo.2008.4.843
Yanju Zhou, Zhen Shen, Renren Ying, Xuanhua Xu. A loss-averse two-product ordering model with information updating in two-echelon inventory system. Journal of Industrial & Management Optimization, 2018, 14 (2) : 687-705. doi: 10.3934/jimo.2017069
Wei Liu, Shiji Song, Ying Qiao, Han Zhao. The loss-averse newsvendor problem with random supply capacity. Journal of Industrial & Management Optimization, 2017, 13 (3) : 1417-1429. doi: 10.3934/jimo.2016080
Ganfu Wang, Xingzheng Ai, Chen Zheng, Li Zhong. Strategic inventory under suppliers competition. Journal of Industrial & Management Optimization, 2020, 16 (5) : 2159-2173. doi: 10.3934/jimo.2019048
Prasenjit Pramanik, Sarama Malik Das, Manas Kumar Maiti. Note on : Supply chain inventory model for deteriorating items with maximum lifetime and partial trade credit to credit risk customers. Journal of Industrial & Management Optimization, 2019, 15 (3) : 1289-1315. doi: 10.3934/jimo.2018096
Nana Wan, Li Li, Xiaozhi Wu, Jianchang Fan. Risk minimization inventory model with a profit target and option contracts under spot price uncertainty. Journal of Industrial & Management Optimization, 2021 doi: 10.3934/jimo.2021093
Wei Liu, Shiji Song, Ying Qiao, Han Zhao, Huachang Wang. The loss-averse newsvendor problem with quantity-oriented reference point under CVaR criterion. Journal of Industrial & Management Optimization, 2021 doi: 10.3934/jimo.2021085
Junichi Minagawa. On the uniqueness of Nash equilibrium in strategic-form games. Journal of Dynamics & Games, 2020, 7 (2) : 97-104. doi: 10.3934/jdg.2020006
Jian Hou, Liwei Zhang. A barrier function method for generalized Nash equilibrium problems. Journal of Industrial & Management Optimization, 2014, 10 (4) : 1091-1108. doi: 10.3934/jimo.2014.10.1091
Yanhong Yuan, Hongwei Zhang, Liwei Zhang. A penalty method for generalized Nash equilibrium problems. Journal of Industrial & Management Optimization, 2012, 8 (1) : 51-65. doi: 10.3934/jimo.2012.8.51
Alain Bensoussan, Metin Çakanyildirim, Suresh P. Sethi. Censored newsvendor model revisited with unnormalized probabilities. Journal of Industrial & Management Optimization, 2009, 5 (2) : 391-402. doi: 10.3934/jimo.2009.5.391
Yeong-Cheng Liou, Siegfried Schaible, Jen-Chih Yao. Supply chain inventory management via a Stackelberg equilibrium. Journal of Industrial & Management Optimization, 2006, 2 (1) : 81-94. doi: 10.3934/jimo.2006.2.81
Elvio Accinelli, Bruno Bazzano, Franco Robledo, Pablo Romero. Nash Equilibrium in evolutionary competitive models of firms and workers under external regulation. Journal of Dynamics & Games, 2015, 2 (1) : 1-32. doi: 10.3934/jdg.2015.2.1
Yuwei Shen Jinxing Xie Tingting Li | CommonCrawl |
SN Applied Sciences
January 2020 , 2:107 | Cite as
Investigation of lithological control of iron enrichment in groundwater using geophysical techniques in Yenagoa, Southern Nigeria
K. S. Okiongbo
K. K. Oboshenure
A. R. C. Amakiri
First Online: 18 December 2019
Part of the following topical collections:
2. Earth and Environmental Sciences (general)
Hydrochemical analysis of water samples from Yenagoa in the Niger Delta shows widespread occurrence of iron (Fe) in the groundwater. The Fe concentration is more than 0.3 mg/L at many places, and the distribution is heterogeneous both vertically and horizontally. In order to identify the cause of the high heterogeneity, we carried out an integrated study consisting of hydrogeochemical, electrical resistivity sounding and induced polarization (IP) chargeability measurements at eleven sites and 2-D electrical resistivity profiling (at 2 sites). Data processing using inversion techniques resulted in 4-layered resistivity and chargeability—depth models. The results show that clean sand and gravel exhibit high resistivity but low chargeability and normalized chargeability values, whereas clay and sandy clay exhibit relatively low resistivity but high chargeability and normalized chargeability values. In sites where the aquifer is overlain by a thick clay layer, Fe concentration is high (Fe > 0.3 mg/L) in the groundwater and redox potential values range between 118 and 133 mV. We interpret that the low-permeability clay layer creates a relatively atmosphere-isolated state in the underlying aquifer, which is responsible for the reductive ambient subsurface groundwater environment. In sites where the aquifer is capped by silt, Fe concentration is low (< 0.3 mg/L) in the groundwater and redox potential values range between 115 and 164 mv indicating a mild oxidation environment. We interpret that the clay acts as a controlling factor to the Fe enrichment in the groundwater regime. Knowledge of the clay layer, which is identified in the present study, will be helpful in selecting suitable sites for boreholes.
Groundwater Electrical resistivity Induced polarization Normalized chargeability Yenagoa
In the last decade, several investigations were carried out in parts of the Niger Delta, Southern Nigeria, to determine the regulating processes and spatial distribution of Fe in the shallow alluvial aquifer [1, 2]. The results of these studies show that the concentration of Fe in the groundwater abstracted from boreholes is high. Aside, the distribution of Fe in the groundwater was reported to be extremely heterogeneous, both vertically and laterally, within a scale of tens of metres. The issue of high iron concentration (> 0.3 mg/L) in groundwater is a common problem, since over 90% of residents in most cities in the Niger Delta depend on water abstracted from shallow private boreholes. Presently, we remain remarkably ignorant of the cause of the high Fe heterogeneity in the groundwater.
Considering the above, it was pertinent to investigate the probable reasons responsible for the high spatial variation of Fe in the groundwater and provide a sustainable solution for mitigation. Most studies in this direction only concentrated on the geochemical aspects of Fe contamination and suggested various purification techniques for iron removal [3]. Designing various purification methods for iron removal is only temporary solution and suffers from many obvious problems such as waste disposal and maintenance and hence is not a sustainable solution. The strategic importance of groundwater in Yenagoa and the threats posed by excessive Fe concentration emphasize the significance of low-Fe groundwater sources and prompted the assessment of the hydrostratigraphy of the sedimentary sequences in the study area. In assessing the hydrostratigraphy of the sedimentary sequence, it was considered that investigating the alluvial aquifer over short distances ranging between tens to hundreds of metres and in different locations within the study area would perhaps help explain whether the variation in the groundwater Fe concentration was due to local variations in aquifer stratigraphy. Although borehole drilling could be one of the best ways to determine the lithological variation, this approach was considered tedious and would require drilling several boreholes of different depths. This will be time-consuming, laborious and cost-intensive. The use of non-invasive surface geophysical techniques is of great relevance in that reasonably factual subsurface information is obtained without any destruction to the environment within a relatively short time. Electrical resistivity and time domain induced polarization have shown a good complementarity in this regard. The geoelectrical method provides a wide range of variations in the subsurface electrical resistivity. The variations are often associated with water content and lithology; hence, it is one of the most powerful geophysical methods often used in providing solutions to hydrogeological problems [4, 5, 6]. Recently, induced polarization method which is based on the chargeability effect of the subsurface has proved to be of significant value in the investigation of lithological variability of unconsolidated sediments especially in the mapping of clay content. In this study, we explore the lithological control on Fe contamination using surface geophysical methods in Yenagoa and environs.
2 Description of the study area
2.1 Location, physiography and climate
Yenagoa is located within the Southern Nigeria sedimentary basin. It is the capital of Bayelsa State. The study area covers an area of about 50 km2 of Yenagoa, and its metropolis. Yenagoa is bounded by longitudes 006° 10′ 3.07″ and 00 6° 25′ 10.53″ East of the prime meridian and latitudes 04° 51′ 39.73″ and 05°.2′ 25.53″ North of the equator. Geographically, Yenagoa is within the coastal area of the Recent Niger Delta (Fig. 1) where the ground surface is relatively flat, sloping very gently seawards [7]. Its mean elevation is about 8 m above the mean sea level [8]. The study area has a tropical rain forest climate characterized by rainy season and dry season. The rainy season commences from April to October with a brief dry period in August. The dry season lasts between November and March. The mean annual rainfall is about 4500 mm [9] and about 85% of the mean annual rain falls during the wet season. The temperature varies between 25 and 32 °C. Fishing and farming are the main occupation of the people.
Map of the Niger Delta showing the study area
2.2 Geology and hydrogeology of the study area
The study area lies within the fresh water swamps, backswamps, deltaic plain, alluvium and meander belt geomorphic unit of the Niger Delta [9]). The Niger Delta is basically an alluvial plain and consists of the modern and Holocene delta top deposits. Grain-size profiles of the Holocene alluvial deposits consist of a fining-up sequence of sand capped by fine silts and clay indicating a fluvial environment of deposition [10]. The fine-grained silts and clay overlying the basal sandy sequence are often called the near surface aquitard. The near surface aquitard thickness varies between < 5 to about 12 m, and due to the varying clay, silt and fine sand content, [10] reported that the aquitard permeability is highly heterogeneous. The near surface aquitard becomes a confining unit if it is thick and impermeable, which prevents percolation of precipitation into the alluvial aquifer. Akpokodje [9] reported that groundwater flows from North to South in the region.
Three main subsurface lithostratigraphic units are reported in the Niger Delta [11]. From top to bottom, they are Benin, Agbada and Akata Formations. The Benin Formation which is fluvial in origin is the main aquifer. Groundwater occurs mainly under unconfined conditions in the Benin Formation. Abam [12] observed that the sediments of the Benin Formation were deposited during the Late Tertiary–Early Quaternary period and are about 2100 m thick. The sediments are lenticular and unconsolidated and consist of coarse- to medium-fine-grained sands with localized intercalations of clay/shale. Gravel and pebbles are minor components. Mbonu et al. [13] reported that the sands are moderately sorted and poorly cemented. The presence of thin clay beds creates discontinuities in the vertical and lateral continuity of the aquifer, resulting in the presence of local perched aquifers [10]. The aquifer is directly recharged through the infiltration of rain water. In the Niger Delta, the water table in many areas is close to the surface though subject to seasonal variations. The water table is about 3–4 m in the dry season [14], but rises considerably in the rainy season. Groundwater is the main source of drinking water for over 80% of the population in the study area.
3 Induced polarization (IP) method
In the electrical method of geophysical prospecting, current is injected using two current electrodes A and B. The passage of the electric current through the ground creates a potential difference (∆V) usually measured across a pair of potential electrodes C and D. If the inducing current is turned off, the difference of potential (∆V) does not immediately drop to zero, but decays slowly over a period of time. The recording of the decaying voltage gives a decay curve ∆Vip(t). In time domain surveys, the decay curve ∆Vip(t) is the object of study because it is characteristic of the medium in terms of initial magnitude, slope and relaxation time. The amplitude of ∆Vip is related to the polarizability of the earth materials [15]. This capacity to polarize is referred to as the IP response. The form of the primary wave and the IP decay is shown in Fig. 2. Induced polarization is due to two main sources: (1) membrane polarization and (2) electrode polarization. The presence of clay causes membrane polarization. The clay particles, which are negatively charged, attract positive ions from the electrolytes in the capillaries of the clay particles and thus behave as ion-selective membrane impeding their mobility through the capillaries.
Time domain IP discharge curve
Electrode polarization produces similar effect but occurs when metallic minerals are present. The flow of electrons through a metal is much faster than the flow of ions in the electrolyte, and hence, opposite charges accumulate on the surface of metallic grains that block the path of ionic flow through the pore fluid. In Fig. 2, the chargeability M is computed by integrating the signal Vip along the decay over n time windows, or gates. The chargeability of IP effect was measured by integrating the area under the IP decay curve according to the relation [16, 17] given below:
$$M = \frac{1}{{V_{o} }}\int\limits_{t1}^{{t_{2} }} {V(t)} {\text{d}}t$$
where Vo is the voltage measured before the current is turned off, t1 and t2 are the start and stop time intervals, respectively, and V(t) is the decaying voltage. The chargeability (M) is usually expressed in millisecond (msec) or milliVolt/Volt (mV/V).
4 Data acquisition and processing
Prior to the acquisition of geophysical data and drilling of boreholes, locations with contrasting dissolved Fe concentration (low and high) in the groundwater within Yenagoa and environs were selected for study. The selection of these locations was based on analysis of ten groundwater samples collected from the existing domestic boreholes spread over the study area. The depths of these boreholes range from 9 to 30 m. Geochemical analysis of the groundwater samples shows that groundwater from Tombia, Akenfa III, etc., (Fig. 3) exhibits Fe concentration within WHO acceptable limits (~ 0.02–0.3 mg/L), while groundwater samples from Amabolou, Azikoro, etc. (Fig. 3), have Fe concentrations greater than 0.3 mg/L. This analysis helped in planning the field layout of the geophysical profiles.
Map of study area showing VES-IP sounding, borehole locations and 2D traverses
4.1 Geoelectrical sounding and induced polarization
We acquired geoelectrical and IP data in eleven locations, i.e. five in low-Fe areas and six in high-Fe areas (Fig. 3). The electrical resistivity–IP soundings were carried out using the Schlumberger configuration. In general, soundings were carried out using the Abem Terrameter SAS 1000. Maximum current electrode separation AB/2 ranged between 100 and 150 m. In the Schlumberger configuration, current was injected into the ground through two outer electrodes A and B and the resulting voltage difference at two potential electrodes (C and D) was measured. An increase in the depth of current penetration is achieved by progressively increasing the electrode spacing. Field precautions observed to ensure good vertical electrical sounding (VES) data quality included firm grounding of the electrodes and checking for current leakage and creeps to avoid spurious measurements. During the survey, the resistance and chargeability were measured concurrently. These data were interpreted using IX1D (Interpex) software. The field resistivity data were converted to apparent resistivity (ρa) values and plotted against half-current spacing (AB/2) on log–log scale. Guided by the general trend of the field curves, partial curve smoothening of the field curves was made. The 1D inversion software (Interpex) takes advantage of least-squares optimization technique. The program iteratively compares the field data to a theoretical model curve. The starting model is modified or adjusted successively until the difference between the observation and the model output is reduced to a minimum. In constructing a model, we have used the principle that all maxima, minima and point of inflexion in a geoelectrical sounding curve indicate the existence of boundaries of different lithologies. Using this approach, the subsurface was divided into a number of horizontal layers of given thickness. The program iteratively changes the resistivities to obtain a best fit with the field data for the layer thicknesses chosen for the model. Due to the inherent problem of equivalence in geosounding data interpretation [18], lithological information from drilled boreholes was used to constrain all depth estimates in order to minimize the choice of equivalent models by fixing layer thicknesses and depths while allowing the resistivities to vary [19]. The resulting true resistivities represent the best average resistivity for the given layer and are shown in Tables 1 and 3, respectively.
Summary of VES-IP model results and their corresponding thicknesses at low-Fe areas
VES-IP No
Layer 1 (top soil)
Layer 2 (silty sand)
Layer 3 (sand)
Layer 4 (sandy clay)
(Ωm)
(ms)
(mS/m)
error (%)
VES-IP 1
ρ is bulk resistivity, η is chargeability, MN is normalized chargeability, and h is thickness
Because the chargeability represents a measure of polarization magnitude relative to conduction magnitude [17, 18] and thus is approximately linearly related to the bulk resistivity, we also calculated the normalized chargeability (MN), using the following expression:
$$MN = \frac{M}{\rho }\left( {mS/m} \right)$$
to separate the effects of conduction and polarization.Where M is the chargeability and ρ is the layer bulk resistivity.
4.2 Electrical resistivity imaging
We also acquired one each 2D electrical resistivity imaging profile in the low-Fe area and as well as in the high-Fe area using the Wenner array (Fig. 3). The 2D resistivity profile was acquired to supplement the vertical electrical sounding (VES) and IP sounding data. This is because the 2D resistivity imaging gives a clearer picture of the lateral and vertical variation of the subsurface geological sequences. The 2D resistivity imaging data were acquired manually using the Wenner configuration. Each 2D profile was 100 m in length. The electrode separation ranged between 5 and 30 m in an interval of 5 m, with a total of 21 electrode positions for each profile. Field measurements were taken using electrode spacing of 5.0 m at electrode positions 1, 2, 3 and 4 in each profile. Then, each electrode was moved a distance of 5.0 m (one unit electrode spacing), the active electrode positions being 2, 3, 4 and 5. This procedure was continued to the end of the profile with electrode positions for the last measurement being 18, 19, 20 and 21. The electrode spacing was then increased by 5.0 m, as mentioned above for measurements of next data level, such that the active positions were 1, 3, 5 and 7. The procedure was then repeated by moving each of the electrodes a distance 5.0 m (one unit electrode spacing) and maintaining the electrode spacing for the data level until the electrodes were at electrode positions 15, 17, 19 and 20. This procedure was continued until 6 data levels were observed, yielding a total of 63 data points in each of the profiles. RES2DINV computer code [19] was used in the inversion of the 2D data. The computer program takes advantage of the nonlinear optimization technique in which a 2D resistivity model of the subsurface is automatically determined for input apparent resistivity data [19, 20]. In this program, the subsurface is subdivided into a number of rectangular blocks based on the spread of the observed data. The 2D data were inverted using the least-squares inversion with standard least-squares constraint which minimizes the square difference between the observed and the calculated apparent resistivity values. The program displays the distribution of electrical properties in the form of 2D pseudo-section plot. 2D pseudo-section plot gives a simultaneous display of both horizontal and vertical variation of the subsurface resistivity and are useful for initial quality assessment [21]. In constructing a 2D pseudo-section plot, each measured value is put at the intersection of two 45o lines through centres of the quadripole. Each horizontal line is then associated with a specific value of n (inter electrode spacing) and gives a pseudo-depth of investigation. It is pertinent to note that the larger the n-values, the greater the depths of investigation [21].
4.3 Hydrogeochemical analysis
Eleven boreholes were drilled in the study area using rotary drilling method. Five boreholes were drilled in the low-Fe areas, while six were drilled in high-Fe areas. Each borehole was located close to a VES point. The locations of the boreholes in the low-Fe areas are Tombia, Kpansia (behind the market along the expressway), Igbogene, Akenfa III and Akenfa III (NNPC Road), while the high-Fe areas include Amabolou I, II and III, Biogbolo, Agbura and Azikoro (Fig. 3). The boreholes were developed, and groundwater samples were collected in clean 500-ml polyethylene bottles. Prior to sample collection, the boreholes were pumped continuously for about 10–20 minutes. In these samples, in situ measurements of temperature, redox potential (Eh) and pH were taken using precalibrated portable pH/ORP meter at the time of groundwater sampling. Major ions such as Na, K, Ca, Mg, Fe, HCO3, Cl and SO4 were determined including total dissolved solids (TDS) in the laboratory using standard procedures [22]. Major ions like sulphate (SO4−2) were determined by spectrophotometric turbidimetry. Using EDTA, calcium (Ca2+) and magnesium (Mg2+) were determined titrimetrically; chloride (Cl−) was determined by standard AgNO3 titration and bicarbonate (HCO3−) was determined using titration with HCl. Sodium (Na+) and potassium (K+) were measured using flame photometry; nitrate (NO3−) was determined by colourimetry with a UV–visible spectrophotometer (brucine method) while iron was measured using colourimeter with a UV–visible spectrophotometer at 520 nm. Subsequently, the groundwater composition was correlated with the colour characteristics of the sediments. Chemical composition of the groundwater at the low- and high-Fe areas as well as the redox potential values is shown in Tables 2, 4 and 5, respectively.
Chemical composition of the groundwater at low-Fe areas
Borehole No
Depth (m)
Na+
Ca2+
Mg2+
Cl−
HCO3
All parameters have been expressed as mg/L except pH, EC and Eh. The unit of EC is µS/cm, and that of Eh is mV
The drilled boreholes were lithologically logged and sampled at 3.0 m or more often when characteristics of the sediment changed based on their grain size and colour. Each sample was assigned to one of the three colours—grey, off-white and brown by virtual inspection of the sediments. The Fe concentration of about 3 g of the wet sediments was measured by AAS and after extraction with hydroxylamine hydrochloride (NH2OH.HCl) in 25% acetic acid, and filtering using a 0.45-µm cellulose acetic filter. The boreholes were screened at depth intervals of either oxidized brownish sand aquifers, off-white or greyish reduced sediments. The depth of these boreholes varied between 8 and 30 m. Fe concentration in the aquifer sediments is shown in Table 5.
In this study, resistivity soundings and profiling were carried out in locations with contrasting dissolved iron concentrations, low and high in the groundwater (Fig. 3). IP soundings were also carried out simultaneously to support the resistivity interpretation for investigating the lithological control of Fe enrichment in the groundwater.
Figure 4 shows resistivity and IP sounding curve types in locations where the dissolved Fe concentration is low, while Fig. 5 shows resistivity/IP curve, and a comparison between the I-D resistivity/IP model of VES-IP 1 (Table 1) with the lithological information obtained from the nearest borehole (B-6) in Tombia (Fig. 3). The I-D inversion results for the Schlumberger resistivity/IP soundings and results of the hydrochemical analysis from groundwater samples from boreholes near the resistivity/IP sounding points in the low-Fe areas are also presented in Tables 1 and 2, respectively. Correlation of the resistivity results of VES-IP1 with the lithological information from the nearest borehole (B-6) shows that the stratigraphic sequence consists of four layers (within the depth of investigation) in which the model resistivity of the third layer is higher than those of the upper and lower layers (Fig. 5). The resistivity curve obtained in this area is predominantly K-type (Fig. 4), and the stratigraphic sequence consists of top soil, silty sand, sand and a sandy clay (Fig. 5). The resistivity and thickness of the top soil vary between 19 and 43 Ωm and 0.5–1.1 m, but are 19–295 Ωm and 0.8–6.6 m in the silty sand layer underlying the top soil (Table 1). The resistivity and thickness of the sandy layer which serves as the aquifer vary between 219 and 2955 Ωm and 6.1–27 m, respectively, while the resistivity of the sandy clay layer ranges between 50 and 298 Ωm. The 2D resistivity cross section (Fig. 6) correlated well with the borehole information (Fig. 5) and shows the detailed variation of the subsurface lithological sequence. Figure 6 shows a clear trend of high apparent resistivity values at the top (values ranging between 200 and 680 Ωm), while at the bottom, the apparent resistivity values tend to decrease (values are lower than 100 Ωm). Correlation of the 2D resistivity cross section with the lithological information from the borehole located close to the profile line (Fig. 5) indicates that the top soil with an apparent resistivity range of 80–100 Ωm is underlain by a layer with an apparent resistivity range of 150–200 Ωm. The borehole data indicate that this layer is composed of silty sand. The apparent resistivity of the layer below the silty sand layer varies between 300 and 1000 Ωm. This layer serves as the aquifer and consists of sand. Below the aquifer is a layer with apparent resistivity range less than 100 Ωm. The lithological information from the borehole shows that this layer is a sandy clay layer.
Resistivity and IP sounding curves in the low-Fe areas
Resistivity–IP Sounding at Tombia Village (VES-IP 1, low-Fe area); a resistivity–IP model curves compared to field data; b model VES-IP results with borehole log and lithologic interpretation shown for comparison
Inverse model resistivity section at Tombia Village (low-groundwater Fe site). The vertical line indicates the location of borehole
The chargeability (M) and normalized chargeability (MN) of the top soil range between 0.79 and 2.47 ms and 0.014–0.073 mS/m but range between 1.3 and 3.8 ms and 0.009–0.186 mS/m, respectively, in the silty sand layer. In the sandy layer, the chargeability and normalized chargeability values are between 0.71 and 11.25 ms and 0.001–0.031 mS/m but range between 1.72 and 126.3 ms and 0.007–0.424 mS/m in the sandy clay layer. The chargeability of a given medium indicates polarizability of the medium. Thus, chargeability is related to the permittivity and resistivity of the subsurface materials as well as the porosity and moisture content in the subsurface media. The normalized chargeability (i.e. the ratio between chargeability and resistivity) has also been reported to be a good parameter for discriminating lithotypes [17, 18]. These authors suggested that clean sands have low chargeability and low normalized chargeability, while clay and clayey sands have high chargeability and normalized chargeability values.
A careful analysis of the model results (Figs. 5 and 8) shows a strong correlation between the resistivity and chargeability anomalies with high resistivity values corresponding to relatively low chargeability and normalized chargeability values (Table 1). Deposits of clean sand and gravel are easily distinguished by their high resistivity values from their surrounding clay and silt [23, 24, 25]. Small measurable IP effects are associated with clean sand and gravel deposits [26]. The model results show low chargeability and low normalized chargeability values in the second and third layers (i.e. silty sand and sand layers). This indicates that these layers are mainly composed of sandy materials and less disseminated clayey materials. In contrast, high chargeability and normalized chargeability values are observed in the sandy clay layer ranging between 1.72 and 126.3 ms and 0.007–0.424 mS/m relative to layers 2 and 3. This is consistent with the observation of Amaya et al.; Vonhala [26, 27] who suggested that strong IP effects are commonly observed in sediments containing clays disseminated on the surface of larger grains. Hence, the sandy clay layer displays large IP effects as a result of the presence of disseminated clay. The lithology suggested by the 2D resistivity interpretation (Fig. 6) correlated well with the observed subsurface materials obtained from ground-truthing in this location and also clearly delineate the basal sandy clay layer. Layer 3 serves as the aquifer in the area, and residents tap their water from this layer. Table 2 shows that the Fe concentrations obtained from the analysis of groundwater samples abstracted from boreholes in these areas are within WHO acceptable limits. Additionally, the measured redox potential values from the groundwater immediately after abstraction are relatively high, ranging between 115 and 164 mV, indicating a mild oxidation environment [28].
Figure 7 shows resistivity–IP sounding curves in locations that have high Fe concentration in the groundwater. For comparative study, IP sounding curves are presented along with resistivity sounding curves. Figure 8 shows field data, I-D inversion results for Schlumberger resistivity/IP sounding at Amabolou (VES-IP 6), a high-Fe-concentration area. The resulting I-D resistivity/IP model was compared to the borehole lithological information. Interpretation of the resistivity and IP sounding curves shows a large variation in resistivity and chargeability and hence normalized chargeability. Generally, resistivity, chargeability and normalized chargeability values vary between 5 and 2288 Ωm, 0.7–204 ms and 0.016–2.722 mS/m, respectively (Table 3). In Fig. 7, we observed that for values of AB/2 greater than 5 m, the resistivity and polarization curves have opposite slopes. The top soil is relatively dry and resistive and unpolarizable, but the lower layer, being wet, is lower in resistivity and high in polarizability. The low-resistivity anomaly and enhanced IP and normalized chargeability effect is attributed to the cationic exchange capacity due to the increase in clay volume [26].
Resistivity and IP sounding curves in the high-Fe areas
Resistivity–IP sounding at Amabolou Village (high-Fe area); a resistivity–IP model curves compared to field data; b model VES-IP results with borehole log and lithologic interpretation shown for comparison
Summary of VES-IP model results and their corresponding thicknesses at high-Fe areas
Layer 2 (clay)
RMS error(%)
VES-IP 10
Clay and clayey sand display large IP effect due to cationic exchange capacity [26, 27]. Table 3 shows that beside the top layer, the underlying layers, especially layers 2 and 3, have high chargeability and normalized chargeability values reflecting enhanced surface polarization caused by the presence of disseminated clay. Of particular interest with respect to the hydrogeology and the mobilization of Fe in the aquifer is the second layer, characterized by low resistivity (5–96 Ωm) and high chargeability and normalized chargeability values (0.67–13.9 ms and 0.039–2.242 mS/m). The correlation of VES-IP 6 interpretation results with the nearest lithological information (Fig. 8) shows that layer 2 is clay. The significant correlation of low resistivity and high chargeability affirms also that this layer is clay. The 2D resistivity cross section (Fig. 9) shows that underlying the top soil is a layer characterized by low apparent resistivity values in the depth range of about 2.5–7.8 m corroborating the results of the vertical electrical sounding (VES). The apparent resistivity values are generally less than 60 Ωm. The lithological information from the nearest borehole (Fig. 8) shows that this layer is a clay layer. Underlying the clay layer is a layer with slightly higher apparent resistivity values ranging 60–120 Ωm within the depth range of 8–13 m. The borehole data indicate that this layer is composed of fine sand in a matrix of finer sediments (clay) [29]. Below the sandy clay layer is the sandy layer with apparent resistivity range of about 120–220 Ωm. This clay layer (aquitard) acts as a confining layer. Analysis of groundwater samples abstracted from the sandy layer below the clay layer show elevated concentrations of dissolved Fe (Table 4). Redox potential values are relatively low and range between 116 and 133 mV, indicating a mild reducing environment. It is pertinent to mention that in the present study, analysis of the normalized chargeability results (Tables 1 and 3) did not give a clear difference of normalized chargeability values in layers with high clay content and coarse material in some layers contrary to the report of [26]. For instance, in VES-IP 7, 9 and 10 layer 2, results of the normalized chargeability did not show a clear trend of increasing values in this layer with high clay content and lower values when coarse material is the main soil content (VES-IP 9, 10 layer 4) and thus did not contribute significantly to the interpretation of the geological features in these layers. However, normalized chargeability values in these layers were used to complement the results of the chargeability.
Inverse model resistivity section at Azikoro Village (low-groundwater Fe site)
Chemical composition of the groundwater at high-Fe areas
SO42−
NO3−
All parameters have been expressed as mg/L except pH, EC and Eh. The unit of EC is µS/cm, and that of energy potential (Eh) is mV
5.1 Source of Fe and role of near surface clay layer on groundwater Fe distribution
Fe concentrations were extracted from aquifer sediments from both the low and high locations. The results show that Fe concentrations in the high-Fe aquifer sediments range between 0.26 and 0.90 mg/L but are between 0.5 and 0.82 mg/L in the low-Fe aquifer sediments. Table 5 lists the Fe concentrations in several sediment samples from borehole cores and indicates that aquifer sediments containing abundant Fe act as the supply source for the shallow groundwater. There is no significant overall difference in the amount of extractable Fe in the aquifer sediments. This is consistent with the studies of [27] who reported that paralic deposits often have plenty of Fe substances. The similar amount of extractable Fe in the aquifer sediments implies that Fe is present in the aquifer materials in sufficient amounts. Thus, transfer to the dissolved phase can cause a large increase in groundwater.
Fe concentration in the aquifer sediments
potential (mV)
Fe conc. (mg/L)
High-Fe locations
Low-Fe locations
We infer therefore that the spatial distribution of groundwater Fe is as a result of the variation of redox conditions in the host aquifer. This implies that a reductive ambient subsurface environment is favourable to Fe ions transferring from the aquifer matrix into the groundwater. Although the decomposition of organic matter in groundwater and soil can consume dissolved oxygen and thus create a reductive hydrochemistry, in this case this effect is assumed to be significantly small. We opine that a relatively atmosphere-isolated state in the aquifer is responsible for the stronger reducibility of groundwater in the high-Fe locations. The investigations of the sediment stratigraphy and lithology across the study area show a near surface aquitard composed of argillaceous materials (clay) widely occurs in the upper most subsurface sediments in the high-Fe locations. This near surface aquitard heterogeneity and variation in thickness lead to variation in vertical recharge, localized dilution and confinement, resulting in varying redox conditions in the aquifer affecting Fe release.
An integrated hydrogeophysical investigation consisting of electrical resistivity and induced polarization techniques in parts of the Niger Delta delineated a widespread clay layer characterized by low resistivity (5–96 Ωm) and high chargeability and normalized chargeability values (0.67–13.9 ms and 0.134–2.242 mS/m) overlying the aquifer in locations that show elevated dissolved Fe concentrations. The thickness of the clay layer varies, pinching out at some places. In locations where the clay layer pinches out, the Fe concentration is within WHO acceptable limits (< 0.3 mg/L), implying that the lithological set-up plays a significant role in understanding Fe enrichment in groundwater in the Niger Delta. The low-permeability clay layer acts a confining layer and thus helps in creating atmosphere-isolated state in the underlying aquifer which is responsible for the reductive ambient subsurface groundwater environment favourable to Fe ions transferring from the aquifer matrix into the groundwater. The knowledge of the clay layer will be very helpful in selecting suitable sites for the installation of boreholes.
We are grateful to the Post-graduate Geophysics students in the Department of Physics who assisted with the field work and Mr. Udofia for producing the maps.
We do not have conflict of interest in this paper.
Okiongbo KS, Douglas RK (2013) Hydrogeochemical analysis and evaluation of groundwater quality in Yenagoa city and environs, Southern Nigeria. Ife J Sci 15:209–222Google Scholar
Okiongbo KS, Douglas RK (2015) Evaluation of major factors influencing the geochemistry of groundwater using graphical and multivariate statistical methods in Yenagoa City, Southern Nigeria. J Appl Water Sci 5:27–37CrossRefGoogle Scholar
Ohimain EI, Angaye T, Okiongbo KS (2013) Removal of iron, coliforms and acidity from groundwater obtained from shallow aquifer using trickling Filter method. J Environ Sci Eng A 2:549–555Google Scholar
Niwas S, Singhal DC (1981) Estimation of aquifer transmissivity from Dar Zarrouk parameters in porous media. J Hydrol 50:393–399CrossRefGoogle Scholar
Dhakate R, Singh VS, Negi BC, Chandra S, Rao VA (2008) Geomorphological and geophysical approach for locating favourable groundwater zones in granitic terrain, Andhra Pradesh, India. J Environ Manag 88:1373–1383CrossRefGoogle Scholar
Okiongbo KS, Mebine P (2015) Estimation of aquifer hydraulic parameters from geoelectric method—A case study of Yenagoa and environs Southern Nigeria. Arab J Geosci 8:6085–6093CrossRefGoogle Scholar
Akpokodje EU, Etu-Efeotor JO (1987) The occurrence and Economic potential of clean sand deposits of the Niger Delta. J Afr Earth Sci 6:61–65Google Scholar
Akpokodje EU (1987) The engineering-geological characteristics and classification of the major superficial soils of the Niger Delta. Eng Geol 23:193–201CrossRefGoogle Scholar
Akpokodje EG (1986) A method of reducing the cement content of two stabilized Niger Delta soils. Q J Eng. Geol. Lond 19:359–363CrossRefGoogle Scholar
Amajor LC (1991) Aquifers in the Benin Formation (Miocene—Recent), Eastern Niger Delta, Nigeria. Lithostratigraphy, Hydraulics and water quality. Environ Geol Water Sci 17:85–101CrossRefGoogle Scholar
Short KC, Stauble AJ (1967) Outline of the geology of the Niger Delta. Bull AAPG 51:761–779Google Scholar
Abam TKS (1999) Dynamics and quality of water resources in the Niger Delta. Impacts of urban growth on surface water and groundwater quality. In: Proceedings of IUGG 99 symposium, vol 259. IAHS Publication, Birmigham, pp 429–437Google Scholar
Mbonu PDC, Ebeniro JO, Ofoegbu CO, Ekine AS (1991) Geoelectric sounding for the determination ofaquifer characteristics in parts of Umuahia area of Nigeria. Geophysics 56:84–291CrossRefGoogle Scholar
Ekine AS, Osobonye GT (1996) Surface geoelectric sounding for the determination of aquifer Characteristics in part of Bonny Local Government Area of Rivers State. Niger J Phys 85:93–99Google Scholar
Summer JS (1976) Principles of induced polarization for geophysical exploration. Elsevier, AmsterdamGoogle Scholar
Schon JP (1996) Physical properties of rocks: fundamentals and principles of petrophysics. Pergamon, New York, p 583Google Scholar
Slater LD, Lesmes D (2002) IP interpretation in environmental investigations. Geophysics 68:77–88CrossRefGoogle Scholar
Lesmes D, Frye KM (2001) Influence of pore fluid chemistry on the complex conductivity and induced polarization responses of Berea sandstone. J Geophys Res 160:4079–4090CrossRefGoogle Scholar
Loke MH, Baker RD (1996) Rapid least square inversion of apparent resistivity pseudosections by a quasi-Newton method. Geophys Prospect 44:131–152CrossRefGoogle Scholar
Griffiths DH, Barker RD (1993) Two-dimensional resistivity imaging and modelling in areas of complex geology. J Appl Geophys 29:211–226CrossRefGoogle Scholar
Samouelian A, Cousin I, Tabbagh A, Bruand A, Richard G (2005) Electrical resistivity survey in soil science: a review. Soil Tillage Res 83:173–193CrossRefGoogle Scholar
APHA (1998) Standard methods for the examination of water and wastewater, 19th edn. American Public Health Association, WashingtonGoogle Scholar
Baines D, Smith DG, Frose DG, Bauman P, Nimack G (2002) Electrical resistivity ground imaging [ERGI]: a new tool for mapping the lithology and geometry of channel-belts and valley-fills. Sedimentology 49:441–449CrossRefGoogle Scholar
Beresnev IA, Hruby CE, Davis CA (2002) The use of multi-electrode resistivity imaging in gravel prospecting. J Appl Geophys 49:245–254CrossRefGoogle Scholar
Magnusson MK, Fernlund JMR, Dahlin T (2010) Geoelectrical imaging in the interpretation of geological conditions affecting quarry operations. Bull Eng Geol Environ 69:465–486CrossRefGoogle Scholar
Amaya AG, Dahlin T, Barmen G, Rosberg JE (2016) Electrical resistivity tomography and induced polarization for mapping the subsurface of alluvial fans: a case study in Punata (Bolivia). Geosciences 6:1–13CrossRefGoogle Scholar
Vonhala H (1997) Mapping oil contaminated sand and till with spectral induced polarization (IP) method. Geophys Prospect 45:303–326CrossRefGoogle Scholar
Hasan MA, Ahmed KM, Sracek O, Bhattacharya P, von Brömssen M, Broms S, Fogelström J, Mazumder ML, Jacks G (2007) Arsenic in shallow groundwater of Bangladesh: investigations from three different physiographic settings. Hydrogeol J 15:1507–1522CrossRefGoogle Scholar
Okiongbo KS, Soronnadi-Ononiwu GC (2016) Characterizing aggregate deposits using electrical resistivity method: case history of sand search in the Niger Delta, Nigeria. J Earth Sci Geotech Eng 8:1–16Google Scholar
1.Geophysics Research Group, Department of PhysicsNiger Delta UniversityWilberforce IslandNigeria
2.Department of PhysicsRivers State UniversityPort-HarcourtNigeria
Okiongbo, K.S., Oboshenure, K.K. & Amakiri, A.R.C. SN Appl. Sci. (2020) 2: 107. https://doi.org/10.1007/s42452-019-1876-3
Received 29 May 2019
First Online 18 December 2019 | CommonCrawl |
Example Keywords: mobile world -jelly $73 Advanced search
upcScavenger » Physical Quantities » Wiki: Internal Energy
Internal energy
( Physical Quantities )
Cardinal functions
Description and definiti..
Internal energy changes
Internal energy of the i..
Internal energy of a clo..
Changes due to temperatu..
Changes due to volume at..
Internal energy of multi..
Internal energy in an el..
Bibliography of cited re..
Energy Internal Partial System
The internal energy of a thermodynamic system is the energy contained within it, measured as the quantity of energy necessary to bring the system from its Standard state internal state to its present internal state of interest, accounting for the gains and losses of energy due to changes in its internal state, including such quantities as magnetization.Crawford, F. H. (1963), pp. 106–107.Haase, R. (1971), pp. 24–28. It excludes the kinetic energy of motion and the potential energy of position of the system as a whole, with respect to its surroundings and external force fields, but it includes the thermal energy (i.e. internal kinetic energy). The internal energy of an isolated system cannot change, as expressed in the law of conservation of energy, a foundation of the first law of thermodynamics.
The internal energy cannot be measured absolutely. Thermodynamics concerns changes in the internal energy, not its absolute value. The processes that change the internal energy are transfers, into or out of the system, of matter, or of energy, as heat, or by thermodynamic work.Max Born (1949), Appendix 8, pp. 146–149. These processes are measured by changes in the system's properties, such as temperature, entropy, volume, electric polarization, and molar constitution. The internal energy depends only on the internal state of the system and not on the particular choice from many possible processes by which energy may pass into or out of the system. It is a State function, a thermodynamic potential, and an extensive property.
Thermodynamics defines internal energy macroscopically, for the body as a whole. In statistical mechanics, the internal energy of a body can be analyzed microscopically in terms of the kinetic energies of microscopic motion of the system's particles from translations, , and oscillation, and of the potential energies associated with microscopic forces, including chemical bonds.
The unit of energy in the International System of Units (SI) is the joule (J). The internal energy relative to the mass with unit J/kg is the specific internal energy. The corresponding quantity relative to the amount of substance with unit J/mol is the molar internal energy.
International Union of Pure and Applied Chemistry. Physical and Biophysical Chemistry Division (2023). 9781847557889, RSC Pub. . ISBN 9781847557889
The internal energy of a system depends on its entropy S, its volume V and its number of massive particles: . It expresses the thermodynamics of a system in the energy representation. As a State function, its arguments are exclusively extensive variables of state. Alongside the internal energy, the other cardinal function of state of a thermodynamic system is its entropy, as a function, , of the same list of extensive variables of state, except that the entropy, , is replaced in the list by the internal energy, . It expresses the entropy representation.Tschoegl, N.W. (2000), p. 17.Herbert Callen (1960/1985), Chapter 5.Münster, A. (1970), p. 6.
Each cardinal function is a monotonic function of each of its natural or canonical variables. Each provides its characteristic or fundamental equation, for example , that by itself contains all thermodynamic information about the system. The fundamental equations for the two cardinal functions can in principle be interconverted by solving, for example, for , to get .
In contrast, Legendre transforms are necessary to derive fundamental equations for other thermodynamic potentials and . The entropy as a function only of extensive state variables is the one and only cardinal function of state for the generation of Massieu functions. It is not itself customarily designated a 'Massieu function', though rationally it might be thought of as such, corresponding to the term 'thermodynamic potential', which includes the internal energy.Münster, A. (1970), Chapter 3.Bailyn, M. (1994), pp. 206–209.
For real and practical systems, explicit expressions of the fundamental equations are almost always unavailable, but the functional relations exist in principle. Formal, in principle, manipulations of them are valuable for the understanding of thermodynamics.
Description and definition
The internal energy U of a given state of the system is determined relative to that of a standard state of the system, by adding up the macroscopic transfers of energy that accompany a change of state from the reference state to the given state:
\Delta U = \sum_i E_i,
where \Delta U denotes the difference between the internal energy of the given state and that of the reference state, and the E_i are the various energies transferred to the system in the steps from the reference state to the given state. It is the energy needed to create the given state of the system from the reference state. From a non-relativistic microscopic point of view, it may be divided into microscopic potential energy, U_\text{micro,pot}, and microscopic kinetic energy, U_\text{micro,kin}, components:
U = U_\text{micro,pot} + U_\text{micro,kin}.
The microscopic kinetic energy of a system arises as the sum of the motions of all the system's particles with respect to the center-of-mass frame, whether it be the motion of atoms, molecules, atomic nuclei, electrons, or other particles. The microscopic potential energy algebraic summative components are those of the Chemical energy and nuclear particle bonds, and the physical force fields within the system, such as due to internal induced electric or magnetism dipole moment, as well as the energy of deformation of solids (stress-strain). Usually, the split into microscopic kinetic and potential energies is outside the scope of macroscopic thermodynamics.
Internal energy does not include the energy due to motion or location of a system as a whole. That is to say, it excludes any kinetic or potential energy the body may have because of its motion or location in external , electrostatics, or electromagnetics fields. It does, however, include the contribution of such a field to the energy due to the coupling of the internal degrees of freedom of the object with the field. In such a case, the field is included in the thermodynamic description of the object in the form of an additional external parameter.
For practical considerations in thermodynamics or engineering, it is rarely necessary, convenient, nor even possible, to consider all energies belonging to the total intrinsic energy of a sample system, such as the energy given by the equivalence of mass. Typically, descriptions only include components relevant to the system under study. Indeed, in most systems under consideration, especially through thermodynamics, it is impossible to calculate the total internal energy.I. Klotz, R. Rosenberg, Chemical Thermodynamics - Basic Concepts and Methods, 7th ed., Wiley (2008), p.39 Therefore, a convenient null reference point may be chosen for the internal energy.
The internal energy is an extensive property: it depends on the size of the system, or on the amount of substance it contains.
At any temperature greater than absolute zero, microscopic potential energy and kinetic energy are constantly converted into one another, but the sum remains constant in an isolated system (cf. table). In the classical picture of thermodynamics, kinetic energy vanishes at zero temperature and the internal energy is purely potential energy. However, quantum mechanics has demonstrated that even at zero temperature particles maintain a residual energy of motion, the zero point energy. A system at absolute zero is merely in its quantum-mechanical ground state, the lowest energy state available. At absolute zero a system of given composition has attained its minimum attainable entropy.
The microscopic kinetic energy portion of the internal energy gives rise to the temperature of the system. Statistical mechanics relates the pseudo-random kinetic energy of individual particles to the mean kinetic energy of the entire ensemble of particles comprising a system. Furthermore, it relates the mean microscopic kinetic energy to the macroscopically observed empirical property that is expressed as temperature of the system. While temperature is an intensive measure, this energy expresses the concept as an extensive property of the system, often referred to as the thermal energy,Leland, T. W. Jr., Mansoori, G. A., pp. 15, 16. Thermal energy – Hyperphysics. The scaling property between temperature and thermal energy is the entropy change of the system.
Statistical mechanics considers any system to be statistically distributed across an ensemble of N microstates. In a system that is in thermodynamic contact equilibrium with a heat reservoir, each microstate has an energy E_i and is associated with a probability p_i. The internal energy is the mean value of the system's total energy, i.e., the sum of all microstate energies, each weighted by its probability of occurrence:
U = \sum_{i=1}^N p_i \,E_i.
This is the statistical expression of the law of conservation of energy.
Thermodynamics is chiefly concerned with the changes in internal energy \Delta U.
For a closed system, with matter transfer excluded, the changes in internal energy are due to heat transfer Q and due to thermodynamic work W done by the system on its surroundings.This article uses the sign convention of the mechanical work as often defined in engineering, which is different from the convention used in physics and chemistry; in engineering, work performed by the system against the environment, e.g., a system expansion, is taken to be positive, while in physics and chemistry, it is taken to be negative. Accordingly, the internal energy change \Delta U for a process may be written \Delta U = Q - W \quad \text{(closed system, no transfer of matter)}.
When a closed system receives energy as heat, this energy increases the internal energy. It is distributed between microscopic kinetic and microscopic potential energies. In general, thermodynamics does not trace this distribution. In an ideal gas all of the extra energy results in a temperature increase, as it is stored solely as microscopic kinetic energy; such heating is said to be Sensible heat.
A second kind of mechanism of change in the internal energy of a closed system changed is in its doing of work on its surroundings. Such work may be simply mechanical, as when the system expands to drive a piston, or, for example, when the system changes its electric polarization so as to drive a change in the electric field in the surroundings.
If the system is not closed, the third mechanism that can increase the internal energy is transfer of matter into the system. This increase, \Delta U_\mathrm{matter} cannot be split into heat and work components. If the system is so set up physically that heat transfer and work that it does are by pathways separate from and independent of matter transfer, then the transfers of energy add to change the internal energy: \Delta U = Q - W + \Delta U_\text{matter} \quad \text{(matter transfer pathway separate from heat and work transfer pathways)}.
If a system undergoes certain phase transformations while being heated, such as melting and vaporization, it may be observed that the temperature of the system does not change until the entire sample has completed the transformation. The energy introduced into the system while the temperature does not change is called latent energy or latent heat, in contrast to sensible heat, which is associated with temperature change.
Internal energy of the ideal gas
Thermodynamics often uses the concept of the ideal gas for teaching purposes, and as an approximation for working systems. The ideal gas consists of particles considered as point objects that interact only by elastic collisions and fill a volume such that their mean free path between collisions is much larger than their diameter. Such systems approximate monatomic gases such as helium and other . For an ideal gas the kinetic energy consists only of the translational energy of the individual atoms. Monatomic particles do not possess rotational or vibrational degrees of freedom, and are not energy level to higher energies except at very high .
Therefore, the internal energy of an ideal gas depends solely on its temperature (and the number of gas particles): U = U(n,T). It is not dependent on other thermodynamic quantities such as pressure or density.
The internal energy of an ideal gas is proportional to its mass (number of moles) n and to its temperature T
U = C_V n T,
where C_V is the isochoric (at constant volume) molar heat capacity of the gas. C_V is constant for an ideal gas. The internal energy of any gas (ideal or not) may be written as a function of the three extensive properties S, V, n (entropy, volume, mass). In case of the ideal gas it is in the following way
(1985). 9780444877482, North-Holland. ISBN 9780444877482
U(S,V,n) = \mathrm{const} \cdot e^\frac{S}{C_V n} V^\frac{-R}{C_V} n^\frac{R+C_V}{C_V},
where \mathrm {const} is an arbitrary positive constant and where R is the Gas constant. It is easily seen that U is a linearly homogeneous function of the three variables (that is, it is extensive in these variables), and that it is weakly convex function. Knowing temperature and pressure to be the derivatives T = \frac{\partial U}{\partial S}, P = -\frac{\partial U}{\partial V}, the ideal gas law PV = nRT immediately follows as below:
T = \frac{\partial U}{\partial S} = \frac{U}{C_V n}
P = -\frac{\partial U}{\partial V} = U \frac{R}{C_V V}
\frac{P}{T} = \frac{\frac{U R}{C_V V}}{\frac{U}{C_V n}} = \frac{n R}{V}
PV = nRT
Internal energy of a closed thermodynamic system
The above summation of all components of change in internal energy assumes that a positive energy denotes heat added to the system or the negative of work done by the system on its surroundings.
This relationship may be expressed in infinitesimal terms using the differentials of each term, though only the internal energy is an exact differential.
Adkins, C. J. (Clement John) (1983). 9780521254458, Cambridge University Press. ISBN 9780521254458
For a closed system, with transfers only as heat and work, the change in the internal energy is
\mathrm{d} U = \delta Q - \delta W,
expressing the first law of thermodynamics. It may be expressed in terms of other thermodynamic parameters. Each term is composed of an intensive variable (a generalized force) and its conjugate infinitesimal extensive variable (a generalized displacement).
For example, the mechanical work done by the system may be related to the pressure P and volume change \mathrm{d}V. The pressure is the intensive generalized force, while the volume change is the extensive generalized displacement:
\delta W = P \, \mathrm{d}V.
This defines the direction of work, W, to be energy transfer from the working system to the surroundings, indicated by a positive term. Taking the direction of heat transfer Q to be into the working fluid and assuming a reversible process, the heat is
\delta Q = T \mathrm{d}S,
where T denotes the temperature, and S denotes the entropy.
The change in internal energy becomes
\mathrm{d}U = T \, \mathrm{d}S - P \, \mathrm{d}V.
Changes due to temperature and volume
The expression relating changes in internal energy to changes in temperature and volume is
This is useful if the equation of state is known.
In case of an ideal gas, we can derive that dU = C_V \, dT, i.e. the internal energy of an ideal gas can be written as a function that depends only on the temperature.
\mathrm{d}U =C_{V} \, \mathrm{d}T +\leftT\left(\frac{\partial \mathrm{d}V.
The equation of state is the ideal gas law
P V = n R T.
Solve for pressure:
P = \frac{n R T}{V}.
Substitute in to internal energy expression:
dU =C_{V}\mathrm{d}T +\leftT\left(\frac{\partial\mathrm{d}V.
Take the derivative of pressure with respect to temperature:
\left( \frac{\partial P}{\partial T} \right)_{V} = \frac{n R}{V}.
Replace:
dU = C_{V} \, \mathrm{d}T + \left \mathrm{d}V.
And simplify:
\mathrm{d}U =C_{V} \, \mathrm{d}T.
To express \mathrm{d}U in terms of \mathrm{d}T and \mathrm{d}V, the term
\mathrm{d}S = \left(\frac{\partial S}{\partial T}\right)_{V}\mathrm{d}T + \left(\frac{\partial S}{\partial V}\right)_{T} \mathrm{d}V
is substituted in the fundamental thermodynamic relation
This gives
dU = T\left(\frac{\partial S}{\partial T}\right)_{V} \, dT +\leftT\left(\frac{\partial dV.
The term T\left(\frac{\partial S}{\partial T}\right)_{V} is the heat capacity at constant volume C_{V}.
The partial derivative of S with respect to V can be evaluated if the equation of state is known. From the fundamental thermodynamic relation, it follows that the differential of the Helmholtz free energy A is given by
dA = -S \, dT - P \, dV.
The symmetry of second derivatives of A with respect to T and V yields the Maxwell relation:
\left(\frac{\partial S}{\partial V}\right)_{T} = \left(\frac{\partial P}{\partial T}\right)_{V}.
This gives the expression above.
Changes due to temperature and pressure
When considering fluids or solids, an expression in terms of the temperature and pressure is usually more useful:
dU = \left(C_{P}-\alpha P V\right) \, dT +\left(\beta_{T}P-\alpha T\right)V \, dP,
where it is assumed that the heat capacity at constant pressure is related to the heat capacity at constant volume according to
C_{P} = C_{V} + V T\frac{\alpha^{2}}{\beta_{T}}.
The partial derivative of the pressure with respect to temperature at constant volume can be expressed in terms of the coefficient of thermal expansion
\alpha \equiv \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_{P}
and the isothermal compressibility
\beta_{T} \equiv -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_{T}
and equating d V to zero and solving for the ratio d P/d T. This gives
Substituting () and () in () gives the above expression.
Changes due to volume at constant temperature
The internal pressure is defined as a partial derivative of the internal energy with respect to the volume at constant temperature:
\pi _T = \left ( \frac{\partial U}{\partial V} \right )_T.
Internal energy of multi-component systems
In addition to including the entropy S and volume V terms in the internal energy, a system is often described also in terms of the number of particles or chemical species it contains:
U = U(S,V,N_1,\ldots,N_n),
where N_j are the molar amounts of constituents of type j in the system. The internal energy is an extensive function of the extensive variables S, V, and the amounts N_j, the internal energy may be written as a linearly homogeneous function of first degree:
Landau (1980). 9780080230399 ISBN 9780080230399
U(\alpha S,\alpha V,\alpha N_{1},\alpha N_{2},\ldots )
= \alpha U(S,V,N_{1},N_{2},\ldots),
where \alpha is a factor describing the growth of the system. The differential internal energy may be written as
\mathrm{d} U = \frac{\partial U}{\partial S} \mathrm{d} S + \frac{\partial U}{\partial V} \mathrm{d} V + \sum_i\ \frac{\partial U}{\partial N_i} \mathrm{d} N_i\ = T \,\mathrm{d} S - P \,\mathrm{d} V + \sum_i\mu_i \mathrm{d} N_i,
which shows (or defines) temperature T to be the partial derivative of U with respect to entropy S and pressure P to be the negative of the similar derivative with respect to volume V,
T = \frac{\partial U}{\partial S},
P = -\frac{\partial U}{\partial V},
and where the coefficients \mu_{i} are the chemical potentials for the components of type i in the system. The chemical potentials are defined as the partial derivatives of the internal energy with respect to the variations in composition:
\mu_i = \left( \frac{\partial U}{\partial N_i} \right)_{S,V, N_{j \ne i}}.
As conjugate variables to the composition \lbrace N_{j} \rbrace, the chemical potentials are intensive properties, intrinsically characteristic of the qualitative nature of the system, and not proportional to its extent. Under conditions of constant T and P, because of the extensive nature of U and its independent variables, using Euler's homogeneous function theorem, the differential \mathrm d U may be integrated and yields an expression for the internal energy:
U = T S - P V + \sum_i \mu_i N_i.
The sum over the composition of the system is the Gibbs free energy:
G = \sum_i \mu_i N_i
that arises from changing the composition of the system at constant temperature and pressure. For a single component system, the chemical potential equals the Gibbs energy per amount of substance, i.e. particles or moles according to the original definition of the unit for \lbrace N_{j} \rbrace.
Internal energy in an elastic medium
For an elastic medium the mechanical energy term of the internal energy is expressed in terms of the stress \sigma_{ij} and strain \varepsilon_{ij} involved in elastic processes. In Einstein notation for tensors, with summation over repeated indices, for unit volume, the infinitesimal statement is
\mathrm{d}U=T\mathrm{d}S+\sigma_{ij}\mathrm{d}\varepsilon_{ij}.
Euler's theorem yields for the internal energy:.
U=TS+\frac{1}{2}\sigma_{ij}\varepsilon_{ij}.
For a linearly elastic material, the stress is related to the strain by
\sigma_{ij}=C_{ijkl} \varepsilon_{kl},
where the C_{ijkl} are the components of the 4th-rank elastic constant tensor of the medium.
Elastic deformations, such as sound, passing through a body, or other forms of macroscopic internal agitation or turbulent motion create states when the system is not in thermodynamic equilibrium. While such energies of motion continue, they contribute to the total energy of the system; thermodynamic internal energy pertains only when such motions have ceased.
James Joule studied the relationship between heat, work, and temperature. He observed that friction in a liquid, such as caused by its agitation with work by a paddle wheel, caused an increase in its temperature, which he described as producing a quantity of heat. Expressed in modern units, he found that c. 4186 joules of energy were needed to raise the temperature of one kilogram of water by one degree Celsius.
Exergy
Thermodynamic equations
Thermodynamic potentials
Helmholtz free energy
Bibliography of cited references
Adkins, C. J. (1968/1975). Equilibrium Thermodynamics, second edition, McGraw-Hill, London, .
Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, .
Max Born (1949). Natural Philosophy of Cause and Chance, Oxford University Press, London.
Callen, H. B. (1960/1985), Thermodynamics and an Introduction to Thermostatistics, (first edition 1960), second edition 1985, John Wiley & Sons, New York, .
Crawford, F. H. (1963). Heat, Thermodynamics, and Statistical Physics, Rupert Hart-Davis, London, Harcourt, Brace & World, Inc.
Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73–117081.
(1986). 9780750626330, Butterworth Heinemann. ISBN 9780750626330
Münster, A. (1970), Classical Thermodynamics, translated by E. S. Halberstadt, Wiley–Interscience, London, .
Max Planck, (1923/1927). Treatise on Thermodynamics, translated by A. Ogg, third English edition, Longman, London.
Tschoegl, N. W. (2000). Fundamentals of Equilibrium and Steady-State Thermodynamics, Elsevier, Amsterdam, .
Lewis, Gilbert Newton; Randall, Merle: Revised by Pitzer, Kenneth S. & Brewer, Leo (1961). 9780071138093, McGraw-Hill Book Co.. ISBN 9780071138093
Categories: Physical Quantities, Thermodynamic Properties, State Functions, Statistical Mechanics, Energy | CommonCrawl |
How do we determine refractive index of a photonic crystal?
Normally, refractive index of a medium is defined as
$$n=\frac c{v_\text{p}},$$
where $c$ is the speed of light in vacuum, and $v_\text{p}$ is the phase speed of light in the medium. Phase speed is defined as
$$v_\text{p}=\frac{\omega}k,$$
where $\omega$ is the frequency, and $k$ is the wavenumber of light (which is the norm of the wavevector).
But in a photonic crystal we don't have a single well-defined wavevector: we have quasi-wavevector, which is defined up to a vector of the reciprocal lattice. So, how do we then determine the refractive index? Do we simply take the wavevector from the first Brillouin zone and calculate the phase velocity from it? (If yes, how can we justify this?) Or do we actually have multiple refractive indices so that a light beam splits into several beams as if by a diffraction grating?
optics waves refraction crystals
RuslanRuslan
In photonic crystals, the dielectric function $\varepsilon(r)$ plays the role of confining potential for the light, so the refractive index $n = \sqrt{\varepsilon(r)}$ is not uniform at least along one dimension. And in general, the dispersion relation $\omega(k)$ will deviate from the linear dispersion relation, especially near the Brillouin zone boundary. But for a small wavevector, near the zone center, the dispersion relation will be linear, because the light does not "see" the potential variation and thus it only sees an effective average dielectric constant (see the discussion on page 56 on this book, which is freely available to public).
wccwcc
$\begingroup$ This is all fine, but it still doesn't address the ambiguity of wavevector vs quasi-wavevector in relation to the phase speed. $\endgroup$ – Ruslan Jun 10 at 15:18
$\begingroup$ @Ruslan, phase velocity is hard to define exactly because of the ambiguity you mentioned, even when the dispersion relation is approximately linear. You can add reciprocal lattice vector to $k$ and the slope does not change. It's much more helpful to think about group velocity in photonic crystals. $\endgroup$ – wcc Jun 10 at 15:28
$\begingroup$ @Ruslan, your last sentence in OP also hints toward why there is no unique phase velocity in a photonic crystal. There is no unique wavefront to reference and measure the phase, as the multiply scattered waves coherently sum up to make a periodic wavefunction, just like that of electron in a crystalline solid. $\endgroup$ – wcc Jun 10 at 15:31
$\begingroup$ But normal crystals (e.g. diamond) can also be viewed as photonic crystals: they are also periodic, they sum up multiply scattered waves as you say, but still they do have a well-defined phase speed, don't they? $\endgroup$ – Ruslan Jun 10 at 15:36
$\begingroup$ @Ruslan, or do you mean diamonds have a well-defined dielectric constant in an average sense as I described? Any dielectric material has well-defined group velocity but you can't say the same for phase velocity. And I am not sure diamonds can be viewed as a photonic crystal - the carbon atoms are only angstroms apart, so the light mostly sees an average uniform dielectric material. $\endgroup$ – wcc Jun 10 at 16:05
I would not say that you have ill-defined wave-vectors. As you mention, commonly, when doing an analysis in the First Brillouin Zone we obtain the wavenumber modulo $\pi/(2 a)$, being $a$ the lattice parameter.
For a bilayer material, we have the following dispersion relation
$$\cos(\kappa a) = \cos\left(\frac{\omega a}{2 c_1}\right) \cos\left(\frac{\omega a}{2 c_2}\right) - \frac{c_1^2 + c_2^2}{2 c_1 c_2}\sin\left(\frac{\omega a}{2 c_1}\right) \sin\left(\frac{\omega a}{2 c_2}\right)\, .$$
If you plot the solutions over the first Brillouin zone you would obtain the following figure.
If instead, you invert the relationship and consider the shift for each branch you end up with the following figure.
Now, if you ask me, the refractive index is a material parameter and you would need to average to obtain it. Thus, it is a function of frequency and, in general, is anisotropic what leads to a tensor rather than a single scalar:
$$n = [n_{ij}(\omega)]\, .$$
nicoguaronicoguaro
Not the answer you're looking for? Browse other questions tagged optics waves refraction crystals or ask your own question.
How to derive inverse Fourier transform for periodic functions (in crystal lattice)?
Optical Bloch oscillation
Refractive index of dielectric in different frames of reference
Why does in the definition of 'optical path', only spatial phase change is taken into consideration & not that of time?
What is the equation of motion for multiple simultaneous pressure waves in a medium? (In the context of stimulated Brillouin scattering)
Special Relativity, refractive index and catching up with a wave
Phase-shifting by Acousto-Optic modulators
What really is the speed of light in a medium/vacuum, group or phase velocity?
What is the relationship between directions in reciprocal and real space of a photonic crystal?
Fraunhofer diffraction problem in Python: How to interpret discrete Fourier transform (DFT) spectrum? | CommonCrawl |
Why does the fine-structure constant $α$ have the value it does?
This is a follow-up to this great answer.
All of the other related questions have answers explaining how units come into play when measuring "universal" constants, like the value of the speed of light, $c$. But what about the fine-structure constant, $α$? Its value seems to come out of nowhere, and to cite the previously-linked question:
And this is, my friend, a real puzzle in physics. Solve it to the bottom, and you will win yourself a Nobel Prize!
I imagine we have clues about the reason of the value of the fine-structure constant. What are they?
electromagnetism quantum-electrodynamics renormalization physical-constants
David Z
MagixMagix
$\begingroup$ Related: physics.stackexchange.com/q/2725/2451 $\endgroup$ – Qmechanic♦ Nov 11 '18 at 5:13
$\begingroup$ related: physics.stackexchange.com/a/377449/137409 $\endgroup$ – dlatikay Nov 11 '18 at 12:44
No one knows, and, at the moment, there is no realistic prospect of computing the fine-structure constant from first principles any time soon.
We do know, however, that the fine-structure constant isn't a constant! It in fact depends on the energy of the interaction that we are looking at. This behaviour is known as 'running'. The well-known $\alpha \simeq 1/137$ is the low-energy limit of the coupling. At e.g., an energy of the Z-mass, we find $\alpha(Q=M_Z)\simeq 1/128$. This suggests that there is nothing fundamental about the low-energy value, since it can be calculated from a high-energy value.
In fact, we know more still. The fine-structure constant is the strength of the electromagnetic force, which is mediated by massless photons. There is another force, the weak force, mediated by massive particles. We know that at high energies, these two forces become one, unified force. Thus, once more, we know that the fine-structure constant isn't fundamental as it results from the breakdown of a unified force.
So, we can calculate the fine-structure constant from a high-energy theory in which electromagnetism and the weak force are unified at high-energy (and perhaps unified with other forces at the grand-unification scale).
This does not mean, however, that we know why it has the value $1/137$ at low energies. In practice, $\alpha \simeq 1/137$ is a low-scale boundary condition in theories in which the forces unify at high-energy. We know no principled way of setting the high-energy values of the free parameters of our models, so we just tune them until they agree sufficiently with our measurements. In principle it is possible the high-scale boundary condition could be provided by a new theory, perhaps a string theory.
$\begingroup$ Pet peeve of mine in this answer: it is not true that the electromagnetic and weak forces unify before the GUT scale. It's not even close. The name "electroweak unification" is a misnomer that refers only to the fact that the SM includes both forces; it does not mean the fields are unified like in grand unification. $\endgroup$ – knzhou Nov 11 '18 at 9:58
$\begingroup$ Yeah I know what you mean. How do you briefly explain or refer to 'EW unification' though? $\endgroup$ – innisfree Nov 11 '18 at 10:01
$\begingroup$ Maybe it is interesting that the part of the fine structure constant that is changing with energy is the elementary charge e. $\endgroup$ – asmaier Nov 11 '18 at 10:57
$\begingroup$ Maybe you could emphasize that the $U(1)_{\text{EM}}$ coupling doesn't just split off from some one "unified" electroweak coupling, but rather is a combination of two independent couplings, the $SU(2)_L$ and $U(1)_Y$. I think that actually shows the arbitrariness better. $\endgroup$ – knzhou Nov 11 '18 at 10:58
$\begingroup$ @asmaier Can you give a reference for this ? As far as I know, elementary charge is a fundamental constant, like h and c. $\endgroup$ – my2cts Nov 11 '18 at 14:30
One theory is that we live in a multiverse where physical constants such as $\alpha$ are different in different universes. This theory is speculative but based on plausible physics such as cosmic inflation and the large number of different vacuum states believed to exist in string theory.
We happen to live in a child universe with a small-but-not-too-small value of the fine structure constant, because such a value is compatible with the existence of the periodic table, organic chemistry, and life, while signifcantly different values are not.
G. SmithG. Smith
I know for sure why it is about 1/137. Because it never comes alone, but with some other dimensionless combinations of a problem parameters, so its value is only a part of a whole expression.
Some say it determines the strength of EM interaction. Let us see. We will proceed from QED, which is QM of Electrodynamics. And QM, you may like it or not, is first about probabilities and only then about energies. What is the probability of radiating a soft photon while charge scattering? It is unity (p=1). Let me cite Akhiezer-Berestetski QED textbook:
You see, alpha itself never comes alone, except for the Hydrogen spectrum problem considered first by Sommerfeld and then by Dirac. Alpha itself is small since the Hydrogen electrons have much smaller velocity than $c$ ($\alpha=v_0/c$). In heavier Hydrogen-like ions ($Z>1$) the ground state electron velocity is larger than $v_0=e^2/\hbar$, so alpha is not "alone" and the answer is context dependent.
Vladimir KalitvianskiVladimir Kalitvianski
Not the answer you're looking for? Browse other questions tagged electromagnetism quantum-electrodynamics renormalization physical-constants or ask your own question.
Why does the vacuum even have permeability and permittivity?
Where does the fine structure constant come from?
Is the Fine Stucture constant constant?
Is the fine structure constant actually a constant or does its value depend on the energy scale?
Where is the fine-structure constant in this list?
Fine structure constant definition
How does the Fine Structure Constant Vary with Energy?
About the fine structure constant value | CommonCrawl |
> astro-ph > arXiv:1508.06186v1
arXiv:1508.06186v1 (astro-ph)
Title:Galaxy And Mass Assembly (GAMA): The Bright Void Galaxy Population in the Optical and Mid-IR
Authors:S. J. Penny, M. J. I. Brown, K. A. Pimbblet, M. E. Cluver, D. J. Croton, M. S. Owers, R. Lange, M. Alpaslan, I. Baldry, J. Bland-Hawthorn, S. Brough, S. P. Driver, B. W. Holwerda, A. M. Hopkins, T. H. Jarrett, D. Heath Jones, L. S. Kelvin, M. A. Lara-Lopez, J. Liske, A. R. Lopez-Sanchez, J. Loveday, M. Meyer, P. Norberg, A. S. G. Robotham, M. Rodrigues
Abstract: We examine the properties of galaxies in the Galaxies and Mass Assembly (GAMA) survey located in voids with radii $>10~h^{-1}$ Mpc. Utilising the GAMA equatorial survey, 592 void galaxies are identified out to z~0.1 brighter than $M_{r} = -18.4$, our magnitude completeness limit. Using the $W_{\rm{H\alpha}}$ vs. [NII]/H$\alpha$ (WHAN) line strength diagnostic diagram, we classify their spectra as star forming, AGN, or dominated by old stellar populations. For objects more massive than $5\times10^{9}$ M$_{\odot}$, we identify a sample of 26 void galaxies with old stellar populations classed as passive and retired galaxies in the WHAN diagnostic diagram, else they lack any emission lines in their spectra. When matched to WISE mid-IR photometry, these passive and retired galaxies exhibit a range of mid-IR colour, with a number of void galaxies exhibiting [4.6]-[12] colours inconsistent with completely quenched stellar populations, with a similar spread in colour seen for a randomly drawn non-void comparison sample. We hypothesise that a number of these galaxies host obscured star formation, else they are star forming outside of their central regions targeted for single fibre spectroscopy. When matched to a randomly drawn sample of non-void galaxies, the void and non-void galaxies exhibit similar properties in terms of optical and mid-IR colour, morphology, and star formation activity, suggesting comparable mass assembly and quenching histories. A trend in mid-IR [4.6]-[12] colour is seen, such that both void and non-void galaxies with quenched/passive colours <1.5 typically have masses higher than $10^{10}$ M$_{\odot}$, where internally driven processes play an increasingly important role in galaxy evolution.
Comments: 22 pages, 12 figures. Accepted for publication in MNRAS
Cite as: arXiv:1508.06186 [astro-ph.GA]
(or arXiv:1508.06186v1 [astro-ph.GA] for this version)
Related DOI: https://doi.org/10.1093/mnras/stv1926
From: Samantha Penny Dr [view email]
[v1] Tue, 25 Aug 2015 15:10:25 UTC (10,969 KB) | CommonCrawl |
Optimization of alkaline pretreatment and enzymatic hydrolysis for the extraction of xylooligosaccharide from rice husk
Nuntawat Khat-udomkiri1,
Bhagavathi Sundaram Sivamaruthi1,
Sasithorn Sirilun1,
Narissara Lailerd1,2,
Sartjin Peerajan3 &
Chaiyavat Chaiyasut1
AMB Express volume 8, Article number: 115 (2018) Cite this article
Rice husk (RH) is the major agricultural waste obtained during rice hulling process, which can be a sustainable source of xylooligosaccharide (XOS). The current study deals with the production of XOS from Thai rice husk using alkaline pretreatment and enzyme hydrolysis method. The response surface methodology consisted of central composite design and Box–Behnken design was employed to achieve the maximum response in alkaline pretreatment and XOS production, respectively. The optimum conditions for alkaline pretreatment to recover maximum xylan yield were 12–18% of alkaline concentration, the temperature at 110–120 °C, and steaming time for 37.5–40 min. The FTIR results suggested that the extracted sample was the xylan fraction. The maximum XOS production of 17.35 ± 0.31 mg XOS per mL xylan was observed in the run conditions of 6.25 mg enzyme per g xylan, 9 h of incubation time, and 5% of xylan. The results revealed that the xylan extracted from RH by using an effective base couple with the steam application and the enzymatic hydrolysis help to maximize the yield of XOS, which can be further used in functional foods and dietary supplements.
Thailand is one of the leading rice producer and exporter. As per the US Department of Agriculture's Foreign Agricultural Service report, Thailand may produce about 20.4 million tons of rice during 2017–2018. Rice husk (RH) is the major agricultural waste obtained during rice hulling process, and the clearance of agricultural waste by firing cause air pollution. The burning of rice straw in Thailand was attributed to the release of 0.18% of greenhouse gas (Gadde et al. 2009).
RH is composed of cellulose (25%), hemicellulose (25%), lignin (20%), ash (17%), and crude protein (3%) (Ugheoke and Mamat 2012). The major reducing sugar after acid hydrolysis of rice husk is composed of 17.35% of xylose and 4.42% of glucose (Banerjee et al. 2009). Xylooligosaccharide (XOS) is the molecule containing straight chains of xylose connected by β-1,4-glycosidic linkage. In general, linked minimum of 2–20 xylose molecules are consider as XOS (Vázquez et al. 2000). XOS is well known prebiotic molecule that can selectively support the growth and stimulate the activity of beneficial gut microbiota, particularly Bifidobacterium spp. thereby it confers the health benefits to host (Finegold et al. 2014; Gibson and Roberfroid 1995). XOS reported for anti-cancer, anti-microbial, antioxidant, anti-allergic, anti-infection, anti-inflammatory activities, immunomodulatory, and cholesterol-lowering property (Aachary and Prapulla 2011; Mumtaz et al. 2008). Thus, XOS has been accepted as nutraceutical and feed additive.
The conversion of agricultural waste into useful product reduce the problem of management, treatment, and disposal of agricultural residues. The lignocellulosic rich agricultural residues such as corn cob (Samanta et al. 2012a; Boonchuay et al. 2014), corn stalks (Ergues et al. 2012), sugarcane bagasse (Jayapal et al. 2013), cotton stalks, wheat straw, sunflower stalks, tobacco stalks (Akpinar et al. 2009a), pigeon pea stalks (Samanta et al. 2013), and green coconut husks (Jayapal et al. 2014) have been used as raw material for XOS production.
Autohydrolysis, chemical process (acid or alkaline solution treatment), and chemical pretreatment for xylan extraction couple with enzymatic hydrolysis is the possible way to extract XOS from lignocellulosic substrates (Qing et al. 2013). Autohydrolysis is the process for XOS production by heating the lignocellulosic materials with water in specific equipment under controlled condition, while acid hydrolysis is cost effective. Both the techniques produce XOS contaminated with other undesirable substances like lignin, monosaccharides, and furfural (an aldehyde of furan), which cause serious adverse effects like respiratory irritation, lung congestion, hyperplasia, kidney and olfactory epithelial damage, oedema, and inflammation. So further purification process is required to remove the unwanted substances in the product. Nowadays, chemical pretreatment for xylan extraction followed by enzymatic hydrolysis to obtain XOS is an often-preferable method, due to the cost-effective and reduced production time. The alkaline extraction is the important practice to separate xylan from other lignocellulosic materials, which further enhances the efficiency of the enzyme during XOS production (Carvalho et al. 2013; Samanta et al. 2015). In addition, alkaline pretreatment can improve substrate digestibility, the major desired characteristic in XOS production by enzymatic hydrolysis (McIntosh and Vancov 2011).
The current study was aimed to optimize the conditions to achieve high xylan yield in alkaline pretreatment from RH and XOS production by enzymatic hydrolysis.
The rice husk (RH) was obtained from local organic farm in Chiang Mai province, Thailand. The food grade 1,4 beta xylanase (Pentopan™ MonoBG) was purchased from Novozymes, Denmark. Xylooligosaccharide standards including xylobiose, xylotriose, xylotetraose, and xylopentaose were obtained from Megazyme, Bray, Ireland. Xylose and arabinose standards were obtained from Wako Pure Chemical Industries, Osaka, Japan. The ion exclusion chromatography column, Shodex SUGAR SH1011, was purchased from Showa Denko K.K., Tokyo, Japan. Other chemicals used in the study were of analytical grade.
Rice husk preparation
The RH was dried at 50 °C for 12 h, and powdered by the mechanical blender. The milled RH was sieved through 0.595 mm size (No. 30) siever, and stored at room temperature until use.
Optimization of alkaline pretreatment of RH, and XOS production by RSM
Response surface methodology (RSM) and central composite design (CCD), and RSM and Box–Behnken design (BBD) were used to optimize the condition for alkaline pretreatment of RH, and XOS production using the statistical software package Design Expert, version 10.0., respectively (Stat-Ease Inc., Minneapolis, MN, USA). The basics and statistical analyses of RSM, and its applications in designing the experiment has reported previously (Woraharn et al. 2015, 2016; Chaiyasut et al. 2017). The recovery of xylan was the desired response after RH pretreatment. The concentration of alkaline (6–18%), steaming time (15–45 min), and steaming temperature (80–120 °C) have been selected as variable factors to achieve high yield of xylan as per the previous literature (Jayapal et al. 2013; Samanta et al. 2012a, b, 2013). A sum of 20 independent experiments composed of 14 combinations and six center point replicates were performed for alkaline pretreatment (Table 1), and 17 independent experiments performed XOS optimization. The following equation was used in CCD and BBD model.
$${\text{Y}} =\upbeta_{0} +\upbeta_{1} {\text{X}}_{1} +\upbeta_{2} {\text{X}}_{2} +\upbeta_{3} {\text{X}}_{3} +\upbeta_{11} {\text{X}}_{1}^{2} +\upbeta_{22} {\text{X}}_{2}^{2} +\upbeta_{33} {\text{X}}_{3}^{2} +\upbeta_{12} {\text{X}}_{1} {\text{X}}_{2} +\upbeta_{13} {\text{X}}_{1} {\text{X}}_{3} +\upbeta_{23} {\text{X}}_{2} {\text{X}}_{3}$$
where Y is the predicted response, β0 is model constant, βi (β1–3) is linear coefficients, βii (β11–33) is quadratic coefficients, βij (i.e. β12) is cross product coefficients; X1, X2 and X3 are independent variables.
Table 1 The variable factors and predicted and observed xylan recovery
For the optimization of XOS production, the enzyme concentration (X1) (mg/g xylan), incubation time (X2) (h), and xylan concentration (X3) (%w/v) were acted as a variable factor while the concentration of XOS as a desirable outcome. The range of enzyme concentration, incubation time, and xylan concentration are 6–12 h, 15–45 min, and 80–120 °C, respectively (Chapla et al. 2012). The experiments were carried out in triplicates.
Alkaline pretreatment of rice husk
The alkaline pretreatment to RH was carried out in duplicate according to Samanta et al. (2012a). Briefly, RH powder and alkaline solution (1.91–22.09% NaOH depended on the CCD model) was mixed at the ratio of 1:10 (at different alkaline concentration), and subjected to steaming process at various temperature for various duration based on the CCD model. After incubation, the solution was centrifuged at 5000 rpm for 20 min, and the supernatant was acidified with glacial acetic acid until the solution reached pH 5. Then, three volumes of 95% ice-cold ethanol were added to precipitate the xylan fraction and centrifuged at 4480×g for 10 min at 4 °C. The xylan precipitate was collected and dried at 55–65 °C until reach the constant weight. The dried pellet was weighed and stored at room temperature. The exact and relative yield of xylan was calculated according to formula 2.
$${\text{Xylan recovery (\%)}} = {\text{Dry weight of extracted xylan (g)}}/{\text{Weight of the sample (g)}} \times 100$$
The optimum condition to recover maximum xylan yield was used for the bulk xylan production and further analysis.
Analysis of extracted xylan
Hemicellulose quantification of extracted xylan was carried out according to the method of National Renewable Energy Laboratory (NREL) (Sluiter et al. 2008). The hydrolysate was filtered through 0.2 μm syringe filter to remove the residues and then determined by HPLC. The sugar sample was subjected to HPLC equipped with refractive index detector (Model 2414, Waters Corporation) using Shodex SUGAR SH1011 column. The analytical column was maintained at 60 °C. The samples were eluted with 0.05 M sulfuric acid with the flow rate of 1.0 mL/min. The concentration of sugars was determined using peak area of the standard (a mixture of xylose, mannose, and galactose), and the concentration of each sample was expressed as XGM (xylan + galactan + mannan) content (%) dry hemicellulose basis (Gao et al. 2014).
The functional group identification of alkaline extracted xylan was done using Fourier transform infrared spectrophotometer (Thermo Nicolet, Nexus 470 FT–IR) at spectral range of 400–4000, 4 cm−1 resolutions, and DTGS with a KBr window detector. One mg of alkaline extracted xylan was used for FTIR analysis (Samanta et al. 2012a).
Enzyme assay
The β-1,4-xylanase activity of commercial enzyme was carried out by using 1% birchwood xylan (Sigma, St. Louis, MO, USA) solution in citric acid—Na2HPO4 buffer (pH 6.0) as substrates. An equal volume of both commercial xylanase solution and substrate was incubated at 50 °C for 10 min (Bailey et al. 1992). At the indicated interval, the reaction was stopped by adding 3,5-dinitrosalicylic acid (DNS) solution and reducing sugar was quantified (Miller 1959). One unit of the xylanase activity (U) was liberated as the amount of xylanase that released 1 μmol of reducing sugar from the substrate xylan per min at pH 6.0, 50 °C.
Enzymatic hydrolysis of alkaline extracted xylan and XOS determination
Enzymatic hydrolysis of rice husk xylan was done by adding 1 mL of commercial xylanase solution in 9 mL of alkaline extracted xylan solution in 50 mM citric acid-Na2HPO4 buffer (pH 6.0), and incubated at 50 °C for various time interval based on BBD model. After incubation reaction was arrested by placing the reaction tube in boiling water bath for 5 min. Then, the XOS mixtures were centrifuged at 6000 rpm for 10 min and filtered through 0.2 μm syringe filter to remove the residues. The samples were quantified by HPLC equipped with refractive index detector (Model 2414, Waters Corporation) using Shodex SUGAR SH1011 column. The samples were eluted with 0.05 M sulfuric acid with flow rate of 0.8 mL/min. The concentration of sugars was determined using peak area ratio of the mixture xylooligosaccharide standards including xylobiose (X2), xylotriose (X3), xylotetraose (X4), and xylopentaose (X5). Glycerol was used as the internal standard. The xylooligosaccharide concentration of samples was expressed as mg XOS/mL xylan.
The alkaline pretreatment of rice husk and enzymatic hydrolysis of alkaline pretreated xylan were carried out in duplicates and triplicates, respectively. All values were expressed as mean ± SD. The difference between the group means was analyzed by one-way analysis of variance (ANOVA). The differences were considered significant at P < 0.05.
Optimization of alkaline pretreatment of RH for xylan recovery
The effect of alkaline concentration (X1), steaming time (X2), and steaming temperature (X3) of RH on the yield were determined by three factors central composite design. Twenty independent experiments with six center points were performed. The predicted recovery of all the center points (X1 = 12%; X2 = 30 min; X3 = 100 °C) were 13.87%, whereas the actual experimental values were varied. About 16.24 ± 0.53, 15.07 ± 2.24, 13.67 ± 1.31, 17.20 ± 4.30, 10.69 ± 0.01, and 10.16 ± 1.19% of recovery was observed in the run numbers 5, 6, 8, 9, 11, and 19, respectively (Table 1).
The analysis of variance, regression coefficients, and response surface plots were carried out using design expert. The analysis of variance of a quadratic model for the yield of alkaline extracted xylan was shown in Table 2.
Table 2 Analysis of variance for quadratic model of alkaline treatment of rice husk
The regression model for xylan extraction was significant (P < 0.0001) with appreciable R2 (96.16%) and adjusted R2 (92.70%), and non-significant lack of fit (P = 0.1600). The results suggested that the model equation was adequate for the prediction of recovery yield of alkaline extracted xylan from RH. The CCD-generated a quadratic equation for xylan recovery yield (Y) was as follows:
$${\text{Y}} = 1 7 4. 5 9 2+ 2. 2 8 5 {\text{X}}_{ 1} - 3. 1 1 3 {\text{X}}_{ 2} - 3. 2 8 6 {\text{X}}_{ 3} - 0.0 4 3 {\text{X}}_{ 1}^{ 2} + 0.00 6 {\text{X}}_{ 2}^{ 2} + 0.0 1 6 {\text{X}}_{ 3}^{ 2} + 0.0 3 2 {\text{X}}_{ 1} {\text{X}}_{ 2} - 0.0 20{\text{X}}_{ 1} {\text{X}}_{ 3} + 0.0 2 8 {\text{X}}_{ 2} {\text{X}}_{ 3} \ldots$$
The estimated regression coefficients revealed that the steaming temperature (P < 0.0001) and steaming time (P = 0.0003) were had a significant impact in alkaline pretreatment and xylan yield (Table 3).
Table 3 Estimated regression coefficients for alkaline pretreatment from rice husk
In addition, the steaming temperature and steaming time also exhibited the synergistic effect (P = 0.001) during alkaline pretreatment (Fig. 1). The results suggested that both the factors, steaming time and temperature, have elevated the effect on the recovery of alkaline extracted xylan. The maximum xylan yield was, as per Table 1, observed in the run conditions 1, 4, and 20 with the yield of 54.49 ± 0.61, 44.39 ± 3.42, and 48.55 ± 2.12%, respectively. The optimum conditions for alkaline pretreatment to recover maximum xylan yield were 12–18% of alkaline concentration, the temperature at 110–120 °C, and steaming time for 37.5–40 min.
Response surface plot for alkaline pretreatment of rice husk describing the interaction of independent variables such as alkaline concentration, steaming temperature and steaming time. a Effect of alkaline concentration and streaming time, b effect of alkaline concentration and streaming temperature, and c effect of streaming time and streaming temperature
FTIR analysis
The extracted xylan was analyzed by FTIR. The FTIR fingerprint pattern was compared with standard beechwood xylan (Fig. 2). FTIR spectra of the sample were closely similar to the standard xylan fraction with high bands at 1600–1000 cm−1. The broad bands were noticed between 3600 and 3200 cm−1. The results clearly suggested that the extracted sample was xylan fraction.
The FTIR pattern of extracted xylan
Optimization of xylooligosaccharide (XOS) production
The influence of enzyme (xylanase) concentration, incubation time, and xylan concentration on XOS production was evaluated. The predicted XOS production of all the center points (enzyme concentration = 3.75 mg/g of xylan; incubation time = 9 h; xylan concentration = 3%) were 13.19 mg XOS per mL of xylan, whereas the experimental values were varied. About 12.98 ± 0.01, 12.96 ± 0.65, 13.18 ± 0.47, 13.42 ± 0.47, and 13.44 ± 0.24 mg XOS per mL xylan were recovered in run numbers 1, 3, 6, 8 and 14, respectively (Table 4).
Table 4 The variable factors and predicted and observed xylooligosaccharide yield
The analysis of variance of a quadratic model for the XOS production was shown in Table 5. The regression model for xylan extraction was significant (P < 0.0001) with appreciable R2 (99.47%) and adjusted R2 (98.80%), and non-significant lack of fit (P = 0.9278). The results suggested that the model equation was adequate for the prediction of XOS production from xylan by enzyme treatment. The BBD generated a quadratic equation for XOS (Y) production from alkaline extracted xylan of RH was as follows:
Table 5 Analysis of variance for quadratic model of xylooligosaccharide production from xylan
$${\text{Y}} = 1 6. 2 9 2- 0.0 9 8 {\text{X}}_{ 1} + 0.00 5 {\text{X}}_{ 2} - 2. 80 7 {\text{X}}_{ 3} + 0.0 1 4 {\text{X}}_{ 1}^{ 2} + 0.00 4 {\text{X}}_{ 2}^{ 2} + 0. 5 80{\text{ X}}_{ 3}^{ 2} - 0.0 1 3 {\text{X}}_{ 1} {\text{X}}_{ 2} + 0.0 3 9 {\text{X}}_{ 1} {\text{X}}_{ 3} - 0.00 4 {\text{X}}_{ 2} {\text{X}}_{ 3} \ldots .$$
The estimated regression coefficients revealed that the xylan concentration (P < 0.0001) had a strong significant impact on XOS production (Table 6). The interactions of all the variables and its impact on XOS production has been represented as response surface plot (Fig. 3). The results suggested that the enzyme concentration and incubation time exhibited a strong interaction effect (P = 0.0468) on XOS production (Fig. 3). The maximum XOS production was observed in the run conditions 7, 2, 13, and 4 with the yield of 17.35 ± 0.31, 17.32 ± 0.79, 17.29 ± 0.22, and 16.94 ± 0.32 mg XOS per mL xylan, respectively.
Table 6 Estimated regression coefficients for xylooligosaccharide production from xylan
Response surface plot for enzyme hydrolysis of xylan describing the interaction of independent variables such as enzyme concentration, xylan concentration, and incubation time. a Effect of enzyme concentration and incubation time, b effect of enzyme concentration and xylan concentration, and c effect of incubation time and xylan concentration
XOS has reported for the prebiotic properties, and people are interested in XOS because of its health benefits (Lin et al. 2016). The production of XOS from rice husk (RH) is one of the effective ways to utilize the agricultural wastes. Even though several methods are available to extract xylan from RH, alkaline pretreatment has been used commonly. Alkaline pretreatment accelerates the breakdown of lignocellulosic biomass by smiting the ester bonds in lignin and hemicellulose, which further facilitates the xylan and lignin solubility (Akpinar et al. 2009b; Samanta et al. 2012a).
In this study, NaOH solution was used for xylan extraction, since it showed the higher yield when compared to other strong bases like KOH (Jayapal et al. 2013; Samanta et al. 2012a, b, 2013). The steaming time and steaming temperature are the critical factors that influence the xylan yield. About 54.49 ± 0.61% of xylan yield was observed with alkaline concentration, steaming time, and temperature of 12%, 30 min, and 133.64 °C, respectively (Table 1). To confirm the composition of sugar in alkaline pretreated xylan, the hemicellulose quantification was identified and reported as the XGM content. The highest xylan extraction yield in the run conditions of 12% NaOH concentration, 30 min of steaming time, and 133.64 °C of steaming temperature exhibited the 58.07% of XMG content per total hemicellulose content. The effect of alkaline concentration, temperature, and extraction time on the recovery yield of hemicellulose has been investigated in several previous studies (Nasir and Saleh 2016; Yilmaz et al. 2012; Zhou et al. 2013). In the current study, the increasing NaOH concentration was not significantly increased the xylan extraction yield (P = 0.3541). This result is consistent with the increasing NaOH concentration in the range of 10–20% exhibited the xylan extraction yield from 38.4 to 42.5%. However, this increasing of xylan extraction yield did not statistically important (Yılmaz et al. 2012). The previous report suggested that increasing of extraction temperature from 70 to 95 °C were raised exhibited the yield of hemicellulose extraction (Cheng et al. 2011; Zhou et al. 2013). In this study, the increased extraction temperature significantly influenced on extraction yield of xylan in the opposite manner (P < 0.0001). Additionally, the increased steaming time significantly reduced the alkaline extracted xylan yield (P = 0.0003) which is consistent with the earlier study (Yılmaz et al. 2012). Practically, the confirmation of CCD model usually performed to prove the predicted value in the real situation. The alkaline pretreatment condition was changed as 6.47%, 15 min, and 80 °C of alkaline concentration, steaming time, and steaming temperature, respectively, and the yield of xylan was found as 10.32 ± 0.47%. This actual result exhibited the non-significant data from the predicted value of xylan yield (P = 0.4416). The result suggested that the quadratic equation was the effective prediction for the xylan extraction.
The FTIR spectra enable to check the purity and identity of a biomolecule in addition to the indication for the presence of functional groups (Faix 1991; Gonçalves and Ruzene 2001; Ruzene et al. 2008). We performed the FTIR analysis of extracted xylan, and the FTIR pattern showed the high similarity between xylan sample and beechwood xylan (Fig. 2). The rice husk xylan specific bands were observed at 1600–1000 cm−1, whereas the specific bands of wheat straw xylan fall in the range of 1100–1000 cm−1 region (Ruzene et al. 2008). The broad bands were noticed at a wavelength of 3600–3200 cm−1 that was due to the presence of hydroxyl group in the samples. The same band pattern was reported previously (Chaikumpollert et al. 2004; Ruzene et al. 2008). The absorbance at 3422, 2927, 1421, 1251, 1166, 1049, 986 and 897 cm−1 are associated with xylan. The bands between 1166 and 1000 cm−1 are signature bands of xylan. The bands at 897 cm−1 are the indication of C1 group frequency or ring frequency, was observed in extracted xylan and standard xylan. It is the features of beta xylosidic bonds of each sugar monomers (Gupta et al. 1987). The bands between 1200 and 1000, 897 cm−1, and around 1046 cm−1 were attributed to 4-O-methylglucuronoxylan, β-glycosidic linkages between the sugar moieties, and C–O, C–C stretching or C–OH bending in hemicelluloses, respectively. The absence of pectin in the extracted xylan can be substantiated by the absence of band at 1520 cm−1 (Kačuráková et al. 1999; Kacurakova et al. 1994; Gupta et al. 1987). The FTIR pattern of the sample revealed that the extracted xylan did not present the pectin in their structure.
The xylan is the desirable substance used for further hydrolysis processes including acid and enzymatic hydrolysis to recover XOS with various degree of polymerization. The enzymatic hydrolysis method has been accepted widely to reduce the use of corrosive chemicals and solvents. Furthermore, acid hydrolysis process needs controlled environment to operate and also produce some unwanted substances like monosaccharide and furfural. Due to the specificity of the enzyme, the unwanted contamination was less in enzyme hydrolysis procedure (Aachary and Prapulla 2011; Samanta et al. 2015).
The maximum XOS production of 17.35 ± 0.31 mg XOS per mL xylan was observed in the run conditions of 6.25 mg per g xylan of the enzyme, 9 h of incubation time, and 5% of xylan (Table 4). Several studies have been reported the important factors influenced the production yield of XOS such as pH, incubation temperature, incubation time, enzyme dose, and substrate concentration. In this study, three factors including enzyme dose, substrate concentration, and reaction time are the interesting factors for XOS production by the commercial enzyme. From the previous reports, enzyme dose was varied from 2 to 200 U from XOS production presented in previous studies (Brienzo et al. 2010; Chapla et al. 2012; Gowdhaman and Ponnusami 2015; Jayapal et al. 2013; Samanta et al. 2014; Siti-Normah et al. 2012). The increasing of enzyme concentration from 2.65 to 13.25 U were significantly increased the reducing sugars after enzyme hydrolysis (Samanta et al. 2014, 2016). In the present study, the decreasing enzyme concentration exhibited the decreased XOS yield. However, the enzyme dose did not a significant factor after the estimated regression coefficients analysis (P = 0.5936). Incubation time is one of the factors that directly influenced by XOS production yield. Similar to this study, the increasing incubation time showed the raised of XOS production yield (Samanta et al. 2014, 2016). The effect of substrate concentration for XOS production was carried out by different concentration of xylan. When xylan concentration was increased, the XOS production yield significantly reduced (P < 0.0001). This result agreed with the previous observations of other authors revealed that the dissolution property of xylan reduced when increasing the substrate concentration reflected the decreased of XOS yields, owing to decrease the enzyme activity by the present of impurities in the substrate as well as increase viscosity of substrate solution. Furthermore, the reduction of water content in the medium presented in high concentration of xylan reflected the decreased in XOS production yields (Gowdhaman and Ponnusami 2015; Siti-Normah et al. 2012). The confirmation of BBD model was proved by changing the conditions for XOS production as enzyme concentration (1.25 mg/g substrate), incubation time (2 h), and xylan concentration (1%), and the actual XOS yield (13.84 ± 0.29 mg XOS per mL xylan) was not significantly different from its predicted value (14.0011 mg XOS per mL xylan) from the quadratic equation (P = 0.4249). These indicated that the generated equation (Eq. 4) can be used to predict the optimal operation conditions to produce XOS from xylan.
The xylan was extracted efficiently by alkaline pretreatment coupled with the steam application. The steaming time, steaming temperature, the interaction between steaming temperature and steaming time, and interaction of steaming temperature, were the significant factors that directly influence on recovery yield of alkaline extracted xylan. The FTIR analysis showed a typical signal pattern for the hemicellulosic factions. The XOS production was significantly (P < 0.05) influenced by xylan concentration, the interaction between enzyme concentration and incubation time, and interaction of xylan concentration. Also, the results revealed that the xylan extracted from RH as an effective base couple with the steam application and the enzymatic hydrolysis help to maximize the yield of XOS, which can be further used in functional foods and dietary supplements.
BBD:
Box–Behnken design
CCD:
central composite design
3,5-dinitrosalicylic acid
FTIR:
Fourier-transform infrared spectroscopy
HPLC:
KOH:
NaOH:
NREL:
National Renewable Energy Laboratory
rice husk
RSM:
response surface methodology
XGM:
Xylan + Galactan + Mannan
XOS:
xylooligosaccharide
Aachary AA, Prapulla SG (2011) Xylooligosaccharides (XOS) as an emerging prebiotic: microbial synthesis, utilization, structural characterization, bioactive properties, and applications. Compr Rev Food Sci Food Saf 10(1):2–16
Akpinar O, Erdogan K, Bostanci S (2009a) Production of xylooligosaccharides by controlled acid hydrolysis of lignocellulosic materials. Carbohydr Res 344(5):660–666
Akpinar O, Erdogan K, Bostanci S (2009b) Enzymatic production of xylooligosaccharide from selected agricultural wastes. Food Bioprod Process 87:145–151
Bailey MJ, Biely P, Poutanen K (1992) Interlaboratory testing of methods for assay of xylanase activity. J Biotechnol 23(3):257–270
Banerjee S, Sen R, Pandey RA, Chakrabarti T, Satpute D, Giri BS, Mudliar S (2009) Evaluation of wet air oxidation as a pretreatment strategy for bioethanol production from rice husk and process optimization. Biomass Bioenergy 33(12):1680–1686
Boonchuay P, Techapun C, Seesuriyachan P, Chaiyaso T (2014) Production of xylooligosaccharides from corncob using a crude thermostable endo-xylanase from Streptomyces thermovulgaris TISTR1948 and prebiotic properties. Food Sci Biotechnol 23(5):1515–1523
Brienzo M, Carvalho W, Milagres AM (2010) Xylooligosaccharides production from alkali-pretreated sugarcane bagasse using xylanases from Thermoascus aurantiacus. Appl Biochem Biotechnol 162(4):1195–1205
Carvalho AFA, Neto PO, da Silva DF, Pastore GM (2013) Xylo-oligosaccharides from lignocellulosic materials: chemical structure, health benefits and production by chemical and enzymatic hydrolysis. Food Res Int 51(1):75–85
Chaikumpollert O, Methacanon P, Suchiva K (2004) Structural elucidation of hemicelluloses from Vetiver grass. Carbohydr Polym 57(2):191–196
Chaiyasut C, Pengkumsri N, Sirilun S, Peerajan S, Khongtan S, Sivamaruthi BSS (2017) Assessment of changes in the content of anthocyanins, phenolic acids, and antioxidant property of Saccharomyces cerevisiae mediated fermented black rice bran. AMB Expr 7:114
Chapla D, Pandit P, Shah A (2012) Production of xylooligosaccharides from corncob xylan by fungal xylanase and their utilization by probiotics. Bioresour Technol 115:215–221
Cheng H, Zhan H, Fu S, Lucia LA (2011) Alkali extraction of hemicellulose from depithed corn stover and effects on soda-AQ pulping. BioResources 6(1):196–206
Ergues I, Sanchez C, Mondragon I, Labidi J (2012) Effect of alkaline and autohydrolysis processes on the purity of obtained hemicellulose from corn stalks. Bioresour Technol 103:239–248
Faix O (1991) Classification of lignin from different botanical origins by FT–IR spectroscopy. Holzforschung 45:21–27
Finegold SM, Li Z, Summanen PH, Downes J, Thames G, Corbett K, Dowd S, Krak M, Heber D (2014) Xylooligosaccharide increases bifidobacteria but not lactobacilli in human gut microbiota. Food Funct 5(3):436–445
Gadde B, Menke C, Wassman R (2009) Rice straw as a renewable energy source in India, Thailand, and the Philippines: overall potential and limitations for energy contribution and greenhouse gas mitigation. Biomass Bioenergy 33:1532–1546
Gao X, Kumar R, Wyman CE (2014) Fast hemicellulose quantification via a simple one-step acid hydrolysis. Biotechnol Bioeng 111(6):1088–1096
Gibson GR, Roberfroid MB (1995) Dietary modulation of the human colonic microbiota: introducing the concept of prebiotics. J Nutr 125(6):1401–1412
Gonçalves AR, Ruzene DS (2001) Bleachability and characterization by Fourier transform infrared principal component analysis of Acetosolv pulps obtained from sugarcane bagasse. Appl Biochem Biotechnol 91–93:63–70
Gowdhaman D, Ponnusami V (2015) Production and optimization of xylooligosaccharides from corncob by Bacillus aerophilus KGJ2 xylanase and its antioxidant potential. Int J Biol Macromol 79:595–600
Gupta S, Madan RN, Bansal MC (1987) Chemical composition of Pinus caribuca hemicellulose. Tappi 70:113–114
Jayapal N, Samanta AK, Kolte AP, Senani S, Sridhar M, Suresh KP, Sampath KT (2013) Value addition to sugarcane bagasse: xylan extraction and its process optimization for xylooligosaccharides production. Ind Crops Prod 42:14–24
Jayapal N, Sondhi N, Jayaram C, Samanta AK, Kolte AP, Senani S (2014) Xylooligosaccharides from green coconut husk. In: Proceedings of global animal nutrition conference on climate resilient livestock feeding systems for global food security, Bangalore, India, 20–22 April 2014
Kacurakova M, Ebringerová A, Hirsch J, Hromadkova Z (1994) Infrared study of arabinoxylans. J Sci Food Agric 66:423–427
Kačuráková M, Wellner N, Ebringerová A, Hromádková Z, Wilson RH, Belton PS (1999) Characterization of xylan-type polysaccharides and associated cell wall components by FT–IR and FT–Raman spectroscopies. Food Hydrocoll 13(1):35–41
Lin S, Chou L, Chien Y, Jung-Su Chang J, Lin C (2016) Prebiotic Effects of xylooligosaccharides on the improvement of microbiota balance in human subjects. Gastroenterol Res Pract 2016(2016):5789232
McIntosh S, Vancov T (2011) Optimization of dilute alkaline pretreatment for enzymatic scarification of wheat straw. Biomass Bioenergy 35(7):3094–3103
Miller GL (1959) Use of dinitrosalicylic acid reagent for determination of reducing sugar. Anal Chem 31(3):426–428
Mumtaz S, Rehman SU, Huma N, Jamil A, Nawaz H (2008) Xylooligosaccharide enriched yoghurt: physicochemical and sensory evaluation. Pak J Nutr 7(4):566–569
Nasir MAM, Saleh SH (2016) Characterization of hemicelluloses from oil palm empty fruit bunches obtained by alkaline extraction and ethanol precipitation. Malaysian J Anal Sci 20(4):849–855
Qing Q, Li H, Kumar R, Wyman CE (2013) Xylooligosaccharides production, quantification, and characterization in context of lignocellulosic biomass pretreatment. In: Wyman CE (ed) Aqueous pretreatment of plant biomass for biological and chemical conversion to fuels and chemicals. Wiley, Chichester
Ruzene DS, Silva DP, Vicente AA, Goncalves AR, Teixeira JA (2008) An alternative application to the Portuguese agro-industrial residue: wheat straw. Biotechnol Appl Biochem 147:85–96
Samanta AK, Jayapal N, Kolte AP, Senani S, Sridhar M, Suresh KP, Sampath KT (2012a) Enzymatic production of xylooligosaccharides from alkali solubilized xylan of natural grass (Sehima nervosum). Bioresour Technol 112:199–205
Samanta AK, Senani S, Kolte AP, Sridhar M, Sampath KT, Jayapal N, Devi A (2012b) Production and in vitro evaluation of xylooligosaccharides generated from corn cobs. Food Bioprod Process 90(3):466–474
Samanta AK, Jayapal N, Kolte AP, Senani S, Sridhar M, Mishra S, Prasad CS, Suresh KP (2013) Application of pigeon pea (Cajanus cajan) stalks as raw material for xylooligosaccharides production. Appl Biochem Biotechnol 169(8):2392–2404
Samanta AK, Jayapal N, Kolte AP, Senani S, Sridhar M, Dhali A, Suresh KP, Jayaram C, Prasad CS (2014) Process for enzymatic production of xylooligosaccharides from the xylan of corn cobs. J Food Process Preserv 39(6):729–736
Samanta AK, Jayapal N, Jayaram C, Roy S, Kolte AP, Senani S, Sridhar M (2015) Xylooligosaccharides as prebiotics from agricultural by-products: production and applications. Bioact Carbohydr Dietary Fibre 5:62–71
Samanta AK, Kolte AP, Elangovan AV, Dhali A, Senani S, Sridhar M, Suresh KP, Jayapal N, Jayaram C, Roy S (2016) Value addition of corn husks through enzymatic production of xylooligosaccharides. Braz Arch Biol Technol 59:e16160078
Siti-Normah MDS, Sabiha-Hanim S, Noraishah A (2012) Effects of pH, temperature, enzyme and substrate concentration on xylooligosaccharides production. Int J Innov Res Sci Eng Technol 12(6):1181–1185
Sluiter A, Hames B, Ruiz R, Scarlata C, Sluiter J, Templeton D, Crocker D (2008) Determination of structural carbohydrates and lignin in biomass. NREL Laboratory Analytical Procedure NREL/TP-510-42618. National Renewable Energy Laboratory, Golden
Ugheoke IB, Mamat O (2012) A critical assessment and new research directions of rice husk silica processing methods and properties. Maejo Int J Sci Technol 6(3):430–448
Vázquez MJ, Alonso JL, Domı́nguez H, Parajó JC (2000) Xylooligosaccharides: manufacture and applications. Trends Food Sci Technol 11(11):387–393
Woraharn S, Lailerd N, Sivamaruthi BS, Wangcharoen W, Peerajan S, Sirisattha S, Chaiyasut C (2015) Development of fermented Hericium erinaceus juice with high content of l-glutamine and l-glutamic acid. Int J Food Sci Technol 50:2104–2112
Woraharn S, Lailerd N, Sivamaruthi BS, Wangcharoen W, Sirisattha S, Peerajan S, Chaiyasut C (2016) Evaluation of factors that influence the l-glutamic and γ-aminobutyric acid production during Hericium erinaceus fermentation by lactic acid bacteria. Cyta-J Food 14(1):47–54
Yılmaz CH, Cekmecelioglu D, Dervisoglu M, Kahyaoglu T (2012) Effect of extraction conditions on hemicellulose yields and optimisation for industrial processes. Int J Food Sci Technol 47(12):2597–2605
Zhou JH, Zhang JY, Li HM, Sun GW, Liang FZ (2013) Extraction of hemicellulose from corn stover by KOH solution pretreatment and its characterization. Adv Mat Res 821–822:1065–1070
CC involved in the designing of the experiments, analysis of data, and support the manuscript writing. BSS involved in data analysis, manuscript preparation, critical review of the results. NK, SS, NL participated in the RSM study, and statistical analysis. SP was responsible for acquiring the raw experimental data and processing. All authors read and approved the final manuscript.
We gratefully acknowledge the Royal Golden Jubilee Ph.D. Scholarship under the Thailand Research Fund, National Research Council of Thailand (NRCT), and Chiang Mai University grant (CMU-grant) for their financial support. We wish to acknowledge Faculty of Pharmacy and Chiang Mai University, Chiang Mai, Thailand for the necessary provision.
All data are fully available within the text.
No animal or human subjects were used in this study.
Royal Golden Jubilee Ph.D. Scholarship under the Thailand Research Fund, National Research Council of Thailand (NRCT), and Chiang Mai University grant (CMU-grant).
Innovation Center for Holistic Health, Nutraceuticals, and Cosmeceuticals, Faculty of Pharmacy, Chiang Mai University, Chiang Mai, 50200, Thailand
Nuntawat Khat-udomkiri
, Bhagavathi Sundaram Sivamaruthi
, Sasithorn Sirilun
, Narissara Lailerd
& Chaiyavat Chaiyasut
Department of Physiology, Faculty of Medicine, Chiang Mai University, Chiang Mai, 50200, Thailand
Narissara Lailerd
Health Innovation Institute, Chiang Mai, 50200, Thailand
Sartjin Peerajan
Search for Nuntawat Khat-udomkiri in:
Search for Bhagavathi Sundaram Sivamaruthi in:
Search for Sasithorn Sirilun in:
Search for Narissara Lailerd in:
Search for Sartjin Peerajan in:
Search for Chaiyavat Chaiyasut in:
Correspondence to Chaiyavat Chaiyasut.
Khat-udomkiri, N., Sivamaruthi, B.S., Sirilun, S. et al. Optimization of alkaline pretreatment and enzymatic hydrolysis for the extraction of xylooligosaccharide from rice husk. AMB Expr 8, 115 (2018) doi:10.1186/s13568-018-0645-9
Alkaline pretreatment
Enzyme hydrolysis | CommonCrawl |
Computer Graphics Tutorial
Computer Graphics Home
Computer Graphics Basics
Line Generation Algorithm
Circle Generation Algorithm
Polygon Filling Algorithm
Viewing & Clipping
2D Transformation
Computer Graphics Curves
Computer Graphics Surfaces
Visible Surface Detection
Computer Graphics Fractals
Computer Graphics - Quick Guide
Computer Graphics - Resources
Computer Graphics - Discussion
In the 2D system, we use only two coordinates X and Y but in 3D, an extra coordinate Z is added. 3D graphics techniques and their application are fundamental to the entertainment, games, and computer-aided design industries. It is a continuing area of research in scientific visualization.
Furthermore, 3D graphics components are now a part of almost every personal computer and, although traditionally intended for graphics-intensive software such as games, they are increasingly being used by other applications.
Parallel Projection
Parallel projection discards z-coordinate and parallel lines from each vertex on the object are extended until they intersect the view plane. In parallel projection, we specify a direction of projection instead of center of projection.
In parallel projection, the distance from the center of projection to project plane is infinite. In this type of projection, we connect the projected vertices by line segments which correspond to connections on the original object.
Parallel projections are less realistic, but they are good for exact measurements. In this type of projections, parallel lines remain parallel and angles are not preserved. Various types of parallel projections are shown in the following hierarchy.
Orthographic Projection
In orthographic projection the direction of projection is normal to the projection of the plane. There are three types of orthographic projections −
Front Projection
Top Projection
Side Projection
Oblique Projection
In oblique projection, the direction of projection is not normal to the projection of plane. In oblique projection, we can view the object better than orthographic projection.
There are two types of oblique projections − Cavalier and Cabinet. The Cavalier projection makes 45° angle with the projection plane. The projection of a line perpendicular to the view plane has the same length as the line itself in Cavalier projection. In a cavalier projection, the foreshortening factors for all three principal directions are equal.
The Cabinet projection makes 63.4° angle with the projection plane. In Cabinet projection, lines perpendicular to the viewing surface are projected at ½ their actual length. Both the projections are shown in the following figure −
Isometric Projections
Orthographic projections that show more than one side of an object are called axonometric orthographic projections. The most common axonometric projection is an isometric projection where the projection plane intersects each coordinate axis in the model coordinate system at an equal distance. In this projection parallelism of lines are preserved but angles are not preserved. The following figure shows isometric projection −
Perspective Projection
In perspective projection, the distance from the center of projection to project plane is finite and the size of the object varies inversely with distance which looks more realistic.
The distance and angles are not preserved and parallel lines do not remain parallel. Instead, they all converge at a single point called center of projection or projection reference point. There are 3 types of perspective projections which are shown in the following chart.
One point perspective projection is simple to draw.
Two point perspective projection gives better impression of depth.
Three point perspective projection is most difficult to draw.
The following figure shows all the three types of perspective projection −
In 3D translation, we transfer the Z coordinate along with the X and Y coordinates. The process for translation in 3D is similar to 2D translation. A translation moves an object into a different position on the screen.
The following figure shows the effect of translation −
A point can be translated in 3D by adding translation coordinate $(t_{x,} t_{y,} t_{z})$ to the original coordinate (X, Y, Z) to get the new coordinate (X', Y', Z').
$T = \begin{bmatrix} 1& 0& 0& 0\\ 0& 1& 0& 0\\ 0& 0& 1& 0\\ t_{x}& t_{y}& t_{z}& 1\\ \end{bmatrix}$
P' = P∙T
$[X′ \:\: Y′ \:\: Z′ \:\: 1] \: = \: [X \:\: Y \:\: Z \:\: 1] \: \begin{bmatrix} 1& 0& 0& 0\\ 0& 1& 0& 0\\ 0& 0& 1& 0\\ t_{x}& t_{y}& t_{z}& 1\\ \end{bmatrix}$
$= [X + t_{x} \:\:\: Y + t_{y} \:\:\: Z + t_{z} \:\:\: 1]$ | CommonCrawl |
YEP XV "Information Diffusion on Random Networks"
« YES X : "Understanding Deep Learning: Generalization, Approximation and Optimization"
Lecture day on the occasion of visit Ravi Kumar »
When available, the slide presentations of the speakers have been added to this website, please see "Abstracts".
The "Information diffusion on random graphs" workshop is the 15th workshop in the 'Young European Probabilists' yearly workshops.
Diffusion processes in networks manifests themselves in many real-life scenarios, such as epidemic spreading, viral marketing and power blackouts. This YEP workshop focuses information diffusion on networks. The phenomenon of information diffusion recently attracted vast attention across a wide range of research fields, including mathematics, physics, computer science, and social sciences. Therefore, this YEP will focus not only on purely probabilistic aspects, but also take an algorithmic and application perspective. The aim of the workshop is to bring together junior and senior researchers from probability and from other fields, and to bridge the corresponding scientific communities.
The workshop will have three mini courses by internationally renowned researchers, giving an opportunity to junior as well as senior attendants to learn about a new topic related to information diffusion. Other than that, the workshop will consist of invited talks by junior and senior researchers.
Remco van der Hofstad TU Eindhoven
Nelly Litvak University of Twente/TU Eindhoven
Clara Stegehuis TU Eindhoven
Tutorial speakers:
Frank Ball University of Nottingham
Mia Deijfen Stockholm University
Renaud Lambiotte University of Oxford
Claudio Castellano Sapienza, Rome
Eric Cator Radboud University Nijmegen
Wei Chen Microsoft Research Asia
Petter Holme Tokyo Institute of Technology
Marton Karsai ENS Lyon
Juliá Komjathy TU Eindhoven
Lasse Leskelä Aalto University
Naoki Masuda University of Bristol
Peter Mörters Cologna University
David Sirl University of Nottingham
Chi Tran Université des Sciences et Technologies de Lille
Daniel Valesin University of Groningen
Rose Yu Northeastern University
Contributed talks/posters
During the conference we have a few slots available for contributed talks by participants.
Caio Alvez (contributed)
In this talk we will discuss our recent work introducing a model of preferential attachment random graphs where the asymptotic ratio between vertices and edges of the graph is governed by a non-increasing regularly varying function f: N-> [0,1], which we call the edge-step function. We prove general results about the associated empirical degree distribution, as well as topological results about the graph's clique number and diameter. Except for the case of the diameter of slowly varying functions, which exhibit a wider range of behavior, our results depend essentially on the index of regularity of f at infinity. We then discuss applications of the above results for the contact process and bootstrap percolation process in these random graphs. Joint work with Rémy Sanchis Rodrigo Ribeiro and Daniel Valesin.
Frank Ball (tutorial)
Epidemics on networks
PRESENTATION: Epidemics on networks 1 Epidemics on networks 2
There has been considerable interest in the past two decades in models for the spread of epidemics on networks. The usual paradigm is that the population is described by an undirected random graph and disease can spread only along the edges of the graph. This mini-course gives an introduction to the analysis of SIR (susceptible-infective-recovered) epidemics on configuration model (and related) networks, which is by far the most studied class of such epidemics. Topics covered include:
branching process approximation for the early stages of an epidemic, which determines whether or not an epidemic with few initial infectives can become established and lead to a major outbreak;
susceptibility sets and the final outcome of a major outbreak;
effective degree analysis of models, which yields a functional central limit theorem (CLT) for the temporal behaviour and a CLT for the final outcome of a major epidemic;
models with superimposed household structure, a key component of human populations which can have a significant impact on disease dynamics;
vaccination schemes, including acquaintance vaccination which targets high-degree individuals.
Wei Chen (invited)
Information and Influence Propagation in Social Networks: Modeling and Influence Maximization
PRESENTATION: Information and Influence Propagation in Social Networks
Information and influence propagation is a fundamental phenomenon in social networks that leads to many applications both for business and for public good, such as viral marketing, social recommendations, rumor control, epidemic prevention, etc. In this talk, I will survey the research area on information/influence diffusion dynamics and the influence maximization problem, which is the problem of selecting a small number of seed nodes in a social network such that their influence spread is maximized. The talk will cover basic stochastic diffusion models, algorithmic techniques for scalable influence maximization, as well as some of my recent research work on influence-based centrality, competitive and complementary influence diffusion, etc.
Emilio Cruciani (contributed)
We investigate the behavior of a simple majority dynamics on networks of agents whose interaction topology exhibits a community structure. By leveraging recent advancements in the analysis of dynamics, we prove that, when the states of the nodes are randomly initialized, the system rapidly and stably converges to a configuration in which the communities maintain internal consensus on different states. This is the first analytical result on the behavior of dynamics for non-consensus problems on non-complete topologies, based on the first symmetry-breaking analysis in such setting.
Our result has several implications in different contexts in which dynamics are adopted for computational and biological modeling purposes. In the context of Label Propagation Algorithms, a class of widely used heuristics for community detection, it represents the first theoretical result on the behavior of a distributed label propagation algorithm with quasi-linear message complexity. In the context of evolutionary biology, dynamics such as the Moran process have been used to model the spread of mutations in genetic populations (Lieberman, Hauert, and Nowak 2005); our result shows that, when the probability of adoption of a given mutation by a node of the evolutionary graph depends super-linearly on the frequency of the mutation in the neighborhood of the node and the underlying evolutionary graph exhibits a community structure, there is a non-negligible probability for species differentiation to occur.
Mia Deijfen (tutorial)
Competing growth on lattices and graphs
PRESENTATION: Competition -References
Competing first passage percolation describes the growth of two competing infections on an underlying graph structure. It was first studied on the Z^d-lattice. The main question is if the infection types can grow to occupy infinite parts of the lattice simultaneously, the conjecture being that the answer is yes if and only if the infections grow with the same intensity. Recently, the model has been analyzed on more heterogeneous graph structures, where the degrees of the vertices can have an arbitrary distribution. In this case, it turns out that also the degree distribution plays a role in determining the outcome of the competition. I will give a survey of existing results, both on Z^d and on heterogeneous graphs, and describe open problems. I will also describe related competition models such as the multitype contact process and models driven by moving particles.
Peter Gracar (contributed)
Spread of infection by random walks - Multi-scale percolation along a Lipschitz surface
A conductance graph on $\mathbb{Z}^d$ is a nearest-neighbor graph where all of the edges have positive weights assigned to them. We first consider a point process of particles on the nearest neighbour graph $(\mathbb{Z}^d,E)$ and show some known results about the spread of infection between particles performing continuous time simple random walks. Next, we extend consider the case of uniformly elliptic random graphs on $\mathbb{Z}^d$ and show that the infection spreads with positive speed also in this more general case. We show this by developing a general multi-scale percolation argument using a two-sided Lipschitz surface that can also be used to answer other questions of this nature. Joint work with Alexandre Stauffer.
Petter Holme (invited)
Temporal networks of human interaction
The power of any kind of network approach lies in the ability to simplify a complex system so that one can better understand its function as a whole. Sometimes it is beneficial, however, to include more information than in a simple graph of only nodes and links. Adding information about times of interactions—modeling your system as temporal networks—can make predictions and mechanistic understanding more accurate. Just as there can be network structures affecting disease spreading, temporal structures can also govern the spreading dynamics. We will discuss recent developments in the analysis of temporal networks, including community detection, the definition of time scales, random walks and various forms of spreading processes. We argue that adding time to network representations fundamentally changes our usual network concepts—so much that it is perhaps meaningless to think of temporal networks as an extension of the network paradigm.
Juliá Komjathy (invited)
How to stop explosion by penalising transmission to hubs
In this talk we study the spread of information in infinite inhomogeneous spatial random graphs.
To model the spread of information in social networks, we take a spatial random graph that is scale free, that is, the degree of a vertex follows a power law with exponent tau in (2,3). One common approach to model the spread information is then to equip each edge with a random and iid transmission cost L, and study the cost of the least-cost past between vertices. In these graphs, it was observed earlier than it is possible to reach infinitely many vertices within finite cost, as long as the cumulative distribution function of L is not doubly-exponentially flat close to 0. This phenomenon is called explosion, and it seems off from reality for cases where individual contact is necessary, e.g., spreading of viruses, etc.
We introduce a penalty to transmit the information to hubs, and increase the cost of transmission through an edge with expected degrees W and Z by a factor that is a power of the product WZ.
We find a threshold behaviour between explosion, depending on how steep the cumulative distribution function of L increases at 0: it should be at least polynomially steep, where the exponent depends on both the power-law exponent tau and the penalty-exponent.
This behaviour is arguably a better representation of information spreading processes in social networks than the case without penalizing factor.
Renaud Lambiotte (tutorial)
PRESENTATION: Random walks on networks 1 Random walks on networks 2
Diffusion and Communities in Networks
The presence of communities, or clusters, in networks is well known to affect diffusive processes. Conversely, tracking the trajectories of random walkers on the graph can be used to uncover communities hidden in large graphs. During this tutorial, I will review the relations between the two sides of the problem, and present in detail community detection methods based on first-order and higher-order Markov models, as well as methods allowing to uncover non-assortative communities in networks.
Lasse Leskelä (invited)
Statistical graph models induced by overlapping communities of variable sizes and strengths
Information transmission in today's society is more and more realized through overlapping communities of various sizes and strengths. This talk discusses a statistical network model where a pair of nodes sharing a community are linked with probability determined by the community strength. The model is parametrized by a limiting empirical joint distribution of community sizes and strengths, allowing to capture the property that large communities often provide weaker pairwise links. A natural property of the model is that high variability of community sizes causes the degree distribution to have heavy tails. The main focus of this talk is to discuss the effect of size-strength correlations on graph parameters relevant to information diffusion, especially the transitivity spectrum. Based on joint work with Mindaugas Bloznelis (Vilnius U).
Naoki Masuda (invited)
Epidemic processes on dynamically switching networks: Effects of commutator and concurrency
Epidemic processes on temporally varying networks are complicated by complexity of both network structure and temporal dimensions. We analyse the susceptible-infected-susceptible (SIS) epidemic model on regularly switching networks, where each contact network is used for a finite fixed duration before switching to another. First, we analyse the epidemic threshold under a deterministic approximation called the individual-based approximation. We show that, under this approximation, temporality of networks lessens the epidemic threshold such that infections persist more easily in temporal networks than in their static counterparts. We further show that the commutator bracket of the adjacency matrices at different times is empirically a useful predictor of the impact of temporal networks on the epidemic threshold. The second topic is the effects of concurrency (i.e., the number of neighbours that a node has at a given time point) on the epidemic threshold in the stochastic SIS dynamics. For a particular switching network model, we show that network dynamics can suppress epidemics (i.e., yield a higher epidemic threshold) when nodes' concurrency is low (where stochasticity effects are stronger) and can enhance epidemics when the concurrency is high.
Peter Mörters (invited)
Metastability of the contact process on evolving scale-free networks
We study the contact process in the regime of small infection rates on scale-free networks evolving by stationary dynamics. A parameter allows us to interpolate between slow (static) and fast (mean-field) network dynamics. For two paradigmatic classes of networks we investigate transitions between phases of fast and slow extinction and in the latter case we analyse the density of infected vertices in the metastable state. The talk is based on joint work with Emmanuel Jacob (ENS Lyon) and Amitai Linker (Universidad de Chile).
Gergely Odor (contributed)
In sensor based source localization we attempt to detect the source of an epidemic process spreading in a graph, given the time of infection of the sensor nodes. We are interested in the minimal number of sensors we need to select for perfect detection when the epidemic is deterministic (i.e. each sensor reports its distance from the source), and the graph is drawn from the Erdos-Renyi distribution. When the sensors are selected before any of the observations are made, this problem reduces to the Metric Dimension problem, which has already been analysed for Erdos-Renyi graphs. In this talk, we consider a modified version of the problem, when the sensors are selected sequentially, adaptively to previous observations. We present tight bounds for the reduction in the number of required sensors compared to the non-adaptive version of the problem.
Guilherme Reis (contributed)
Interacting diffusions on random graphs
We consider systems of diffusion processes whose interactions are described by a graph. For example, traditional mean-field interacting diffusions correspond to a complete interaction graph. In recent years some effort has been directed to understanding more general interactions. When the interaction graph is random, in the particular case of the Erd\H{o}s-R\'{e}nyi random graph, we show how the behavior of this particle system changes whether the mean degree of the Erd\"{o}s-R\'{e}nyi graph diverges to infinity or converges to a constant. When the mean degree converges to a constant we exploit a locality property of this system. Loosely speaking, the locality property states that information does not propagate too fast over the graph for this kind of particle system.
Markus Schepers (contributed)
The local clustering coefficient in hyperbolic random graphs
The local clustering coefficient is a quantity which has been studied for its influence on diffusive processes on a graph. For a given vertex of the graph it measures the extent to which its neighbourhood resembles a complete graph. Hyperbolic random graphs are given by a collection of points distributed uniformly in a hyperbolic disk with edges between nearby vertices. This model was invented by Krioukov et al. and has been suggested as a suitable model for real-world networks such as the Internet.
In this project we study the local clustering coefficient averaged over all vertices and averaged over all vertices of degree k in the hyperbolic random graph in the probabilistic limit (convergence in probability) as the number of vertices n tends to infinity.
We consider both the case of a fixed degree k, as well as a sequence of degrees (kn) tending to
infinity. We derive exact analytic limiting expressions as well as the asymptotic scaling
(including the multiplicative constant).
(joint work with: Nikolaos Fountoulakis, Pim van der Hoorn, Tobias Müller)
David Sirl (invited)
A network epidemic model with preventive rewiring
Network epidemic models have developed enormously in the last 20 years or so in response to some of the unrealistic assumptions of homogeneity in most simple epidemic models. A significant feature of most epidemic-on-a-network models is that the epidemic evolves on a static network.
We consider an SIR (Susceptible - Infectious - Removed) epidemic spreading on a configuration-model network (a random network with specified degree distribution), with the addition of some simple network dynamics. The addition is to allow susceptible individuals to "drop" connections to infectious neighbours. A further extension permits such susceptible individuals to then "rewire" to connect instead with someone else in the population.
For the model with dropping only (i.e. with no rewiring), we present some limit theorems (in the limit of large population size) for the temporal evolution of the model and for the final size of the epidemic (the number of initial susceptibles that are ultimately recovered). For the model with rewiring included too, we show that whilst the preventive behaviour of rewiring is always rational at the individual level, it may have negative consequences at the population level.
This work is joint with Frank Ball (Nottingham), Tom Britton (Stockholm) and KaYin Leung (Stockholm).
Réka Szabo (contributed)
We consider an inhomogeneous percolation model on an oriented regular tree, where besides the usual bonds, additional bonds of a certain length are also present. Percolation is defined on this graph, by letting these additional edges be open with probability q and every other edge with probability p. We give an improved lower bound for the critical curve which delimits the set of pairs (p, q) for which there is almost surely no infinite cluster. Furthermore, we show that the cluster of the root has the same distribution as the family tree of a certain multi-type branching process, which allows us to state some limit theorems. Joint work with D. Valesin and B. N. B. de Lima.
Sam Thomas (contributed)
We study the behaviour of random walk on dynamical percolation. In this model, the edges of a graph are either open or closed and refresh their status at rate μ, while at the same time a random walker moves on G at rate 1, but only along edges which are open. In this talk I present recent results proving cutoff in the case when G is the complete graph and the bond percolation parameter is of order 1/n, ie we consider a random walk on dynamical Erdos-Renyi graph. We do this via an explicit coupling argument. Joint work with Perla Sousi
Chi Tran (invited)
User-driven exploration of social networks with application in epidemiology
To understand the spread of certain diseases such as HIV or HCV, the modelling of social networks (sexual partners or people who inject drug together) is important. In the case of HCV, the network is hidden since drug use is illegal. We have designed in Paris a 'Respondent-driven' study to discover the social network of people who inject drugs (PWIDs). The underlying idea is to have the graph explored by random (branching) walks: each interviewee receives a certain number of coupons that she/he distributes to her/his injection partners. After having described the general case, we focus on what happens for the family of Stochastic Block Model graphs. Which proportion of the graph can we discover and what can be said on the topologies that are found ?
Viktoria Vadon (contributed)
Percolation on the Random Intersection Graph with Communities
The Random Intersection Graph with Communities (RIGC) models a network based on individuals and communities they are part of, with two key features: each community has its arbitrary internal structure described by a small graph, and communities are allowed to overlap. It generalizes the classical Random Intersection Graph (RIG) model, and is constructed based on a Bipartite Configuration Model. We study percolation, i.e., independent removal of edges, as a simple model for a randomized information spread: we view the connected component of a vertex as the cluster this vertex is able to broadcast information to. We show that percolation on the RIGC, in particular, percolation on the classical RIG, is (again) an RIGC with different parameters, and prove that percolation on the RIGC exhibits a phase transition, in terms of whether a linear-sized component persists. We may touch on robustness, and why robustness of edge and vertex percolation behave differently.
Daniel Valesin (invited)
The asymmetric multitype contact process
We study a class of interacting particle systems known as the multitype contact process on Z^d. In this model, sites of Z^d can be either empty or occupied by an individual of one of two species. Individuals die with rate one and send descendants to neighboring sites with a rate that depends on their (the parent's) type. Births are not allowed at sites that are already occupied. We assume that one of the types has a birth rate that is larger than that of the other type, and larger than the critical value of the standard contact process. We prove that, if initially present, the stronger type has a positive probability of never going extinct. Conditionally on this event, it takes over a ball of radius growing linearly in time. We also completely characterize the set of stationary distributions of the process and prove a complete convergence theorem. Joint work with Pedro L. B. Pantoja and Thomas Mountford.
Rose Yu (invited)
Learning Graph Diffusion with Deep Neural Networks
Diffusion processes on graphs have complex dynamics. Due to their complexity, learning graph diffusion often relies on strong assumptions or is computationally expensive. Deep neural networks provide flexible models for modeling complex data. While existing deep neural networks have shown to be highly effective in, for example, computer vision and natural language processing, off-the-shelf deep models have limited utility in modeling graph-structured data.
In this talk, I will showcase how to design deep neural networks to learn the dynamics of graph diffusion. In particular, I will discuss (1) Diffusion Convolution Recurrent Neural Networks (DCRNN): a neural sequence model for spatiotemporal forecasting and (2) DAG to DAG Recursive Neural Networks (D2DRNN): a message passing neural network for DAG to DAG translation. I will also demonstrate successful applications of these models to real-world traffic prediction and Boolean expression simplification tasks.
Xiangying (Zoe) (contributed)
The Contact Process on Random Graphs and Galton-Watson Trees
The key to our investigation is an improved (and in a sense sharp) understanding of the survival time of the contact process on star graphs. Using these results, we show that for the contact process on Galton-Watson trees, when the offspring distribution (i) is subexponential the critical value for local survival $\lambda_2=0$ and (ii) when it is geometric($p$) we have $\lambda_2 \le C_p$, where the $C_p$ are much smaller than previous estimates. We also study the critical value $\lambda_c(n)$ for ``prolonged persistence'' on graphs with $n$ vertices generated by the configuration model. In the case of power law and stretched exponential distributions where it is known $\lambda_c(n) \to 0$ we give estimates on the rate of convergence. Physicists tell us that $\lambda_c(n) \sim 1/\Lambda(n)$ where $\Lambda(n)$ is the maximum eigenvalue of the adjacency matrix. Our results show that this is not correct.
Dong Yao (contributed)
The symbiotic contact process
We consider a contact process on $\ZZ^d$ with two species that interact in a symbiotic manner. Each site can either be vacant or host individuals of species A and/or B. Multiple occupancy by the same species at a single site is prohibited. Symbiosis is represented by a reduced death rate $\mu \in [0,1) $. If only one specie is present at a site then that particle dies with rate 1 but if both species are present then the death rate is reduced to $\mu$ for the two particles at that site. We prove that the critical infection rate $\lambda_c(\mu)$ for weak survival is of order $\sqrt{\mu}$, which coincides with the mean field calculation. We also investigate the nature of the phase transition. We show that in dimension $d=1$ the survival of the system is through oriented percolation. We also show that, for all dimensions, the phase transition is continuous and $\lambda_c(\mu)$ is 1 (regardless of the value of $\mu$), if we let particles move around with a rate going to infinity. The talk is based on ongoing work with Rick Durrett.
Xiu-Xiu Zhan (contributed)
Information Diffusion Backbones in Temporal Networks
Information diffusion on a temporal network can be modeled by viral spreading processes such as the Susceptible-Infected (SI) spreading process. An infected node meaning that the node possesses the information could spread the information to a Susceptible node with a given spreading probability β whenever a contact happens between the two nodes. Progress has been made in the understanding of how temporal network features and the choice of the source node affect the prevalence, i.e. the percentage of nodes reached by the information. In this work, we explore further: which node pairs are likely to contribute to the actual diffusion of information, i.e. appear in a diffusion trajectory? How is this related to the local temporal connection features of the node pair? Such deep understanding of the role of node pairs is crucial to explain and control the prevalence of information spread. First, we propose the construction of an information diffusion backbone G_B (β) for an SI spreading process with an infection probability β on a temporal network. The backbone is a weighted network where the weight of each node pair indicates how likely the node pair contributes to a diffusion process starting from an arbitrary node. Second, we investigate the relation between the backbones with different infection probabilities on a temporal network. We find that the backbone topologies obtained for low and high infection probabilities approach the backbone G_B (β→0) and G_B (β=1), respectively. The backbone G_B (β→0) equals the integrated weighted network, where the weight of a node pair counts the total number of contacts in between, a local temporal connection feature. Finally, we discover a local connection feature among many other features that could well predict which node pairs are likely to appear in G_B (β=1), whose computation complexity is high. This local feature encodes the time that each contact occurs, pointing out the importance of temporal features in determining the role of node pairs in a dynamic process beyond the features of the integrated network.
Link to the online registration form: REGISTRATION
Information on travel, location etc. : INFORMATION
Eurandom
Eindhoven, Netherlands + Google Map
https://www.tue.nl/en/university/departments/mathematics-and-computer-science/ | CommonCrawl |
Was Aristotle really wrong about gravity?
When I was in 9th grade, I learned that Aristotle was responsible for holding back physics for centuries because he said that heavier objects fall faster than lighter objects. Finally, in the 16th century Galileo disproved this theory by dropping two balls of different masses from the Leaning Tower of Pisa showing that they both fell at the same speed.
And when I took physics in 12th grade, I learned that Newton's Law of Gravitation explains the results of Galileo's experiment, showing that the acceleration of an object near the earth's surface is always the same $g=GM/R^2=9.80 m/s^2$, where $G$ is the gravitational constant, $R$ is the distance of the object to the center of the earth, and $M$ is the mass of the earth.
This all seemed to conclusively disprove Aristotle's theory that heavier objects fall faster than lighter objects. However, this line of argument neglects to consider Newton's Third Law, which implies that the falling object forces the earth to move at acceleration proportional to the mass of the falling object. And this will cause the distance between the falling object and the center of the earth to decrease faster for heavier falling objects, implying that the heavier objects do in fact fall faster than lighter objects.
So my question is why was I taught that Aristotle was completely wrong when his prediction seems to be totally in agreement with Newtonian mechanics?
Added: If you don't believe me, just check out the differential equations obtained from Newton's Law of Gravitation:
$MR''=GmM/|R-r|^2$ and $mr''=-GmM/|r-R|^2$,
where $m$ is the mass of the object, $M$ is the mass of the earth, $R$ is the position of the earth, $r$ is the position of the object. Making it simpler, we get:
$R''=Gm/|R-r|^2$ and $r''=-GM/|r-R|^2$.
When $m$ is large, $R''$ is large, implying that $|R'-r'|$ becomes large faster than when $m$ is small, implying that the object will eventually move faster towards the earth when $m$ is large than when $m$ is small.
See a similar question here: https://physics.stackexchange.com/questions/3534/dont-heavier-objects-actually-fall-faster-because-they-exert-their-own-gravity
physics ancient-greece gravity aristotle
Rodrigo de Azevedo
Craig FeinsteinCraig Feinstein
$\begingroup$ First of all, it is wrong to "read" ancient theory "with insight" ... Aristotle (and ancient "natural philosophers") never formulated a precise law (with some mathematical formalism available) expressing the relation between speed and time, nor made "predictions" regarding the behaviour of bodies in motion. This is the main reason why it is quite useless trying to compare those theories with "modern" (i.e.based on mathematical formulation of relations capable of numerical predictions) ones. Note : the "Leaning Tower" experiment has (quite certainly) never been performed by Galileo. $\endgroup$
$\begingroup$ For sure A did not think at gravity in the same way as Newton did ! :) In my opinio, speaking of "right" or "wrong" in historical context is not very useful. Since Galileo, we "live" in a world were physical science is "made of" mathematical laws and theory capable of numerical prediction and it is hard to think in a non-math way. Galileo uses the same math of Arstotle's time : Eudoxus/Euclide's theory of proportion and he was able to find and test the correct law for free falling bodies. 1/2 $\endgroup$
$\begingroup$ @CraigFeinstein You've got everything completely mixed up. Newton's laws are easily used to demonstrate that gravitational acceleration and gravitational acceleration only (unless one considers noninertial coordinates) is independent of mass: $F_g=\frac{GMm}{r^2}=ma\implies \frac{GM}{r^2}=a$, i.e. the acceleration of the object only depends on the other object (with mass $M$) which it is attracted by in Newtonian mechanics. It is obvious that Aristotle was wrong. In this light, it is clear that your question rests on a misconception about the laws of physics. $\endgroup$
– Danu ♦
$\begingroup$ @CraigFeinstein Well, yes, but that's generally ignored when dealing with Newtonian physics problems. $\endgroup$
– HDE 226868 ♦
$\begingroup$ A very similar question can be found here on Physics. Your question seems to be along these lines, not historical ones. And yes, it's true, but the reason you're not taught it - which appears to be what you're asking, which would also make it off-topic - is that it's generally unnecessary in most calculations. $\endgroup$
I'll try with some calculations : please, check it and the formulae used ...
A solid ball with a mass $m$ of $1$ kg falls (with the usual approxiamtions : no drag, etc.) with an acceleration $a$ that is about $10 \ m/sec^2$.
This means that falling from a tower $80$ meters heigh, it will touch ground after $4$ sec, with a final velocity of about $40 \ m/sec$.
You are right : in the same time, the Earth will "fall towards" the ball, attracted by the same gravitational force.
The mass $M$ of the Earth is about $6 \times 10^{24} \ kg$.
The acceleration $A$ of the Earth that is proportional to $a$ as the reciprocal of the masses; i.e. :
$A = a \times m/M = 10/(6 \times 10 ^{24}) \approx 2 \times 10 ^{-24} \ m/sec^2$.
After $4$ seconds, the Earth reachs a velocity of fall of $8 \times 10 ^{-24} \ m/sec$ and it has traversed a space $s \approx 16 \times 10 ^{-24} \ m$.
This means that, due to the reciprocal attraction, the two bodies will touch each other in slightly less than $4 \ sec$ and that the "real" space traversed by the falling ball with respect to the Earth is about :
$80$ meters minus the space traversed by the "falling" Earth during the short time of the fall.
It can be useful to recall that :
the size of atoms is measured in picometers : trillionths ($10 ^{-12}$) of a meter.
If the ball has a mass $m'$ of $1000 \ kg$, the force with which it "pulls" the Earth will be $1000$ times greater, producing an acceleration $A' \approx 2 \times 10 ^{-21} \ m/sec^2$.
This implies that in this second case the space traversed by the Earth duing the fall of the heavier ball will be $s' \approx 16 \times 10 ^{-21} \ m$.
According to Aristotle (Physiscs, Book VII) the "law" of dynamics is :
"if a power $\alpha$ moves a body $\beta$ during time $\delta$ for a distance $\gamma$, then an equal power $\alpha$ will move a body half of $\beta$ along a distance twice as $\gamma$ in the same time".
In an anachronistic way, we can say :
$F \propto V$.
Thus, if we apply this law to "our model" of free fall, with the weight of the body as the force, we have that - assuming that after an initial short time of acceleration the falling body will reach a constant "terminal velocity" - in the second case the acquired speed $v'$must be $1000$ times the first one : $v$.
This means that after $4$ seconds the heavier ball has traversed a space : $s'= v' \times t$, i.e. $s' = 1000 \times v \times t = 1000 \times s$, where $s$ is the space traversed by the lighter body after a fall of $4$ seconds.
As you can see the "same factor" : $1000$ acts in a completely different way in the two models.
Due to the huge mass of the Earth, for bodies of "normal" size (meaning body of our daily life experience) it has no experimentally verifiable effect on the behaviour of different bodies falling due to the gravitational force.
In the Aristotelian model, that factor has an evident experimentally verifiable effect on the behaviour of different bodies falling due to the "tendency towards the centre".
This is exactly where is the "conceptual" difference : to perform some sort of experimental test.
Thus, the answer to :
is a definitive YES if we try to answer the question from the point of view of modern science, a point of view that is not that of aristotelian natural philosophy.
If instead we want to compare two different qualitative "worldviews", things are different (see at least the philosophical debate involving : Thomas Kuhn, The Incommensurability of Scientific Theories, Scientific Revolutions, Historicist Theories of Scientific Rationality, Imre Lakatos and Paul Feyerabend).
i) In our computations we made approximations; approximation is a modern concept.
Without precise mathematical laws there are no possible approximations.
Aristotle's "natural laws" are not approximations in the modern sense; they are qualitative description of facts, like the one made by the same Aristotle on botany and zoology (whih were impressively accurate, by the way).
ii) Writing the "aristotelian equation" we make an "historical mistake" : he never thinked in terms of mathematical formuale.
Please, note that the mathematics used by Galileo in his analysis of the free falling body problem was only the theory of proportions of Eudoxus/Euclid, that was already available in Aristotle's time.
Thus, the "tools" available to ancient natural philosophers were more or less the same compared to the ones available to Galileo (not so with Newton ...).
Postscriptum
The above "experiment" was alredy discussed by Galileo, in advance of the correct formulation of the law of universal gravitation. See :
Galileo Galilei, Dialogues concerning two new sciences (1638 - Engl tr Henry Crew & Alfonso de Salvio - Dover ed), pag.64 :
SALV. We infer therefore that large and small bodies move with the same speed provided they are of the same specific gravity.
SIMP. Your discussion is really admirable; yet I do not find it easy to believe that a bird-shot falls as swiftly as a cannon ball.
SALV. Why not say a grain of sand as rapidly as a grindstone? But, Simplicio, I trust you will not follow the example of many others who divert the discussion from its main intent and fasten upon some statement of mine which lacks a hair's-breadth of the truth and, under this hair, hide the fault of another which is as big as a ship's cable. Aristotle says that "an iron ball of one hundred pounds falling from a height of one hundred cubits reaches the ground before a one-pound ball has fallen a single cubit." I say that they arrive at the same time. You find, on making the experiment, that the larger outstrips the smaller by two finger-breadths, that is, when the larger has reached the ground, the other is short of it by two finger-breadths; now you would not hide behind these two fingers the ninety-nine cubits of Aristotle, nor would you mention my small error and at the same time pass over in silence his very large one. Aristotle declares that bodies of different weights, in the same medium, travel (in so far as their motion depends upon gravity) with speeds which are proportional to their weights; this he illustrates by use of bodies in which it is possible to perceive the pure and unadulterated effect of gravity, eliminating other considerations, for example, figure as being of small importance, influences which are greatly dependent upon the medium which modifies the single effect of gravity alone.Thus we observe that gold, the densest of all substances, when beaten out into a very thin leaf, goes floating through the air; the same thing happens with stone when ground into a very fine powder.
But if you wish to maintain the general proposition you will have to show that the same ratio of speeds is preserved in the case of all heavy bodies, and that a stone of twenty pounds moves ten times as rapidly as one of two; but I claim that this is false and that, if they fall from a height of fifty or a hundred cubits, they will reach the earth at the same moment.
Mauro ALLEGRANZAMauro ALLEGRANZA
$\begingroup$ Wow, not much movement of the earth. $\endgroup$
– Craig Feinstein
$\begingroup$ From your quote of Aristotel it seems he was talking not about gravity, but other kinds of forces. And, in fact, his law works for dragging a body with constant friction. $\endgroup$
– Anixx
In short, you were taught that Aristotle was wrong because he was wrong. He didn't make a prediction, he made an observation about rock and feather, and then sloppily generalized it to all objects without a second thought. The subtle effects you are describing weren't even noticable in his time, but that a feather falls slower because it is much more affected by air resistance, would have been obvious to sailors, or anyone who dealt with winds, even then. Already in antiquity John Philoponus pointed out that if one corrects for that the sole basis for Aristotle's conclusion disappears: "But this is completely erroneous, and our view may be completely corroborated by actual observation more effectively than by any sort of verbal argument. For if you let fall from the same height two weights, one many times heavier than the other you will see that the ratio of the times required for the motion does not depend [solely] on the weights, but that the difference in time is very small."
But you were taught that Aristotle was "holding back science" not because he was just wrong about falling bodies. As Philoponus pointed out, it was merely an illustration of a general attitude, unfortunately adopted by many after him, that facts about nature can be reasoned out of their heads with spotty and misconstrued observations, if any at all. To be fair, Aristotle's contribution wasn't all negative, he gave first systematic descriptions in what now became established natural sciences, and tried to organize and structure what was known about the world in his time. But his method of inquiry was wrong headed, and it took a lot of time and effort to overcome it.
$\begingroup$ I wonder if what really held back science wasn't Aristotle per se but sloppy translations. Is it possible that 'heavy' and 'dense' were at one point more or less interchangeable in casual conversation, either in Aristotle's day or later? $\endgroup$
– TLDR
No. Aristotle was not necessarily wrong. This is in substance Carlo Rovelli's view in Aristotle's Physics: a Physicist's Look. As the abstracts announces it
Aristotelian physics is a correct and non-intuitive approximation of Newtonian physics in the suitable domain (motion in fluids), in the same technical sense in which Newton theory is an approximation of Einstein's theory.
If one agrees that falling occurs in a fluid, then it is not different from 'sinking'. Heavier bodies sink faster. (Buoyancy which can play a crucial role is also due to gravity).
sand1sand1
$\begingroup$ This is a really nice article! $\endgroup$
– Arnold Neumaier
Yes, Aristotle was wrong about gravity. But I think it is unfair to say "that Aristotle was responsible for holding back physics for centuries". The ones who held back physics for centuries were the late-antique and mediaeval (Christian, Muslim and Jewish) so-called philosophers who transformed Aristotelianism into an ossified dogmatic doctrine. Aristotle himself was always willing to change his mind and to consider alternative explanations.
fdbfdb
That Aristotle (and you with your example) are wrong is proved by the following simple argument: imagine two bricks of equal mass. Each of them falls with certain acceleration. Now glue them together and let them fall. According to Aristotle two bricks will fall faster than each brick separately. It is evident that this is absurd: what difference does it make whether the bricks are glued together or not?
I suppose this argument is due to Galileo, but I am not 100% sure.
Actually Aristotle's "physics" says that two bricks will fall twice faster than one brick.
$\begingroup$ This argument is due to Galileo, but unfortunately it is circular. Assuming that gluing makes no difference already presupposes that Aristotle is wrong. philsci-archive.pitt.edu/2524/1/… To Aristotle glued bricks become a single item, and behave as such with its "natural place" and weight that determines the rate of approaching that place. Two unglued bricks are two separate items. $\endgroup$
$\begingroup$ It seems intuitive to us today that force field acts the same way on both, but that's because we absorbed Newtonian forces and fields. To Aristotle there is no reason why the "natural pull" should not distinguish between objects of different nature. Even without "natural motions" it is conceivable that reaction forces introduced between the bricks by gluing can alter how they move. There really is no way to reason it out without observations :) $\endgroup$
$\begingroup$ I read once (and I don't remember where any more) that this argument actually predates Galileo by perhaps 100-200 years. But nothing was really made of it at the time. What Galileo did was provide mathematical models for simple physical situations. That was the real breakthrough. It's one of the cornerstones of the "scientific revolution". $\endgroup$
– Carl Offner
$\begingroup$ @Conifold: Alexandre's (Galilei's) argument is not circular. We do not assume that gluing makes no differencem, but we prove it by gluing only in our imagination. $\endgroup$
Aristotle concluded in his law of motion that the speed of an object depends on the viscosity of the medium it is in. In keeping with this line of thinking, since a perfect vacuum has zero viscosity, the speed of a falling object should approach infinity, as viscosity approaches zero. Galileo in his incline plane experiment identified the role of gravity, explaining it to be Aristotle's attractive force pulling toward the " natural place". In addition, Galileo's Leaning Tower of Pisa experiment addressed the question: do falling masses of different sizes fall at the same speed? He concluded that they will fall at the same speed. Newton corrected Galileo's conclusion with his own gravitational theory which states that the the force of gravity exert the same acceleration on objects regardless of size; an idea that has gained acceptance over the years. Just as Aristotle law of gravitation, ignoring the role of mathematics available to him at the time, was was the first approximation of the role of gravity on falling objects, Galileo, utilizing the same math that was available to Aristotle, came up with a second approximation of the role of gravity. Newton with more advance mathematics came up with his law of gravitation, a third approximation to the law of gravitational motion; and of course, Einstein made a radical departure from all existing theories of the time with his upgrade.
Rudolph DouglasRudolph Douglas
Actually Carlo Rovelli considers the situation ignoring any viscosity/friction effects and points out that a body still falls slower in a denser medium.
Consider a mass of specific gravity 2 falling through water. The net gravitation force is mg/2 but the mass is still m, hence the acceleration is halved.
Of course the specific gravity of air is 0.0013 so that effect is negligible in air. Aristotle thought that in a zero dense medium the mass would fall infinitely fast and he was obviously wrong.
"if a thing moves through the thickest medium such and such distance in such and such time, it moves through the void with a speed beyond any ratio". (Physics 215b)
People who think Aristotle's treatment was purely qualitative should try reading him. Look at Physics 215b and 216a in particular. There's plenty of calculation there. It seems to me that most of his problems arise from not understanding the concept of zero - hardly surprising since it wasn't introduced into maths for another thousand years (by the Indians). Maybe his horror of the void comes down to the same issue.
watergeuswatergeus
Physicists now tend to think in terms of effective theories; that is a theory which is accurate in a certain energy regime but fails in another. Thus, for example we get super-gravity as a low energy effective (ie approximate) theory of string theory.
It's not quite fair to compare Newton and Aristotle after all a span of two millenia separates them. This is a hefty amount of time by any measure; and the earlier work should be evaluated in its own terms, in the context of its time, as well as it's influence.
Julian Barbour in his History of Dynamics clearly points out that Aristotles critical approach to the problem of change, space, time & motion was a clear precursor to later thinking about motion; and I think, if I recall rightly, that he said that Newtons laws were in some sense, in an embryonic form in his works.
It was the later infatuation with Aristotle beginning in the Renaissance after Averroes rediscovery of Aristotles works and his commentaries on them; and also the close association of the Church with Aristotle and then then the rise of anti-clericalism in Europe after the bourgeois revolutions that our modern dislike of Aristotle stems from; the popular literature is littered with disdain for his works; this is not an accurate view of his importance; and nor, to be honest of the Greek work on dynamics of which Aristotle is the most prominent representative.
It's not that Aristotle himself who held back scientific thought; after all, he himself was not backward in critiquing Plato, Parmenides or Democritus when he felt their ideas hadn't sufficient justification. He was not over-awed by them, perhaps due to the closeness in time to these giant figures, after all, he himself was a student of Plato.
Had the early to late medivals understood this aspect of his critical activity and took it upon themselves then perhaps scientific thought might have been recovered more rapidly; instead, they found themselves over-awed by their achievements and on the whole, could do no more than tinker with them, and it took the time to the early moderns before general scientific culture in Europe had risen to a sufficient height and had become fluid and vigorous that they could begin to develop where the Ancient world had stopped.
After Einstein, we see gravity as the curvature of spacetime. This means that spacetime, far from being the passive theatre in which events happen, also has dynamical properties. Did anyone before Einstein forsee this possibility? Well, Clifford did when he stated that this curvature was all that there was in terms of physical change in the world.
Whilst Aristotle didn't forsee this, he was capable of asking the question: "does place itself have a place"? That question, if pushed further, suggests that space is dynamical. After all, a stone has a place, the place where we put it - here or there. To ask whether place has a place shows just how careful scientific questioning then was in questioning verities we would normally simply take for granted.
What was Aristotles answer to the question raised? He wasn't able to come to a definitive answer. He simply said it was a difficult question. And in truth it is. It took the genius of Einstein to show that place actually did have a place. Or rather, spacetime, and which amounts to the same thing here
Mozibur UllahMozibur Ullah
You are right that a bigger object will reach the ground in a miscroscopically smaller time if dropped independently. Note however that if dropped at the same time, this will still not be the case, as the earth will be attracted to the force of gravity created by both of those objects.
rickmcn1986rickmcn1986
$\begingroup$ Welcome to HSM, rickmcn1986. Because of the nature of this particular Stack Exchange site, we want references with all our answers. In addition, it seems that this is in the same vein as the other answers. Can you spruce this up in some way? Thanks. $\endgroup$
Actually, if we are talking about Earth, there is another sense in which Aristotle was (accidentally) right, Earth has an atmosphere, and an atmosphere exerts a drag force on falling objects. It is not difficult to calculate terminal velocity and if we assume drag is proportional to velocity (R = kv), at terminal velocity acceleration and hence resultant force is zero, mg - kv = 0, kv=mg, v=mg/k so if the objects are roughly the same size and shape so k is constant, terminal velocity is proportional to mass! Interesting.
Roger WalterRoger Walter
Aristotle was considering the max speed of an object being pulled through a resistant media - eg: an ox pulling a cart, or a feather falling through air.
It is obvious to any idiot that two 1 lb weights falling side by side hit the ground at the same time as 1 2lb weight formed by gluing the two together.
It is far less obvious how long it will take an identical shape and size weighing twice as much while falling through a resistant media. It will certainly be faster, but since resistance is NOT directly proportionate to speed, it will not be twice as fast.
Still, give Aristotle some credit, he was NOT taking about objects falling in a vacuum.
emtpgeekemtpgeek
$\begingroup$ Please consider specific historical evidence when answering. "Obvious to any idiot" is generally not a useful basis for putting forward a response. On the other hand, "obvious in view of idea x given in book y" might be a more useful contribution (provided, of course, that book y was e.g. known to the considered author in particular, or generally known in the relevant community and time). $\endgroup$
– terry-s
Not the answer you're looking for? Browse other questions tagged physics ancient-greece gravity aristotle or ask your own question.
How did people believe Aristotle's law of gravity for so long?
Did Newton develop the concept of gravity first for falling objects or for celestial motion?
Why did Aristotle make mistakes in his laws of motion?
When was the twin paradox first formulated?
How did artillery and physics co-evolve during 1400-1700?
About Newton's apple | CommonCrawl |
Application of hyperbolic geometry in link prediction of multiplex networks
Zeynab Samei1 &
Mahdi Jalili2
Scientific Reports volume 9, Article number: 12604 (2019) Cite this article
Recently multilayer networks are introduced to model real systems. In these models the individuals make connection in multiple layers. Transportation networks, biological systems and social networks are some examples of multilayer networks. There are various link prediction algorithms for single-layer networks and some of them have been recently extended to multilayer networks. In this manuscript, we propose a new link prediction algorithm for multiplex networks using two novel similarity metrics based on the hyperbolic distance of node pairs. We use the proposed methods to predict spurious and missing links in multiplex networks. Missing links are those links that may appear in the future evolution of the network, while spurious links are the existing connections that are unlikely to appear if the network is evolving normally. One may interpret spurious links as abnormal links in the network. We apply the proposed algorithm on real-world multiplex networks and the numerical simulations reveal its superiority than the state-of-the-art algorithms.
Many real biological, social and technological systems are modeled as networks in which nodes and links represent entities and different kinds of connections respectively. Network analysis in complex systems such as biology, ecology, computer science and sociology has become very important and applicable1. One of the major topics in network science is to predict missing, forthcoming and spurious links2. Many different link prediction algorithms have been introduced that use structure information of networks. Most of them are classified in similarity-based prediction methods, which work under the assumption that the probability of existing a link between two nodes is depended to their similarity3.
There are other types of link prediction methods such as Hierarchical Structure Model and Stochastic Block Model which are based on maximum likelihood analysis2,4,5. Recently the study of hyperbolic geometry based on the network structure has become useful in solving the link prediction problem. Considering the hyperbolic geometry of networks, HyperMap method was proposed by Papadopoulos et al.6. This method first map target networks into hyperbolic space, and then predict the missing links using the hyperbolic coordinates of node pairs7.
Recent studies8,9,10 have shown that many real network systems are modeled better in multiple layers to show different kinds of interactions between individuals11,12. Multiplex networks are a special kind of multilayer networks in which the number of nodes in all layers is the same. Some studies have shown that the structural features of different layers in multiplex networks are indeed correlated to each other1,13. So it can be supposed that considering the interlayer information can enhance the performance of link prediction in each layer of a multiplex network. In this paper, we investigate the node similarity index based on hyperbolic geometry and the layer relevance of the multiplex networks for predicting the spurious and missing links. In this method, we improve the performance of the link prediction based on hyperbolic distance considering both the popularity and similarity of nodes by combining the similarity indices in multiplex networks.
Using the interlayer information in solving the missing link prediction in multiplex networks has been considered before in a number of works. Pujari et al.14 used a decision tree classifier to predict the interaction of the coauthorship network in a multiplex collaboration network with three layers. Hristova et al.15 used a supervised classifier for link prediction in a two layer network containing Foursquare and Twitter using the interlayer information. In another work, Sharma et al.16 proposed a new method considering weight for each layer of multiplex network and used it to solve the link prediction problem in the target layer. Yao et al.17 proposed a novel method based on interlayer and intralayer information to solve the missing link prediction problem in multiplex networks. Hajibagheri et al.18 proposed a holistic method considering the information of all layers simultaneously in link prediction of a target layer in multiplex networks. Also, Guimerà et al. used Stochastic Block Models to predict missing and spurious links in noisy networks. Zeng et al.19 studied the impact of spurious link identification methods on distortion of networks' structure and dynamics. In another study, Zhang et al.20 measured the inter-similarity using the local diffusion processes in bipartite networks. Samei et al.21 proposed a method to identify spurious links in multiplex networks. In fact, they proposed a method to employ interlayer information to improve the performance of spurious link prediction in the target layer.
In the context of hyperbolic geometry of network, Krioukov et al.22 introduced the mapping of networks to hyperbolic space. They used the underlying hyperbolic geometry of network to study the functionality and structure of complex networks. They showed that the strong clustering and the heterogeneous degree distribution are natural reflections of the negative curvature and other properties of the hyperbolic geometry of complex networks. Then, Papadopoulos et al.23 studied the impact of popularity and similarity in networks' growth. They developed a framework to suggest that new connections can be made between node pairs with an optimized trade-off between popularity and similarity. In another work, Papadopoulos et al.24 presented the HyperMap method to map a network to its underlying hyperbolic space and used the hyperbolic distance as a similarity measure to solve the link prediction problem. Different from these works, other methods were introduced to infer hidden geometry of complex networks6,25. Recently, Muscoloni et al.26,27 introduced a nonuniform popularity-similarity optimization model (N-PSO). This model was used to predict the missing links using the community structure of the networks in N-PSO that improved the performance of the link predictors significantly. Muscoloni et al. also proposed an intelligent machine to infer the network hyperbolic geometry based on an "angular coalescence" phenomenon28. A minimum curvilinear automata has been recently proposed to embed hyperbolic geometry of networks and used it for link prediction29.
In this paper, our proposed similarity indices based on hyperbolic geometry of network benefit both intralayer and interlayer information to solve the spurious and missing link prediction in multiplex networks. Based on that the experimental results on four single layer synthetic networks and six real multiplex networks show that the performance has been improved when the hyperbolic-based methods is used and the node pairs similarity measures are computed considering both interlayer and interlayer information.
Consider G = (G1, G2, …., GM) as a multiplex network with N nodes in each of M layers, where Gα = (Vα, Eα)represents the network of layer α with V as the set of nodes and E as the set of links11,12,30. We can assume \({A}^{[\alpha ]}=\{{a}_{ij}^{[\alpha ]}\}\) as the adjacency matrix of each layer Gα, where for 1 ≤ α ≤ M and 1 ≤ i, j ≤ |Vα|31:
$${a}_{ij}^{[\alpha ]}=\{\begin{array}{cc}1\, & if({v}_{i}^{[\alpha ]},\,{v}_{j}^{[\alpha ]})\in {E}^{\alpha }\\ 0 & otherwise\end{array}$$
In the context of unsupervised link prediction, many similarity measures are defined to find the likelihood of link existence between each node pair (i, j). In multiplex networks the similarity score in layer α is shown by \({s}_{ij}^{\alpha }\). After computing the similarity scores for all potential node pairs in each layer, a ranking method can be used to choose the top ranked pairs which have more chance to make a connection. The key issue is how to calculate the similarity scores based on the known topology of networks. Recent studies have shown that the structure of the layers in multiplex networks are mostly dependent32,33. Hence, one of the main challenges in solving the link prediction problem in multiplex networks is to find an appropriate similarity measure that can benefit the relevant information of all layers30. Based on this, here we use both the information of the target layer (intralayer information) and other layers (interlayer information) and combine them based on layer relevance to improve the performance of link prediction compared with the single-layer based methods.
In the case of missing link prediction, the goal is to estimate the probability of existence of non-observed links based on the current topology of network and available node's features in network G(V, E). Since the missing links are not known, we assume that a fraction of observed links E, is missing and the goal of link prediction is to identify them. In order to do that, in each iteration a fraction of the observed links E, is removed based on k-fold decomposition method and the proposed methods are supposed to predict them. In the case of identifying spurious links, the task is to evaluate whether the observed links are reliable enough based on the current topology of the network. In order to do that, in each iteration, some nonexistent links are randomly added to the link set and the proposed methods are supposed to identify them34. Precision is used here to quantify the accuracy of a link prediction method which is defined as:
$$Precision=\frac{|TP|}{|TP|+|FP|}$$
where |TP| is the number of positive predictions that are truly predicted and |FP| is the number of positive predictions that are wrongly predicted.
Node similarity index
Description of the similarity indices is given in the following.
Existing measures
Preferential Attachment (PA): This index is based on the node degrees and for each node pair i and j is defined as:
$${s}_{ij}^{PA}=\Vert {{\rm{\Gamma }}}_{i}\Vert \times \Vert {{\rm{\Gamma }}}_{j}\Vert $$
where ||Γi|| indicates the number of neighbors of node i.
Common Neighbors (CN): For each node pair i and j, this index counts the number of neighbors that are common between them and is defined based on the assumption that node pairs with more common neighbors are more likely to make connection. It is defined as:
$${s}_{ij}^{CN}=\Vert {{\rm{\Gamma }}}_{i}\cap {{\rm{\Gamma }}}_{j}\Vert $$
CAR: This measure considers both common neighbors of each node pair and the number of connections between the common neighbors, and is computed as below:
$${{s}}_{{ij}}^{{CAR}}={{s}}_{{ij}}^{{CN}}\cdot {{s}}_{{ij}}^{{LCL}}$$
where \({s}_{ij}^{CN}\) is the number of common neighbors between (i, j) and \({s}_{ij}^{LCL}\) is the number of links between nodes in the common neighbors set35.
CJC: This measure is a modified version of Jacard measure and is defined as below:
$${{s}}_{{ij}}^{{CJC}}=\frac{{{s}}_{{ij}}^{{CAR}}}{\parallel {{\Gamma }}_{{i}}\cup {{\Gamma }}_{{j}}\parallel \,}$$
where \({s}_{ij}^{CAR}\) is the similarity measure CAR defined above and \(\Vert {{\rm{\Gamma }}}_{i}\cup {{\rm{\Gamma }}}_{j}\Vert \) is total number of neighbors of nodes i and j35.
Hyperbolic distance (HP): This measure computes the hyperbolic distance (Eq. (8)) of each node pair i and j based on Hypermap method. HyperMap is based on Maximum Likelihood Estimation. It finds the radial and angular coordinates ri, θi for all nodes i ≤ N, which maximizes the likelihood:
$${L}=\prod _{1\le i < j\le N}{p}{({x}_{ij})}^{{\alpha }_{ij}}{[1-p({x}_{ij})]}^{1-{\alpha }_{ij}}$$
where the product is computed over all node pairs i, j and xij is defined as the hyperbolic distance between pair i, j:
$$\begin{array}{rcl}{x}_{ij} & = & {\rm{arccosh}}(\cosh \,{r}_{i}\,\cosh \,{r}_{j}-\,\sinh \,{r}_{i}\,\sinh \,{r}_{j}\,\cos \,{\rm{\Delta }}{\theta }_{ij})\\ & & \approx {r}_{i}+{r}_{j}+2ln\,\sin ({\rm{\Delta }}{\theta }_{ij}/2)\\ & & \approx {{r}}_{{\boldsymbol{i}}}+{{r}}_{{\boldsymbol{j}}}+2ln\,({\rm{\Delta }}{\theta }_{{ij}}/2)\,\end{array}$$
$${\rm{\Delta }}{\theta }_{ij}=\pi -|\pi -|{\theta }_{i}-{\theta }_{j}||$$
and p(xij) is the Fermi-Dirac connection probability:
$${p}({{x}}_{{ij}})=\frac{1}{1+{{e}}^{\frac{1}{2{T}}({{x}}_{{ij}}-{R})}}$$
where R ~ lnN. The estimated radial coordinate of node i is based on its degree in the network (ki) via ri ~ lnN − lnki. Therefore, if node degrees are correlated in different layers so will be the radial coordinates36.
Proposed measures
Node degree or popularity plays an important role in defining the similarity measures and many of them are based on common neighbors and preferential attachment. The underlying principle behind preferential attachment is that new connections are mainly made to more popular nodes. However, Papadopoulos et al.23 showed that popularity is just one aspect of attractiveness, while similarity could be considered as another aspect. They developed a framework where new connections consider a trade-off between popularity and similarity.
We know that the degree distribution of many real networks follow power-law distribution. However, as it can be seen in As real multilayer networks, we consider six networks (see Table 1). The multilayer networks are converted to multiplex networks by assuming that all layers have the same number of nodes (the maximum number of nodes of all layers). Explanation of these networks is as follow:
Table 1 The topological features of six real multiplex networks. In the table, i is the number of layers, N is the number of nodes and E is the number of edges in each layer.
Table 1, the degree distribution of the multiplex networks with small size do not follow power-law distribution. The previous experimental results indicated that HP's performance was better in the networks with power-law distribution and less good in those that does not obey power-law degree distribution7. The reason for that would be the way the nodes' radial coordinates are calculated. Because one of the parameters which is considered in HyperMap method to estimate the radial coordinates is the power-law exponent of the network. Therefore, if the network does not have a power-law degree distribution, the link-prediction accuracy of HP decreases. In order to overcome this shortcoming and benefit the advantages of both popularity and similarity features of nodes, we proposed two approaches that are detailed in the following.
Weighted Common neighbors (WCN): We generate a weighted version of CN that computes the weight of common neighbors considering the hyperbolic distance of them with the target node pairs. There are some studies about converting the original similarity measure to the weighted one, however it has been shown that that such conversion may reduce the prediction performance37. The pseudo-code of the proposed method is as follows:
Approximate the hyperbolic coordinates of each node.
Compute the matrix H of hyperbolic distance of the existing links in the network.
h = average of H
Γij = list of common neighbors of node pair (i, j) in the test list of missing or spurious link prediction.
for each k ∈ Γij
if H(i, j) < h
node pair (i, k) is a strong tie and has more weight
WCN(i, j) = WCN(i, j) + 1 + 1/H(i, k);
node pair (i, k) is a weak tie and takes the weight as CN
WCN(i, j) = WCN(i, j) + 1;
Repeat step 5 for node pair (k, j)
Sort all links in the test list in decreasing (for missing link prediction) or increasing (for spurious link prediction) order
Ranking CN and HP (CN-HP)
This method benefits the advantages of both CN and HP measures. It uses a ranking method to combine the prediction given by both of these measures. In order to do that, one of the well-known classical rank aggregation methods, Borda's method is used38. It is based on absolute positioning of the ranked elements rather than their relative rankings. A Borda score for each element is calculated based on the ranking of it in the aggregated list. For a set of full list L = [L1, L2, L3, …., Ln], the Borda's score for element x and list Lk is given by:
$${B}_{{L}_{i}(x)}=\{count(y)|{L}_{i}(y) < {L}_{i}(x){\rm{\& }}y\in {L}_{i}\}$$
and the total Borda's score of element x is:
$$B(x)=\mathop{\sum }\limits_{i=1}^{n}{B}_{{L}_{i}(x)}$$
The advantage of Borda ranking method is that we can aggregate different kinds of measures with different categories and values and obtain a rank-based score. Also the computational complexity of this method is linear; however it does not satisfy the Condorcet criterion. In the proposed method, two lists of CN and HP scores for any node pair are constructed, and the final score for each node pair is computed by aggregating their ranking score using Borda method, i.e. in the case of missing/spurious link prediction the aggregated scores of CN and HP of all node pairs are computed based on Eq. (12) and are sorted descending/ascending. The top-k elements of the final list are the predicted links (k is the number of expected missing/spurious links).
Impact of layer relevance
In the case of HP as a similarity measure, the procedure of mapping each layer to its hyperbolic space can be done in different directions. One direction would be to jointly embed the different layers of a given multiplex and infer single radial and angular coordinates for each node. A second direction would be to aggregate the different layers using different operations such as those proposed in39, and then embed the aggregated network to infer single coordinates for nodes. Finally, a third direction would be to infer the node coordinates in each layer independently as considered here.
As it was mentioned above, we map each layer of each real multiplex to its hyperbolic space using the HyperMap method6,24. The method takes the network adjacency matrix and the network parameters T, γ. It then approximates the angular and radial coordinates of all nodes in the network. Parameter γ is the power law degree distribution exponent which is approximated separately for layers using the method introduced by Clauset et al.40, and T is the temperature. To estimate the values of T, the Nonuniform Popularity × Similarity Optimization N-PSO model is used27. The N-PSO model grows synthetic complex networks and it is equivalent to the hyperbolic H2 model. The inputs to this model are the final network size N, the average node degree k, power-law coefficient γ and the network parameters T. The N-PSO model is used to construct synthetic networks with the same size N and average degree k and power-law exponent γ, using different values for T. The estimated values of T are then the values that best match the degree distribution and average clustering between the layer and the corresponding synthetic network.
In order to test whether this measure can be a good one for the link prediction, we classify the hyperbolic distance of all node pairs, and compute the probability of the existence of a link between the pairs in each bin. To this end, first the hyperbolic distances of all node pairs are sorted in ascending order and divided to k bins. Bin bi contains the node pairs with the hyperbolic distance in the range of [di, di+1]. Then, the probability pi of having a link between the node pairs of each bin is computed based on the network topology. The results are shown in Fig. 1. As it is shown, the probability of existing a link between each node pairs decreases, while their hyperbolic distance increases. Two nodes have a smaller hyperbolic distance as much as they are popular or similar to each other, in this case the probability of existing a connection between them increases. Thus, this measure can be a candidate for the similarity score for the link prediction problem. It is worth noting that the behavior of different layers are almost the same in all multiplex networks, with being more similar in the bigger networks including Rattus and SacchPomb.
Probability of existing a link between node pairs based on their hyperbolic distance in different layers of multiplex networks denoted by L1, L2 and L3.
In order to employ the interlayer information, for node pair (i, j) in target layer α, we first calculate its similarity within each layer based on the proposed methods above. This enables us to compare prediction performance of the algorithms. In order to compare the prediction performance of the proposed prediction framework, we exploit different algorithms for quantifying the relevance between layers including link overlap, Pearson correlation, Spearman correlation and hyperbolic angular correlation12. The results show that the link overlap has the best effect on the link prediction performance. It is defined in the following.
Link Overlap (LO): This measure identifies the ratio of common links in two layers, i.e. if α and β are two layers in a multiplex network, LO is the fraction of the same node pair that connects in both layers α and β and is defined as:
$${O}^{\alpha ,\beta }=\frac{{2\sum }_{i=1}^{N}{\sum }_{j > i\,}^{N}{A}_{ij}^{[\alpha ]}.{A}_{ij}^{[\beta ]}}{{\sum }_{i=1}^{N}{\sum }_{j > i\,}^{N}{A}_{ij}^{[\alpha ]}+{\sum }_{i=1}^{N}{\sum }_{j > i\,}^{N}{A}_{ij}^{[\beta ]}}$$
Where A[α] is the adjacency matrix of layer α that takes value of 0 for each disconnected node pair and 1 for each connected node pair, and N is the number of nodes. The value of Oα,β is in the range of [0, 1], where 0 indicates that the layers are completely irrelevant and 1 indicates that the layers are quite relevant. The similarity measure is defined as:
$$\forall i,j\in V:{S}_{ij}={s}_{ij}^{\alpha }+\mathop{\sum }\limits_{\beta =1}^{M}\,\eta {\mu }^{\alpha \beta }{s}_{ij}^{\beta }(\alpha \ne \beta )$$
where \({s}_{ij}^{\alpha }\) is the similarity index of target layer α and \({s}_{ij}^{\beta }\) is the similarity index of any other layer β. μαβ represents the correlation between layers α and β (link overlap), which can be explained as the weight of interlayer information involved from any layer β in link prediction in layer α and η is the tunable parameter. The correlations between different layers are shown in the Fig. 2. As it is shown, for all networks the link overlap correlation between different layers is positive. Furthermore, the highest layer relevance belongs to the Vicker network and the lower relevance belongs to larger and sparser networks. Our experiments show that LO is mostly consistent with other correlation metrics, but it has the most positive effect in the extent the interlayer information can improve the link prediction performance.
Link Overlap of different layers of six real multiplex networks.
We perform experiments on four single-layer synthetic networks to evaluate the similarity measures and six real multilayer networks to investigate the impact of interlayer information. The synthetic networks are evolved based on N-PSO model described above and their structural features and the precision of spurious and missing link prediction methods are presented in Figs 3 and 4. In the N-PSO model, the true node coordinates are generated for the networks. In the case of missing link prediction, we remove a fraction of edges using k-fold decomposition in each iteration and regenerate the node coordinates of the new network using the Hypermap method. Similarly, in the case of spurious link prediction, we add a fraction of nonexistent links to the network in each iteration and regenerate the node coordinates of the new network using the Hypermap method. There is no restriction in selecting the parameters for N-PSO model. It is preferred to generate networks with features that are near to real networks (large and sparse with power-law degree distribution) and temperature is chosen to be 0.3 and 0.6. Since the two parameters λ and T of Hypermap are set manually, so the approximation of hyperbolic coordinates is more accurate in synthetic networks.
The missing link prediction performance of the synthetic networks based on N-PSO model, with (a) N = 500, m = 4, λ = 3, T = 0.3, (b) N = 500, m = 4, λ = 3, T = 0.6, (c) N = 1000, m = 4, λ = 3, T = 0.3, (d) N = 1000, m = 4, λ = 3, T = 0.6. Different similarity measures are used, including Preferential Attachment (PA), Common Neighbors (CN), Hyperbolic Distance (HP), CAR, CJC, Weighted Common Neighbors (WCN) and Rank-CN-HP. The results show the mean values over 20 independent experiments.
The spurious link prediction performance of the synthetic networks based on N-PSO model, with (a) N = 500, m = 4, λ = 3, T = 0.3, (b) N = 500, m = 4, λ = 3, T = 0.6, (c) N = 1000, m = 4, λ = 3, T = 0.3, (d) N = 1000, m = 4, λ = 3, T = 0.6. Different similarity measures are used, including Preferential Attachment (PA), Common Neighbors (CN), Hyperbolic Distance (HP), CAR, CJC, Weighted Common Neighbors (WCN) and Rank-CN-HP. The results show the mean values over 20 independent experiments.
Based on these reasons as it can be seen, in all cases the performance of hyperbolic distance (HP) is better than the other measures and the precision of the proposed methods (Weighted Common Neighbors (WCN) and Rank-HP-CN) is the highest in most cases. Thus, hyperbolic distance and its derived methods can be good choices as similarity measures for link prediction.
As real multilayer networks, we consider six networks (see Table 1). The multilayer networks are converted to multiplex networks by assuming that all layers have the same number of nodes (the maximum number of nodes of all layers). Explanation of these networks is as follow:
Vicker41: It is a 3 layer multiplex network with 29 nodes representing the students of a school in Australia. The layers are defined as the contact relationship, co-working and best friends.
Lazega42,43: This multilayer network represents the partnership of corporate law between associates and partners. The layers correspond to co-working, friendship and advice relationship.
CKM44: This multilayer network represents the interactions between physicians. It contains 3 layers that correspond to friendship, discussion and asking for advice.
CElegans45,46: It is a biological multilayer network in which nodes represent neurons and layers correspond to chemical monadic, chemical polyadic and electric interactions.
Rattus47,48: It is a multiplex genetic and protein interactions network of the Rattus Norvegicus. It contains two main layers of physical association and direct interaction.
SacchPomb47,48: It is a multiplex genetic and protein interactions network of the Saccharomyces Pombe. It includes three kinds of relationships, including direct interaction, colocalization and physical association.
The experimental results of the proposed link prediction methods on six real networks is presented in this section. For each multiplex network, the layer with the most density is chosen as the target layer. In the case of missing link prediction, 15% of links in the target layer are considered to be hidden and based on k-fold decomposition method, the performance of the similarity measures are examined over 20 independent experiments. For spurious link prediction, random links are added to the network and the performance of the similarity measures are examined over 20 independent experiments. In order to evaluate the impact of employing the layer relevance and the extra information of other layers, we separately study the performance of the algorithms on single-layer (when only information of the target layer is considered) and multiplex (when inter-layer information is also considered) fashions. We employ the layer correlation based on link overlap and compute the similarity measures based on Eq. (14).
Figure 5 shows the precision of missing link prediction of different similarity measures. For each measure there are two bars. The left bar shows the performance of the similarity measure while considering only the intralayer information of the target layer and the right bar is the performance of the similarity measure while using both intralayer and interlayer information. As it can be seen, in all cases incorporating the interlayer information improves performance of the missing link prediction and this is more pronounced in CElegans. The proposed similarity measures Rank-CN-HP has the best performance in most cases.
The missing link prediction performance of the multiplex networks based on different similarity measures. The results are based on the mean values over 20 independent experiments. 'Single-Layer Information" corresponds to the case when only intralayer information of the target layer is considered. "Multiplex Information" corresponds to the case when both intralayer information of the target layer and interlayer information of other layers are considered.
Figure 6 shows the performance of the algorithms on spurious link prediction. For this problem, we also consider the cases when only intralayer information of the target layer is considered and when both intralayer and interlayer information are considered. As it can be seen, in most cases, including the interlayer information in the prediction process improves the performance. Furthermore, predictions based on PA similarity measures have the worst performance in most cases. In contrast to the missing link prediction, in this case Rank-CN-HP is not better than CN or HP in some of the networks, but WCN has the best performance in all multiplex networks. Our experiments show that in small networks, in the case of HP, the approximated radial coordinates of nodes in hyperbolic space for both true positive (correctly predicted) links and false negative links are almost in the same range. But the average degree of true positive links is significantly higher than the false negatives. It means that the radial coordinates of nodes which corresponds to their popularity are not precisely approximated, since the degree distribution of the target layers do not obey the power-law. HP mostly represents the similarity of node pairs, and thus it is not expected in most cases to have high performance. Therefore, combining this similarity measure with CN in different ways help to overcome the shortcoming of HP in covering the popularity attribute of each node. On the other hand, in large networks and especially in those with scale-free degree distribution, approximating the underlying hyperbolic geometry is more precise, but these networks are mostly sparse and similarity measures such as CN, CAR and CJC may not be quite successful in link prediction. Thus, in such cases combining the popularity-based measures with HP can improve the link prediction. The Rank-CN-HP and WCN methods both use CN as the popularity factor, and HP as the similarity factor. The difference is that in Rank-CN-HP the proposed similarity measure uses CN and HP independently, i.e. these two measures are first computed independently for each node pair, and then ranked based on Borda rank aggregating algorithm to achieve the final score that considers both CN and HP with the same weight. Whereas in the WCN method, we compute HP-distance for the common neighbors of each node pair and compare them with a threshold. If the HP-distance is less than the threshold, that node pair is assumed to have a strong tie, i.e. they are more similar to each other, and thus a fraction of HP-distance is added to the weight of that common neighbor; otherwise it is computed as the original CN. Therefore, in the WCN method the two similarity measures are dependent to each other.
The spurious link prediction performance of the multiplex networks based on different similarity measures. The results are based on the mean values over 20 independent experiments. 'Single-Layer Information" corresponds to the case when only intralayer information of the target layer is considered. "Multiplex Information" corresponds to the case when both intralayer information of the target layer and interlayer information of other layers are considered.
In this work, two novel methods based on the hyperbolic geometry of the multiplex networks are proposed to discover spurious and missing links in multiplex networks. The hyperbolic underlying of complex networks considers two parameters of popularity and similarity of nodes that both play important role in link prediction problem. Since the common local similarity measures mostly consider only the node degree (popularity), we suggest to enhance their predictability by adding the similarity feature to them. As we can see, in the case of missing link prediction specifically in social networks, each node is more likely to connect to nodes with similar features (his friends) as well as popular nodes (influencers). Another hypothesis is that interlayer relevance can be helpful in link prediction. Based on this hypothesis, recently a new method was proposed that considered the existing similarity measures in both target layer and other layers and combined the similarity measures via a correlation metric (Link Overlap) and obtained a multiplex-based similarity measure for spurious link prediction21. Based on this research, new measures are proposed based on the hyperbolic geometry of the network. First, a number of existing similarity measures which are widely used for the link prediction are chosen and then new measures are proposed to solve the spurious and missing link prediction problem. Our experimental results on four synthetic networks and six real-world multiplex networks shows that the new proposed measures outperform in all cases and also incorporating the interlayer information can improve the prediction performance compared with the case that only intralayer information is considered.
Jalili, M., Orouskhani, Y., Asgari, M., Alipourfard, N. & Perc, M. Link prediction in multiplex online social networks. Royal Society open science 4, 160863 (2017).
Guimerà, R. & Sales-Pardo, M. Missing and spurious interactions and the reconstruction of complex networks. Proceedings of the National Academy of Sciences 106, 22073–22078 (2009).
Lin, D. An information-theoretic definition of similarity. In Icml. 296–304 (1998).
Celisse, A., Daudin, J.-J. & Pierre, L. Consistency of maximum-likelihood and variational estimators in the stochastic block model. Electronic Journal of Statistics 6, 1847–1899 (2012).
Clauset, A., Moore, C. & Newman, M. E. Hierarchical structure and the prediction of missing links in networks. Nature 453, 98 (2008).
Papadopoulos, F., Aldecoa, R. & Krioukov, D. Network geometry inference using common neighbors. Physical Review E 92, 022807 (2015).
Wang, Z., Wu, Y., Li, Q., Jin, F. & Xiong, W. Link prediction based on hyperbolic mapping with community structure for complex networks. Physica A: Statistical Mechanics and its Applications 450, 609–623 (2016).
Cardillo, A. G ómez-Gardenes. J., Zanin, M., Romance, M., Papo, D., del Pozo, F., & Boccaletti, S (2013).
Nicosia, V., Bianconi, G., Latora, V. & Barthelemy, M. Growing multiplex networks. Physical review letters 111, 058701 (2013).
Szell, M., Lambiotte, R. & Thurner, S. Multirelational organization of large-scale social networks in an online world. Proceedings of the National Academy of Sciences 107, 13636–13641 (2010).
Kivelä, M. et al. Multilayer networks. Journal of complex networks 2, 203–271 (2014).
Boccaletti, S. et al. The structure and dynamics of multilayer networks. Physics Reports 544, 1–122 (2014).
Lee, K.-M., Min, B. & Goh, K.-I. Towards real-world complexity: an introduction to multiplex networks. The European Physical Journal B 88, 48 (2015).
Pujari, M. & Kanawati, R. Link prediction in multiplex networks. NHM 10, 17–35 (2015).
Hristova, D., Noulas, A., Brown, C., Musolesi, M. & Mascolo, C. A multilayer approach to multiplexity and link prediction in online geo-social networks. EPJ Data Science 5, 24 (2016).
Sharma, S. & Singh, A. An efficient method for link prediction in complex multiplex networks. In 11th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS). IEEE, 453–459 (2015).
Yao, Y. et al. Link prediction via layer relevance of multiplex networks. International Journal of Modern Physics C 28, 1750101 (2017).
Hajibagheri, A., Sukthankar, G. & Lakkaraju, K. A holistic approach for link prediction in multiplex networks. In International Conference on Social Informatics. Springer, 55–70 (2016).
Zeng, A. & Cimini, G. Removing spurious interactions in complex networks. Physical Review E 85, 036101 (2012).
Zhang, P., Zeng, A. & Fan, Y. Identifying missing and spurious connections via the bi-directional diffusion on bipartite networks. Physics Letters A 378, 2350–2354 (2014).
Samei, Z. & Jalili, M. Discovering spurious links in multiplex networks based on interlayer relevance. Journal of Complex Networks (2019).
Krioukov, D., Papadopoulos, F., Kitsak, M., Vahdat, A. & Boguná, M. Hyperbolic geometry of complex networks. Physical Review E 82, 036106 (2010).
Papadopoulos, F., Kitsak, M., Serrano, M. Á., Boguná, M. & Krioukov, D. Popularity versus similarity in growing networks. Nature 489, 537 (2012).
Papadopoulos, F., Psomas, C. & Krioukov, D. Network mapping by replaying hyperbolic growth. IEEE/ACM Transactions on Networking (TON) 23, 198–211 (2015).
Alanis-Lobato, G., Mier, P. & Andrade-Navarro, M. A. Manifold learning and maximum likelihood estimation for hyperbolic network embedding. Applied Network Science 1, 10 (2016).
Muscoloni, A. & Cannistraci, C. V. Leveraging the nonuniform PSO network model as a benchmark for performance evaluation in community detection and link prediction. New Journal of Physics (2018).
Muscoloni, A. & Cannistraci, C. V. A nonuniform popularity-similarity optimization (nPSO) model to efficiently generate realistic complex networks with communities. New Journal of Physics 20, 052002 (2018).
Muscoloni, A., Thomas, J. M., Ciucci, S., Bianconi, G. & Cannistraci, C. V. Machine learning meets complex networks via coalescent embedding in the hyperbolic space. Nature communications 8, 1615 (2017).
Muscoloni, A. & Cannistraci, C. V. Minimum curvilinear automata with similarity attachment for network embedding and link prediction in the hyperbolic space. arXiv preprint arXiv:1802.01183 (2018).
Bianconi, G. Statistical mechanics of multiplex networks: Entropy and overlap. Physical Review E 87, 062806 (2013).
Battiston, F., Nicosia, V. & Latora, V. Structural measures for multiplex networks. Physical Review E 89, 032804 (2014).
Gemmetto, V. & Garlaschelli, D. Multiplexity versus correlation: the role of local constraints in real multiplexes. Scientific reports 5, 9120 (2015).
Lee, K.-M., Kim, J. Y., Cho, W.-K., Goh, K.-I. & Kim, I. Correlated multiplexity and connectivity of multiplex random networks. New Journal of Physics 14, 033027 (2012).
Pan, L., Zhou, T., Lü, L. & Hu, C.-K. Predicting missing links and identifying spurious links via likelihood analysis. Scientific reports 6, 22955 (2016).
Daminelli, S., Thomas, J. M., Durán, C. & Cannistraci, C. V. Common neighbours and the local-community-paradigm for topological link prediction in bipartite networks. New Journal of Physics 17, 113037 (2015).
Kleineberg, K.-K., Boguná, M., Serrano, M. Á. & Papadopoulos, F. Hidden geometric correlations in real multiplex networks. Nature Physics 12, 1076 (2016).
Lü, L. & Zhou, T. Link prediction in weighted networks: The role of weak ties. EPL (Europhysics Letters) 89, 18001 (2010).
de Borda, J. C. Mémoire sur les élections au scrutin (1781).
Taylor, D., Shai, S., Stanley, N. & Mucha, P. J. Enhanced detectability of community structure in multilayer networks through layer aggregation. Physical review letters 116, 228301 (2016).
Clauset, A., Shalizi, C. R. & Newman, M. E. Power-law distributions in empirical data. SIAM review 51, 661–703 (2009).
Vickers, M. & Chan, S. Representing classroom social structure. Victoria Institute of Secondary Education, Melbourne (1981).
Lazega, E. The collegial phenomenon: The social mechanisms of cooperation among peers in a corporate law partnership. (Oxford University Press on Demand, 2001).
Snijders, T. A., Pattison, P. E., Robins, G. L. & Handcock, M. S. New specifications for exponential random graph models. Sociological methodology 36, 99–153 (2006).
Coleman, J., Katz, E. & Menzel, H. The diffusion of an innovation among physicians. Sociometry 20, 253–270 (1957).
Chen, B. L., Hall, D. H. & Chklovskii, D. B. Wiring optimization can relate neuronal structure and function. Proceedings of the National Academy of Sciences 103, 4723–4728 (2006).
De Domenico, M., Porter, M. A. & Arenas, A. MuxViz: a tool for multilayer analysis and visualization of networks. Journal of Complex Networks 3, 159–176 (2015).
De Domenico, M., Nicosia, V., Arenas, A. & Latora, V. Structural reducibility of multilayer networks. Nature communications 6, 6864 (2015).
Stark, C. et al. BioGRID: a general repository for interaction datasets. Nucleic acids research 34, D535–D539 (2006).
Department of Computer Science, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
Zeynab Samei
School of Engineering, RMIT University, Melbourne, Australia
Mahdi Jalili
Search for Zeynab Samei in:
Search for Mahdi Jalili in:
Z.S. conceived the study, performed the experiments, analyzed the data, and wrote the manuscript. M.J. analyzed the results and wrote the paper. Both authors approved the final version of the manuscript.
Correspondence to Zeynab Samei.
The authors declare no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Samei, Z., Jalili, M. Application of hyperbolic geometry in link prediction of multiplex networks. Sci Rep 9, 12604 (2019) doi:10.1038/s41598-019-49001-7
Scientific Reports menu
About Scientific Reports
Guest Edited Collections
Scientific Reports Top 100 2017
Scientific Reports Top 10 2018
Editorial Board Highlights
Author Highlights | CommonCrawl |
Research | Open | Published: 15 April 2019
A unified complex noncentral Wishart type distribution inspired by massive MIMO systems
Johannes T. Ferreira ORCID: orcid.org/0000-0002-5945-65501 na1 &
Andriëtte Bekker1 na1
Journal of Statistical Distributions and Applicationsvolume 6, Article number: 4 (2019) | Download Citation
The eigenvalue distributions from a complex noncentral Wishart matrix S=XHX has been the subject of interest in various real world applications, where X is assumed to be complex matrix variate normally distributed with nonzero mean M and covariance Σ. This paper focuses on a weighted analytical representation of S to alleviate the restriction of normality; thereby allowing the choice of X to be complex matrix variate elliptically distributed for the practitioner. New results for eigenvalue distributions of more generalised forms are derived under this elliptical assumption, and investigated for certain members of the complex elliptical class. The distribution of the minimum eigenvalue enjoys particular attention. This theoretical investigation has proposed impact in communications systems (where massive datasets can be conveniently formulated in matrix terms), in particular the case where the noncentral matrix has rank one which is useful in practice.
Communications systems with multiple-input-multiple-output (MIMO) design have become very popular since they allow higher bit rate and because of their applications in the analysis of signal-to-noise ratio (SNR). The literature of research on MIMO systems insists on MIMO systems to be modelled using complex matrix variate distributions (see Ratnarajah and Vaillancourt (2005); Bekker et al. (2018); Ferreira et al. (2020)), in particular due to the flexibility these distributions provide in terms of the massive amounts of data that springs forth from these MIMO systems. He et al. (2016) in particular mentions the unification of random matrix theory (RMT) models, and draws a comparison between such unified models and so-called big data analytics. The authors make specific mention to one of the foundations of big data analytics in communications systems, namely matrix analysis. Zhang and Qui (2015) and He et al. (2018) also have thoughts on the implementation and use of large RMT as building blocks to model the massive (big) data arising from massive MIMO systems, mentioning several benefits to the use of RMT in this regard.
In a practical sense, let X denote the channel propagation matrix in a MIMO channel context, with n "inputs" and p "outputs", colloquially referred to as "receivers" and "transmitters" respectively. Usually, the coefficients of X are assumed to be complex matrix variate normally distributed, andFootnote 1 E(X)=0, which reflects the standard i.i.d. Rayleigh fading assumption. However, in practice MIMO channels don't always exhibit this, stemming from a line-of-sight connection between the transmitters and receivers (Kang and Alouini 2006; Zhou et al. 2015; Jayaweera and Poor 2003) motivates the channel matrix X to be modelled having non-zero mean, to account for environments with strong line-of-sight paths between transmitters and receivers. In order to encompass all channel characteristics, Taricco and Riegler (2011) suggests employing correlated Rician fading models - which directly pertains to modeling X with a non-zero mean. It is with these thoughts in mind that this paper assumes E(X)=M≠0.
When evaluating different performance measures of MIMO systems, the complex channel coefficients have been taken to be complex matrix variate normal distributed so far. de Souza and Yacoub (2008) stated that the Rayleigh probability density function (pdf) (assumed within a signal fading environment) is a consequence based on the assumption from the central limit theorem for large number of partial waves, the resultant process is decomposed into two orthogonal zero-mean and equal standard deviation normal random processes. This is an approximation and the restriction of complex normal is restrictive as it is not always a large number of interfering signals. Thus a more general assumption than complex matrix variate normal may not be that far from reality (see also Ollila et al. (2011)). This paper challenges this assumption of a channel being fed by normal inputs, and sets the platform for introducing previously unconsidered models to the MIMO communications systems domain. Indeed, He et al. (2016) and Qiu (2017) explicitly asks what the consequences of analyses are when the entries of X is not normal. The contribution of the work in this paper aims to assist answering this question.
The Wishart distribution emanating from the underlying complex normal channel matrix X is of particular interest, and has been studied to a wide extent in literature (see for example, James (1964); Gupta and Varga (1995); Ratnarajah and Vaillancourt (2005)). However, Choi et al. (2007) discussed the viable and necessary contribution of the complex matrix variate t distribution as assumption for the underlying channel matrix. This paper focus onFootnote 2 S=XHX, but from a generalized view of assumingFootnote 3$\mathbf {X}\in \mathbb {C}_{1}^{n\times p}$ to be the complex matrix variate elliptical distribution, to address the criticism against the questionable use of the normal model. This complex matrix variate elliptical distribution contains the well-studied complex matrix variate normal distribution as a special case, but enjoys the flexibility to have different members which may serve as alternatives for the well-studied normal case. The complex matrix variate t- and slash distributions are also members of the complex elliptical class and bear close resemblance and familiarity to the well-studied normal case; with this notion, results pertaining to the underlying complex channel matrix distributed according to these distributions are presented. The distribution under consideration, that is, the distribution of S, is referred to as a complex noncentral Wishart type distribution.
He et al. (2016, 2018) mentions a crucial point of consideration for big data analytics is the "big" data matrix (in this case, X, or effectively S), and the study of its eigenvalues. The distribution of the minimum eigenvalue of the noncentral Wishart type distribution is thus investigated and expressions for the corresponding cumulative distribution functions (cdfs) derived. The distribution for the minimum eigenvalue from a noncentral Wishart form is crucial for the design and analysis of certain specialised MIMO systems (see Heath and Love (2005); Dharmawansa and McKay (2011)). For computational convenience the focus is on matrices with rank one noncentral matrix parameter. The low rank assumption is reportedly well modelled in practice (see Hansen and Bolcskei (2004)), and allows for tractable expressions in implementable computation of the derived results.
The paper is organized as follows. "Complex noncentral Wishart type" section contains some preliminary results required for the derivations in this paper. The main results relating to the distribution of the complex noncentral Wishart type distribution are also derived and some particular cases highlighted. In "Minimum eigenvalue cdf under rank one noncentrality" section the cdf of the minimum eigenvalue of the newly derived distributions is presented with special cases. Numerical experiments are discussed in "Numerical experiments" section, followed by some conclusions.
Complex noncentral Wishart type
In this section, the definition of the complex matrix variate elliptical distribution is presented, along with the lemma useful for the construction of the complex matrix variate elliptical model. Subsequently the derived complex noncentral Wishart type distribution is presented along with the corresponding joint eigenvalue distribution. Some particular cases, which is of interest for the practitioner, are highlighted.
The complex matrix variate elliptical distribution, which contains the well-studied complex matrix variate normal distribution as a special case, is defined next (see Bekker et al. (2018); Ferreira et al. (2020)).
The complex matrix variate $\mathbf {X}\in \mathbb {C}_{1}^{n\times p}$, whose distribution is absolutely continuous, has the complex matrix variate elliptical distribution with parameters $\mathbf {M}\in \mathbb {C} _{1}^{n\times p}$, $\mathbf {\Phi }\in \mathbb {C}_{2}^{n\times n}$, $\mathbf { \Sigma }\in \mathbb {C}_{2}^{p\times p}$, denoted by $\mathbf {X}\sim \mathcal { C}E_{n\times p}(\mathbf {M},\mathbf {\Phi \otimes \Sigma,}g)$, if it has the following pdfFootnote 4:
$$ h(\mathbf{X})=\frac{1}{\left(\det \mathbf{\Phi }\right)^{p}\left(\det \mathbf{\Sigma }\right)^{n}}\;g\left[ -tr\left(\mathbf{\Sigma }^{-1}(\mathbf{X}-\mathbf{M})^{H}\Phi^{-1}(\mathbf{X}-\mathbf{M})\right) \right] $$
with g(·) a generator function.
Chu (1973) and Gupta and Varga (1995) demonstrates that real elliptical distributions can always be expanded as an integral of a set of normal pdfs. We report the result by Provost and Cheong (2002) as a useful lemma, defining the complex matrix variate elliptical distribution as a weighted representation of complex matrix variate normal pdfs. This representation can be used to explore the distribution of S when the distribution of X can be that of any member of the complex matrix variate elliptical class.
Lemma 1
If $\mathbf {X}\sim \mathcal {C}E_{n\times p}(\mathbf {M}, \mathbf {\Phi \otimes \Sigma },g)$ with pdf h(X) (see (1)), then there exists a scalar weight function $\mathcal {W}(\cdot)$ onFootnote 5$\mathbb {R}^{+}$ such that
$$ h(\mathbf{X})=\int\limits_{\mathbb{R}^{+}}\mathcal{W}(t)f_{\mathcal{C} N_{n\times p}(\mathbf{M},\mathbf{\Phi \otimes }t^{-1}\mathbf{\Sigma })}(\mathbf{X|}t)dt $$
where $\mathbf {X|}t\sim \mathcal {C}N_{n\times p}\left (\mathbf {M},\mathbf {\Phi \otimes }t^{-1}\mathbf {\Sigma }\right)$ has the complex normal distribution with pdf (see James (1964))
$$ f_{\mathcal{C}N_{n\times p}(\mathbf{M},\mathbf{\Phi \otimes}t^{-1}\mathbf{\Sigma })}(\mathbf{X|}t)=\frac{1}{\pi^{pn}\det \left(\mathbf{\Phi}\right)^{p}\det \left(t^{-1}\mathbf{\Sigma }\right)^{n}} etr\left[-\left(t\mathbf{\Sigma }^{-1}(\mathbf{X}-\mathbf{M})^{H}\mathbf{\Phi }^{-1}(\mathbf{X}-\mathbf{M})\right) \right] $$
and the weight function $\mathcal {W}(\cdot)$is given by
$$ \mathcal{W}(t)=\pi^{np}t^{-np}\mathcal{L}^{-1}\left\{ g\left[-{tr}\left(\mathbf{\Sigma }^{-1}(\mathbf{X}-\mathbf{M})^{H}\mathbf{\Phi }^{-1}(\mathbf{X}-\mathbf{M})\right) \right] \right\} $$
where $\mathcal {L}$ is the Laplace transform operator.
Three special cases of the complex matrix variate elliptical model are of interest in this paper.
Firstly, the complex random matrix $\mathbf {X}\in \mathbb {C}_{1}^{n\times p}$ has the complex matrix variate normal distribution with weight function $\mathcal {W}(\cdot)$ in Lemma 1 given by
$$ \mathcal{W}(t)=\delta (t-1) $$
where δ(·) is the dirac delta function.
Secondly, $\mathbf {X}\in \mathbb {C}_{1}^{n\times p}$ has the complex matrix variatetdistribution (see Provost and Cheong (2002)) with the parameters $\mathbf {M}\in \mathbb {C}_{1}^{n\times p} $, $\mathbf {\Phi }\in \mathbb {C}_{2}^{n\times n}$, $\mathbf {\Sigma }\in \mathbb {C}_{2}^{p\times p}$ and degrees of freedom v>0, denoted by $ \mathbf {X}\sim \mathcal {C}t_{n\times p}(\mathbf {M},\mathbf {\Phi \otimes \Sigma },v)$, with pdf
$$ f(\mathbf{X})=\frac{v^{np}\mathcal{C}\Gamma \left(np+v\right) }{\pi^{np} \mathcal{C}\Gamma_{p}(v)}\left\{ 1+\frac{1}{v}{tr}\left(\mathbf{\Sigma }^{-1}(\mathbf{X}-\mathbf{M})^{H}\mathbf{\Phi }^{-1}(\mathbf{X}- \mathbf{M})\right) \right\}^{-(np+v)} $$
where $\mathcal {C}\Gamma _{p}(a)$ denotes the complex multivariate gamma function Footnote 6, and Γ(·) denotes the usual gamma function. In this case the weight function $\mathcal {W}(\cdot)$ in Lemma 1 is given by
$$ \mathcal{W}(t)=\frac{\left(\frac{v}{2}\right)^{\frac{v}{2}}}{\Gamma (\frac{v}{2})}t^{\frac{v}{2}-1}e^{-t\frac{v}{2}}. $$
Thirdly, $\mathbf {X}\in \mathbb {C}_{1}^{n\times p}$ has the complex matrix variate slash distribution (see Lachos and Labra (2014)) with the parameters $\mathbf {M}\in \mathbb {C}_{1}^{n\times p}$, $\mathbf { \Phi }\in \mathbb {C}_{2}^{n\times n}$, $\mathbf {\Sigma }\in \mathbb {C}_{2}^{p\times p}$ and shape parameter b>0, denoted by $\mathbf {X}\sim \mathcal {C}s_{n\times p}(\mathbf {M},\mathbf {\Phi \otimes \Sigma },b)$, with pdf
$$ f(\mathbf{X})=\int\limits_{0}^{1}bt^{b-1}f_{\mathcal{C}N_{n\times p}(\mathbf{ M},\mathbf{\Phi \otimes }t^{-1}\mathbf{\Sigma })}(\mathbf{X|}t)dt $$
In this case the weight function $\mathcal {W}(\cdot)$ in Lemma 1 is given by
$$ \mathcal{W}(t)=bt^{b-1}. $$
The case where Φ=In is of particular interest (in Lemma 1). Hence, Σ represents the covariance structure of the columns of the random matrix variate X, in other words, the covariance structure of the transmitters. Subsequently, the complex noncentral Wishart type distribution is derived (the proof is contained in the Appendix).
Suppose that $\mathbf {X}\in \mathbb {C}_{1}^{n\times p} (n\geq p)$is a random matrix distributed as $\mathcal {C}E_{n\times p}(\mathbf {M},\mathbf {I}_{n}\mathbf {\otimes \Sigma },g)$. Then $\mathbf {S=X}^{H} \mathbf {X}\in \mathbb {C}_{2}^{p\times p}$ has a complex noncentral Wishart type distribution with pdf
$$\begin{array}{@{}rcl@{}} f\left(\mathbf{S}\right) &=&\frac{\det \left(\mathbf{S}\right)^{n-p}}{ \mathcal{C}\Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}} \notag \\ &&\times \int\limits_{\mathbb{R}^{+}}t^{np}{etr}\left(-t\left(\mathbf{ \Sigma }^{-1}\mathbf{S+\Delta }\right) \right) \text{ }_{0}F_{1}\left(n;t^{2}\mathbf{\Delta \Sigma }^{-1}\mathbf{S}\right) \mathcal{W}\left(t\right) dt \end{array} $$
where Δ=Σ−1MHM denotes the noncentral matrix parameter and 0F1(·) denotes the complex hypergeometric function of matrix argument (see Constantine (1963)). This distribution is denoted by S∼ISCWp(n,M,In⊗Σ) (Integral Series of Complex Wishart).
Suppose that M=0. Then Δ=0, and the pdf (7) simplifies to
$$ f^{central}\left(\mathbf{S}\right) =\int\limits_{\mathbb{R}^{+}}\frac{\det \left(\mathbf{S}\right)^{n-p}{etr}\left(-\left(t^{-1}\mathbf{\Sigma }\right)^{-1}\mathbf{S}\right) }{\mathcal{C}\Gamma_{p}(n)\det \left(t^{-1} \mathbf{\Sigma }\right)^{n}}\mathcal{W}\left(t\right) dt, $$
$\mathbf {S}\in \mathbb {C}_{2}^{p\times p},$ which reflects the distribution as in Ferreira et al. (2020), eq. 2.2.
The complex noncentral Wishart type distribution (see (7)) can be written in terms of the complex central Wishart type distribution:
$$ f\left(\mathbf{S}\right) =\int\limits_{\mathbb{R}^{+}}f^{central}\left(\mathbf{S}\right) {etr}\left(-t\mathbf{\Delta }\right) \text{ } _{0}F_{1}\left(n;t^{2}\mathbf{\Delta \Sigma }^{-1}\mathbf{S}\right) \mathcal{W}\left(t\right) dt, $$
where fcentral(S) denotes the pdf of the central complex Wishart type distribution (see (8)).
Special cases of the distribution in (7) are highlighted next.
By choosing $\mathcal {W}\left (t\right) $ as the dirac delta function (4), (7) simplifies to
$$ f\left(\mathbf{S}\right) =\frac{\det \left(\mathbf{S}\right)^{n-p}}{ \mathcal{C}\Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}}{etr} \left(-\left(\mathbf{\Sigma }^{-1}\mathbf{S+\Delta }\right) \right) \text{ }_{0}F_{1}\left(n;\mathbf{\Delta \Sigma }^{-1}\mathbf{S}\right) $$
where $\mathbf {S}\in \mathbb {C}_{2}^{p\times p}$, which is the complex matrix variate normal distribution as in James (1964).
By choosing $\mathcal {W}\left (t\right) $ as the t distribution weight (5), expanding the complex hypergeometric function per definitionFootnote 7 (see Constantine (1963)), and using Gradshteyn and Ryzhik (2007), p. 815, eq. 7.522.9, eq. 7.525.1, (7)simplifies to
$$\begin{array}{@{}rcl@{}} &&f\left(\mathbf{S}\right) \\ &=&\frac{\left(\frac{v}{2}\right)^{\frac{v}{2}}}{ \Gamma (\frac{v}{2})}\frac{\det \left(\mathbf{S}\right)^{n-p}}{\mathcal{C} \Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}}\sum_{k=0}^{\infty }\sum_{\kappa }\frac{C_{\kappa }\left(\mathbf{\Delta \Sigma }^{-1}\mathbf{S} \right) }{k!\left[ n\right]_{\kappa }}\int\limits_{\mathbb{R}^{+}}t^{np+ \frac{v}{2}+2k-1}\exp \left[ -t{tr}\left(\mathbf{\Sigma }^{-1}\mathbf{ S+\Delta +}\frac{v}{2}\right) \right] dt \\ &=&\frac{\left(\frac{v}{2}\right)^{\frac{v}{2}}}{\Gamma (\frac{v}{2})} \frac{\det \left(\mathbf{S}\right)^{n-p}}{\mathcal{C}\Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}}\sum_{k=0}^{\infty }\sum_{\kappa }\frac{ C_{\kappa }\left(\mathbf{\Delta \Sigma }^{-1}\mathbf{S}\right) }{k!\left[ n \right]_{\kappa }}\frac{\Gamma \left(np+\frac{v}{2}+2k\right) }{\left({tr}\left(\mathbf{\Sigma }^{-1}\mathbf{S+\Delta +}\frac{v}{2}\right) \right)^{np+\frac{v}{2}+2k}} \end{array} $$
where $\mathbf {S}\in \mathbb {C}_{2}^{p\times p}.$
Similarly, by choosing $\mathcal {W}\left (t\right) $ as the slash distribution weight (6), expanding the complex hypergeometric function per definition, and using Gradshteyn and Ryzhik (2007), p. 346, eq. 3.381.1, (7) simplifies to
$$\begin{array}{@{}rcl@{}} f\left(\mathbf{S}\right) &=&\frac{b\det \left(\mathbf{S}\right)^{n-p}}{ \mathcal{C}\Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}} \sum_{k=0}^{\infty }\sum_{\kappa }\frac{C_{\kappa }\left(\mathbf{\Delta \Sigma }^{-1}\mathbf{S}\right) }{k!\left[ n\right]_{\kappa }} \int\limits_{0}^{1}t^{np+b+2k-1}\exp \left[ -t{tr}\left(\mathbf{\Sigma }^{-1}\mathbf{S+\Delta }\right) \right] dt \\ &=&\frac{b\det \left(\mathbf{S}\right)^{n-p}}{\mathcal{C}\Gamma _{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}}\sum_{k=0}^{\infty }\sum_{\kappa }\frac{C_{\kappa }\left(\mathbf{\Delta \Sigma }^{-1}\mathbf{S} \right) }{k!\left[ n\right]_{\kappa }}\frac{\gamma \left(np+b+2k,{tr} \left(\mathbf{\Sigma }^{-1}\mathbf{S+\Delta }\right) \right) }{{tr} \left(\mathbf{\Sigma }^{-1}\mathbf{S+\Delta }\right)^{np+b+2k}} \end{array} $$
where γ(·,·) denotes the lower incomplete gamma function (see Gradshteyn and Ryzhik (2007), p. 899, eq. 8.350.1), and $\mathbf {S}\in \mathbb {C}_{2}^{p\times p}.$
Eigenvalue distributions arising from complex Wishart random matrices are of interests in a variety of fields, especially in the case of wireless communications (see Dharmawansa and McKay (2011) and references therein). Expressions for the joint pdf of the eigenvalues of S (see (7)) and some special cases are derived (the proof is contained in the Appendix). Note that the ordered eigenvalues of S is denoted by λ1>λ2>...>λp>0. The ordered eigenvalues of the noncentral matrix parameter Δ is denoted by μ1>μ2>...>μp>0.
Suppose that $\mathbf {S}\in \mathbb {C}_{2}^{p\times p}$ is distributed with pdf (7), and let λ1>λ2>...>λp>0 represent the ordered eigenvalues of S. Then the eigenvalues of S, Λ=diag(λ1,λ2,...,λp), has joint pdf
$$\begin{array}{@{}rcl@{}} f(\mathbf{\Lambda }) &=&\frac{\pi^{p\left(p-1\right) }\left(\prod\limits_{k< l}^{p}\left(\lambda_{k}-\lambda_{l}\right)^{2}\right) \det \left(\mathbf{\Lambda }\right)^{n-p}}{\mathcal{C}\Gamma_{p}(p) \mathcal{C}\Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}} \int\limits_{\mathbb{R}^{+}}t^{np}{etr}\left(-t\mathbf{\Delta }\right) \\ &&\times \int\limits_{\mathbf{E}\in U\left(p\right) }{etr}\left(-t \mathbf{\Sigma }^{-1}\mathbf{E\Lambda E}^{H}\right) \text{ }_{0}F_{1}\left(n;t^{2}\mathbf{\Delta \Sigma }^{-1}\mathbf{E\Lambda E}^{H}\right) d\mathbf{E} \mathcal{W}\left(t\right) dt \notag \end{array} $$
where Δdenotes the noncentral matrix parameter, and U(p) denotes the unitary manifold (see Appendix).
In the following corollary, particular attention is given to the case when Σ=σ2Ip. This assumption is meaningful within the MIMO paradigm, when the practitioner may assume that the transmitters are sufficiently spatially far from each other, so that an assumption of independence can be made (see also Kang and Alouini (2006)) (the proof is contained in the Appendix).
Corollary 1
Suppose that $\mathbf {S}\in \mathbb {C} _{2}^{p\times p}$ is distributed with pdf (7), and let λ1>λ2>...>λp>0 represent the ordered eigenvalues of S. Furthermore suppose that Σ=σ2Ip. Then the eigenvalues of S, Λ=diag(λ1,λ2,...,λp), has joint pdf
$$\begin{array}{@{}rcl@{}} &&f(\mathbf{\Lambda }) \notag \\ &=&\frac{\pi^{p\left(p-1\right) }}{\left(\left(n-p\right) !\right)^{p}} \frac{\left(\prod\limits_{k< l}^{p}\left(\lambda_{k}-\lambda_{l}\right) \right) \det \left(\mathbf{\Lambda }\right)^{n-p}}{\left(\prod\limits_{k< l}^{p}\left(\mu_{k}-\mu_{l}\right) \right) \sigma^{2np-p^{2}+1}}\int\limits_{\mathbb{R}^{+}}t^{np-p^{2}+1}{etr}\left(-t\left(\mathbf{\Delta +}\sigma^{-2}\mathbf{\Lambda }\right) \right) \notag \\ &&\times \det \left(\text{ }_{0}F_{1}\left(n-p+1;t^{2}\sigma^{-2}\mu_{j}\lambda _{i}\right) \right) \mathcal{W}\left(t\right) dt \notag \\ &=&\frac{\pi^{p\left(p-1\right) }}{\left(\left(n-p\right) !\right)^{p}} \mathcal{K}\left(\mathbf{\Lambda }\right) \int\limits_{\mathbb{R} ^{+}}t^{np-p^{2}+1}{etr}\left(-t\left(\mathbf{\Delta +}\sigma^{-2} \mathbf{\Lambda }\right) \right) \notag \\ &&\times \det \left(\text{ }_{0}F_{1}\left(n-p+1;t^{2}\sigma^{-2}\mu_{j}\lambda_{i}\right) \right) \mathcal{W}\left(t\right) dt \end{array} $$
where $\mathbf {\Delta }\in \mathbb {C}_{2}^{p\times p}$ denotes the noncentral matrix parameter, 0F1(·;·) denotes the confluent hypergeometric function of scalar argument, and where
$$ \mathcal{K}\left(\mathbf{\Lambda }\right) =\frac{\left(\prod\limits_{k< l}^{p}\left(\lambda_{k}-\lambda_{l}\right) \right) \det \left(\mathbf{\Lambda }\right)^{n-p}}{\left(\prod\limits_{k< l}^{p}\left(\mu_{k}-\mu_{l}\right) \right) \sigma^{2np-p^{2}+1}}. $$
For interest, special cases of the distribution in (10) are highlighted next.
By choosing $\mathcal {W}\left (t\right) $ as the dirac delta function (4), observe from (10) and (11) that
$$f(\mathbf{\Lambda})=\frac{\pi^{p\left(p-1\right)}}{\left(\left(n-p\right) !\right)^{p}}\mathcal{K}\left(\mathbf{\Lambda }\right) {etr}\left(-\left(\mathbf{\Delta +}\sigma^{-2}\mathbf{\Lambda }\right) \right) \det \left(\text{ }_{0}F_{1}\left(n-p+1;\sigma^{-2}\mu_{j}\lambda_{i}\right) \right). $$
When σ2=1, this result simplifies to p. 41, eq. 2.52 of McKay (2006).
By choosing $\mathcal {W}\left (t\right) $ as the t distribution weight (5) and using (11), (10) simplifies to
$$\begin{array}{@{}rcl@{}} f(\mathbf{\Lambda }) &=&\frac{\left(\frac{v}{2}\right)^{\frac{v}{2}}\pi^{p\left(p-1\right) }}{\Gamma (\frac{v}{2})\left(\left(n-p\right)!\right)^{p}}\mathcal{K}\left(\mathbf{\Lambda }\right) \\ &&\times \int\limits_{\mathbb{R}^{+}}t^{np-p^{2}+\frac{v}{2}+1}{etr} \left(-t\left(\mathbf{\Delta +}\sigma^{-2}\mathbf{\mathbf{\Lambda }+} \frac{v}{2}\right) \right) \\ &&\times \det \left(\text{ }_{0}F_{1}\left(n-p+1;t^{2}\sigma^{-2}\mu_{j}\lambda_{i}\right) \right) dt. \end{array} $$
By choosing $\mathcal {W}\left (t\right) $ as the slash distribution weight (6) and using (11), (10) simplifies to
$$\begin{array}{@{}rcl@{}} f(\mathbf{\Lambda }) &=&\frac{b\pi^{p\left(p-1\right) }}{\left(\left(n-p\right) !\right)^{p}}\mathcal{K}\left(\mathbf{\Lambda }\right) \\ &&\times \int\limits_{0}^{1}t^{np-p^{2}+b}{etr}\left(-t\left(\mathbf{\Delta +}\sigma^{-2}\mathbf{\mathbf{\Lambda }}\right) \right) \det \left(\text{ }_{0}F_{1}\left(n-p+1;t^{2}\sigma^{-2}\mu_{j}\lambda_{i}\right) \right) dt. \end{array} $$
Suppose now the noncentral matrix Δ has L≤p non-zero eigenvalues, thus, rank(Δ)=L≤p. For the case Σ=σ2Ip, the joint pdf of eigenvalues of S, Λ=diag(λ1,λ2,...,λp), is presented in the following theorem (the proof is contained in the Appendix 1).
Suppose that S is distributed with pdf (7), and let λ1>λ2>...>λp>0 represent the ordered eigenvalues of $\mathbf {S}\in \mathbb {C}_{2}^{p\times p}$. Furthermore suppose that Σ=σ2Ip, and that Δ has arbitrary rank L<p with eigenvalues μ1>μ2>...>μL>0. Then the eigenvalues of S, Λ=diag(λ1,λ2,...,λp), has joint pdf
$$\begin{array}{@{}rcl@{}} f(\mathbf{\Lambda })=&&\frac{\pi^{p\left(p-1\right) }\left(\prod\limits_{k< l}^{p}\left(\lambda_{k}-\lambda_{l}\right) \right) \det \left(\mathbf{\Lambda }\right)^{n-p}}{\left(\left(n-p\right) !\right) ^{p}\left(\prod\limits_{k< l}^{L}\left(\mu_{k}-\mu_{l}\right) \right) \left(\prod\limits_{i=1}^{L}\mu_{i}^{p-L}\right) \mathcal{C}\Gamma _{p-L}(p-L)\sigma^{2np-p^{2}+1}} \\ &&\times \int\limits_{\mathbb{R}^{+}}t^{np-p^{2}+1} {etr}\left(-t\left(\mathbf{\Delta +}\sigma^{-2}\mathbf{\Lambda } \right) \right) \det \left(\mathbf{T}\right) \mathcal{W}\left(t\right) dt \end{array} $$
where Δ denotes the noncentral matrix parameter, and where T is a p×p matrix with (i,j)th entry
$$ \left\{\mathbf{T}\right\}_{i,j}= \left\{\begin{array}{lll} _{0}F_{1}\left(n-p+1 ;t^{2}\sigma^{-2}\mu_{i}\lambda_{j}\right) \quad\quad & i=1,\ldots,p \quad\quad & j=1,\ldots,L\\ \frac{\left(t^{2}\lambda_{i}\right)^{k}\left(n-p\right) !}{\left(n-p+k\right)!} \quad\quad & i=1,\ldots,p \quad\quad & j=L+1,\ldots,p \end{array}\right.. $$
Minimum eigenvalue cdf under rank one noncentrality
The distribution of the minimum eigenvalue from a complex Wishart random matrix is important in certain MIMO designs (see Heath and Love (2005)), and is thus of interest here. For computational convenience, we assume that the noncentral matrix has rank one; thus $\mathbf {\Delta \Sigma } ^{-1}\in \mathbb {C}_{1}^{p\times p}$ has rank one and is represented via its eigendecomposition as
$$ \mathbf{\Delta \Sigma }^{-1}=\mu \mathbf{\gamma \gamma }^{H} $$
where $\mathbf {\gamma }\in \mathbb {C}_{1}^{p\times 1}$ and γHγ=1 (see also (Dharmawansa and McKay 2011)). In (13), μ denotes the single eigenvalue of ΔΣ−1. The following contributions are made in this section:
The derivation of the exact cdf of the minimum eigenvalue of S=XHX∼ISCWp(n,M,In⊗Σ) for the case when $\mathbf {X}\in \mathbb {C} _{1}^{n\times p}$, $\mathbf {X}\in \mathbb {C}_{1}^{n\times n}$, and $\mathbf {X }\in \mathbb {C}_{1}^{n\times 2}$, and assuming $\mathbf {\Delta \Sigma } ^{-1}\in \mathbb {C}_{1}^{p\times p}$ has rank one; and
Exact results of the minimum eigenvalue of S as described, for the special cases of (4), (5), and (6).
To derive the cdf of the minimum eigenvalue of $\mathbf {S}\in \mathbb {C}_{2}^{p\times p}$ under this assumption, the following approach is employed:
$$F_{\min }\left(y\right) =1-P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right). $$
Knowing that
$$P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right) =P\left(\mathbf{S }>y\mathbf{I}_{p}\right) $$
the cdf of the minimum eigenvalue can be found using (7) directly, therefore avoiding cumbersome derivations and computations of deriving the joint eigenvalue pdfs and subsequently marginal distributions with pdfs like (9).
For the complex noncentral Wishart type distribution with pdf (7), the cdf of the minimum eigenvalue is derived next (the proof is contained in the Appendix 1).
Suppose that $\mathbf {X}\in \mathbb {C}_{1}^{n\times p}$ is distributed as $\mathcal {C}E_{n\times p}(\mathbf {M},\mathbf {I}_{n}\mathbf { \otimes \Sigma },g)$, where $\mathbf {M}\in \mathbb {C}_{1}^{n\times p}$ has rank one, and S=XHX∼ISCWp(n,M,In⊗Σ)with pdf (7). The cdf of λmin(S) is given by
$$ F_{\min }\left(y\right) =1-\int\limits_{\mathbb{R}^{+}}y^{np}\frac{{etr }\left(-t\mathbf{\Delta }\right) {etr}\left(-ty\mathbf{\Sigma } ^{-1}\right) }{\mathcal{C}\Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right) ^{n}}\sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{t^{np+2k}\left(y\mu \right) ^{k}}{k!\left(n\right)_{k}}\binom{k}{r}\mathcal{Q}_{n,p,t}^{r}\left(y\right) \mathcal{W}\left(t\right) dt $$
where y>0, Δ denotes the noncentral matrix parameter, and
$$ \mathcal{Q}_{n,p,t}^{r}\left(y\right) =\int\limits_{\mathbf{Y}}\det \left(\mathbf{I}_{p}\mathbf{+Y}\right)^{n-p}{etr}\left(-ty\mathbf{\Sigma } ^{-1}\mathbf{Y}\right) {tr}^{r}\left(\mathbf{\gamma \gamma }^{H} \mathbf{Y}\right) d\mathbf{Y} $$
where $\mathbf {Y\in }\mathbb {C}_{2}^{p\times p}$.
As before, special cases of the distribution in (14) are highlighted next.
By choosing $\mathcal {W}\left (t\right) $ as the dirac delta function (4), (14) simplifies to the result by Dharmawansa and McKay (2011).
By choosing $\mathcal {W}\left (t\right) $ as the t distribution weight (5), observe from (14) that
$$\begin{array}{@{}rcl@{}} &&P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right) \\ &=&\frac{\left(\frac{v}{2}\right)^{\frac{v}{2}}}{\Gamma (\frac{v}{2})}\int\limits_{\mathbb{ R}^{+}}y^{np}\frac{{etr}\left(-t\left(\mathbf{\Delta }+\frac{v}{2} \right) \right) {etr}\left(-ty\mathbf{\Sigma }^{-1}\right) }{\mathcal{C}\Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}}\sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{t^{np+2k+\frac{v}{2}}\left(y\mu \right)^{k}}{k!\left(n\right)_{k}}\binom{k}{r}\mathcal{Q}_{n,p,t}^{r}\left(y\right) dt \\ &=&I_{1} \end{array} $$
where $\mathcal {Q}_{n,p,t}^{r}\left (y\right) $ is given by (15). Thus Fmin(y)=1−I1.
By choosing $\mathcal {W}\left (t\right) $ as the slash distribution weight (6), observe from (14) that
$$\begin{array}{@{}rcl@{}} &&P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right) \\ &&=b\int\limits_{0}^{1}y^{np}\frac{{etr}\left(-t\left(\mathbf{\Delta } \right) \right) {etr}\left(-ty\mathbf{\Sigma }^{-1}\right) }{\mathcal{C }\Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}}\sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{t^{np+2k+b}\left(y\mu \right)^{k}}{k!\left(n\right) _{k}}\binom{k}{r}\mathcal{Q}_{n,p,t}^{r}\left(y\right) dt \\ &&=I_{2} \end{array} $$
The following result gives the exact minimum eigenvalue distribution for n×n complex noncentral Wishart type matrices with n degrees of freedom (the proof is contained in the Appendix).
Suppose that $\mathbf {X}\in \mathbb {C}_{1}^{n\times n}$ is distributed as $\mathcal {C}E_{n\times n}(\mathbf {M},\mathbf {I}_{n}\mathbf { \otimes \Sigma },g)$, where $\mathbf {M}\in \mathbb {C}_{1}^{n\times n}$ has rank one, and S=XHX∼ISCWn(n,M,In⊗Σ)with pdf (7). The cdf of λmin(S) is given by
$$ F_{\min }\left(y\right) =1-\int\limits_{\mathbb{R}^{+}}{etr}\left(-t \mathbf{\Delta }\right) {etr}\left(-ty\mathbf{\Sigma }^{-1}\right) \sum_{j=0}^{\infty }\frac{\left(yt^{2}\mu \right)^{j}}{j!\left(n\right) _{j}}\text{ }_{1}F_{1}\left(n;n+j,t{tr}\mathbf{\Delta }\right) \mathcal{W}\left(t\right) dt $$
where 1F1(·) denotes the confluent hypergeometric function (see Gradshteyn and Ryzhik (2007), p. 1010, eq. 9.14.1).
See that (27) can also be expressed as
$$ \sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{\left(yt^{2}\mu \right)^{k}}{ k!\left(n\right)_{k}}\binom{k}{r}\left(n\right)_{r}\left(\frac{1}{\mu ty }\right)^{r}\left({tr}^{r}\mathbf{\Delta }\right) =\Phi_{3}\left(n,n,t{tr}\mathbf{\Delta,}yt^{2}\mu \right) $$
where Φ3(·) denotes the Humbert confluent hypergeometric function of two variables (see Bateman and Erdélyi (1953), p. 225, eq. 5.7.1.22). Thus (16) can be written as
$$ F_{\min }\left(y\right) =1-\int\limits_{\mathbb{R}^{+}}{etr}\left(-t \mathbf{\Delta }\right) {etr}\left(-ty\mathbf{\Sigma }^{-1}\right) \Phi_{3}\left(n,n,t{tr}\mathbf{\Delta,}yt^{2}\mu \right) \mathcal{W} \left(t\right) dt. $$
Special cases of the distribution in (16) are highlighted next.
By choosing $\mathcal {W}\left (t\right) $ as the t distribution weight (5) and by applying Gradshteyn and Ryzhik (2007), p. 815, eq. 7.522.9, from (16) it follows:
$$ F_{\min }\left(y\right) =1-\frac{\left(\frac{v}{2}\right)^{\frac{v}{2}}}{ \Gamma (\frac{v}{2})}\sum_{j=0}^{\infty }\frac{\left(y\mu \right)^{j}}{ j!\left(n\right)_{j}}\frac{\Gamma \left(v+2j\right) }{\left({tr} \left(y\mathbf{\Sigma }^{-1}+\mathbf{\Delta +}\frac{v}{2}\right)\right)^{ \frac{v}{2}+2j}} \times \text{}_{2}F_{1}\left(n,v+2j;n+j,\frac{{tr}\mathbf{ \Delta }}{{tr}\left(y\mathbf{\Sigma }^{-1}+\mathbf{\Delta +}\frac{v}{2} \right) }\right) $$
where 2F1(·) denotes the Gauss hypergeometric function (see Gradshteyn and Ryzhik (2007), p. 1010, eq. 9.14.2).
By choosing $\mathcal {W}\left (t\right) $ as the slash distribution weight (6), observe from (16) that:
$$ F_{\min }\left(y\right) =1-b\sum_{j=0}^{\infty }\frac{\left(y\mu \right) ^{j}}{j!\left(n\right)_{j}}\int\limits_{0}^{1}{etr}\left(-t\left(\mathbf{\Delta +}y\mathbf{\Sigma }^{-1}\right) \right) t^{b+2j-1}\text{ } _{1}F_{1}\left(n;n+j,t{tr}\mathbf{\Delta }\right) dt. $$
The following result gives the exact minimum eigenvalue distribution for 2×2 complex noncentral Wishart type matrices with arbitrary degrees of freedom. Scenarios of this 2×2 nature has been investigated in the literature for both exemplary- as well as practical reasons (see Ratnarajah and Vaillancourt (2005), for example) (the proof is contained in the Appendix 1).
Suppose that $\mathbf {X}\in \mathbb {C}_{1}^{n\times 2}$ is distributed as $\mathcal {C}E_{n\times 2}(\mathbf {M},\mathbf {I}_{n}\mathbf { \otimes \Sigma },g)$, where $\mathbf {M}\in \mathbb {C}_{1}^{n\times 2}$ has rank one, and S=XHX∼ISCW2(n,M,In⊗Σ)with pdf (7). Thus, S is a 2×2 complex noncentral Wishart type matrix with arbitrary degrees of freedom n. The cdf of λmin(S) is given by
$$ F_{\min }\left(y\right) =1-\int\limits_{\mathbb{R}^{+}}\frac{{etr} \left(-t\mathbf{\Delta }\right) {etr}\left(-ty\mathbf{\Sigma } ^{-1}\right) }{\mathcal{C}\Gamma_{2}(n)\det \left(\mathbf{\Sigma }\right) ^{n-2}}\sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{\left(yt^{2}\mu \right)^{k} }{k!\left(n\right)_{k}}\binom{k}{r}\left(\frac{{tr}\left(\mathbf{ \Delta }\right) }{yt\mu }\right)^{r}\rho \left(r,y,t\right) \mathcal{W} \left(t\right) dt $$
$$\begin{array}{@{}rcl@{}} &&\rho \left(r,y,t\right) \notag\\ &=&\sum_{i_{1}=0}^{n-2}\sum_{i_{2}=0}^{i_{1}}\sum_{h=0}^{\min \left(i_{2},r\right) }\left(-1\right)^{h}\binom{n-2}{i_{1}}\binom{i_{1}}{i_{2}} \binom{r}{h}i_{2}!\left(i_{1}-i_{2}+2\right)_{r}\mathcal{C}\Gamma _{2}\left(i_{1}-i_{2}+2\right) \\ &&\times \left(\frac{\mu }{{tr}\left(\mathbf{\Delta }\right) }\right) ^{h}\left(\det \mathbf{\Sigma }\right)^{i_{1}+\frac{h}{2}-\frac{i_{2}}{2}} \mathfrak{C}_{i_{2}-h}^{i_{1}-i_{2}+2+r}\left(\frac{1}{2}{tr}\left(\mathbf{\Sigma }^{-1}\right) \sqrt{\det \left(\mathbf{\Sigma }\right) } \right) \left(ty\right)^{2n+i_{2}-2i_{1}-4} \notag \end{array} $$
where $\mathfrak {C}_{n}^{v}\left (\cdot \right) $ denotes the Gegenbauer polynomial (see Gradshteyn and Ryzhik (2007), p. 991, eq. 8.932.1).
By choosing $\mathcal {W}\left (t\right) $ as the dirac delta function (4), (19) and (20) simplifies to (see Dharmawansa and McKay (2011)):
$$\begin{array}{@{}rcl@{}} &&F_{\min }\left(y\right) \\ &=&1-\frac{{etr}\left(-\mathbf{\Delta } \right) {etr}\left(-y\mathbf{\Sigma }^{-1}\right) }{\mathcal{C}\Gamma _{2}(n)\det \left(\mathbf{\Sigma }\right)^{n-2}}\sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{\left(y\mu \right)^{k}}{k!\left(n\right)_{k}}\binom{ k}{r}\left(\frac{{tr}\left(\mathbf{\Delta }\right) }{y\mu }\right) ^{r} \notag \\ &&\times \sum_{i_{1}=0}^{n-2}\sum_{i_{2}=0}^{i_{1}}\sum_{h=0}^{\min \left(i_{2},r\right) }\left(-1\right)^{h}\binom{n-2}{i_{1}}\binom{i_{1}}{i_{2}} \binom{r}{h}i_{2}!\left(i_{1}-i_{2}+2\right)_{r}\mathcal{C}\Gamma _{2}\left(i_{1}-i_{2}+2\right) \notag \\ &&\times \left(\frac{\mu }{{tr}\left(\mathbf{\Delta }\right) }\right) ^{h}\left(\det \mathbf{\Sigma }\right)^{i_{1}+\frac{h}{2}-\frac{i_{2}}{2}} \mathfrak{C}_{i_{2}-h}^{i_{1}-i_{2}+2+r}\left(\frac{1}{2}{tr}\left(\mathbf{\Sigma }^{-1}\right) \sqrt{\det \left(\mathbf{\Sigma }\right) } \right) y^{2n+i_{2}-2i_{1}-4}. \notag \end{array} $$
By choosing $\mathcal {W}\left (t\right) $ as the t distribution weight (5), (19) and (20) simplifies using Gradshteyn and Ryzhik (2007), p. 346, eq. 3.381.4:
$$\begin{array}{@{}rcl@{}} &&F_{\min }\left(y\right) \notag \\ &=&1-\frac{\left(\frac{v}{2}\right)^{\frac{v}{2}} }{\Gamma (\frac{v}{2})}\frac{1}{\mathcal{C}\Gamma_{2}(n)\det \left(\mathbf{ \Sigma }\right)^{n-2}}\sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{\left(y\mu \right)^{k}}{k!\left(n\right)_{k}}\binom{k}{r}\left(\frac{{tr} \left(\mathbf{\Delta }\right) }{yt\mu }\right) ^{r}\sum_{i_{1}=0}^{n-2}\sum_{i_{2}=0}^{i_{1}} \notag \\ &&\times \sum_{h=0}^{\min \left(i_{2},r\right) }\left(-1\right)^{h}\binom{n-2}{i_{1}}\binom{i_{1}}{i_{2}}\binom{r}{h}i_{2}!\left(i_{1}-i_{2}+2\right) _{r}\mathcal{C}\Gamma_{2}\left(i_{1}-i_{2}+2\right) \left(\frac{\mu }{ {tr}\left(\mathbf{\Delta }\right) }\right)^{h} \notag \\ &&\times \left(\det \mathbf{ \Sigma }\right)^{i_{1}+\frac{h}{2}-\frac{i_{2}}{2}}y^{2n+i_{2}-2i_{1}-4} \mathfrak{C}_{i_{2}-h}^{i_{1}-i_{2}+2+r}\left(\frac{1}{2}{tr} \left(\mathbf{\Sigma }^{-1}\right) \sqrt{\det \left(\mathbf{\Sigma } \right) }\right) \notag \\ &&\times \frac{\Gamma \left(2n+2k-r+i_{2}-2i_{1}-4+\frac{v}{2} \right) }{\left({tr}\left(\mathbf{\Delta +}y\mathbf{\Sigma }^{-1}+ \frac{v}{2}\right) \right)^{2n+2k-r+i_{2}-2i_{1}-4+\frac{v}{2}}}. \end{array} $$
By choosing $\mathcal {W}\left (t\right) $ as the slash distribution weight (6), (19) and (20) simplifies using Gradshteyn and Ryzhik (2007), p. 346, eq. 3.381.1:
$$\begin{array}{@{}rcl@{}} &&F_{\min }\left(y\right) \\ &=&1-\frac{b}{\mathcal{C}\Gamma_{2}(n)\det \left(\mathbf{\Sigma }\right) ^{n-2}}\sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{\left(y\mu \right)^{k}}{ k!\left(n\right)_{k}}\binom{k}{r}\!\left(\frac{{tr}\left(\mathbf{ \Delta }\right) }{yt\mu }\right) ^{r}\sum_{i_{1}=0}^{n-2}\sum_{i_{2}=0}^{i_{1}}\sum_{h=0}^{\min \left(i_{2},r\right) }\left(-1\right)^{h}\binom{n-2}{i_{1}} \notag \\ &&\ \binom{i_{1}}{i_{2}}\binom{r}{h}i_{2}!\left(i_{1}-i_{2}+2\right) _{r}\mathcal{C}\Gamma_{2}\left(i_{1}-i_{2}+2\right) \left(\frac{\mu }{ {tr}\left(\mathbf{\Delta }\right) }\right)^{h}\left(\det \mathbf{ \Sigma }\right)^{i_{1}+\frac{h}{2}-\frac{i_{2}}{2}}y^{2n+i_{2}-2i_{1}-4} \notag \\ && \times \mathfrak{C}_{i_{2}-h}^{i_{1}-i_{2}+2+r}\left(\frac{1}{2}{tr} \left(\mathbf{\Sigma }^{-1}\right) \sqrt{\det \left(\mathbf{\Sigma } \right) }\right) \frac{\gamma \left(2n+i_{2}-2i_{1}-4-r+2k+b,{tr} \left(\mathbf{\Delta +}y\mathbf{\Sigma }^{-1}\right) \right) }{\left({ tr}\left(\mathbf{\Delta +}y\mathbf{\Sigma }^{-1}\right) \right) ^{2n+i_{2}-2i_{1}-4-r+2k+b}}. \notag \end{array} $$
Numerical experiments
In this section, simulation and analytical results are presented to illustrate the contribution of the derived results. For the cdfs (16) and (19), the covariance matrix Σ is assumed to be given by:
$$ \left\{ \mathbf{\Sigma }\right\}_{i,j}=\exp \left(-\frac{\pi^{3}}{32} \left(i-j\right)^{2}\right) $$
where 1≤i,j≤p. The mean matrix M is constructed as:
$$ \mathbf{M=a}^{H}\mathbf{b} $$
where $\mathbf {a}\in \mathbb {C}_{1}^{1\times n}$ and $\mathbf {b}\in \mathbb {C }_{1}^{1\times p}$ is given by:
$$\begin{array}{@{}rcl@{}} \left\{ a\right\}_{i} &=&\exp \left(2\left(i-1\right) l\pi \cos \left(\theta \right) \right) \\ \left\{ b\right\}_{j} &=&\exp \left(2\left(j-1\right) l\pi \cos \left(\theta \right) \right) \end{array} $$
where $l=\sqrt {-1}$, $\theta =\frac {\pi }{4}$, and i=1,...,n and j=1,...,p. These specific constructions of the covariance and mean matrices are meaningful when modeling practical MIMO channels with a nonzero mean (see Dharmawansa and McKay (2011); McKay and Collings (2005)). Table 1 compares the analytical values of the cdf of λmin(S) where $ \mathbf {X}\in \mathbb {C}_{1}^{2\times 2}$ for the underlying t distribution (see (17)), the underlying slash distribution (see (18)) and the underlying normal distribution (see (21)) with corresponding simulated values (computed in Matlab R2013a). The tail behaviour of the simulated values relates to those of its analytical counterparts.
Table 1 Analytical ((16), (17), and (18)) and simulated values of cdf of λmin(S)
The following figures illustrate the cdfs (16) and (19) for n=2 and n=3 respectively, for the different weight functions under consideration in this paper. In Figs. 1 and 2, it is observed that (17) and (18) tends to the normal case as the value of v and b respectively increases - as does (22) and (23).
Cdf ((16), (17), and (18)) for different values of v=3,10 and b=3,10 when n=2 (left), zoomed in subset on right
These figures (Figs. 1 and 2) illustrates the value which the underlying complex matrix variate elliptical assumption provides the practitioner having the engineering expertise with. The proposed elliptical platform in this paper allows theoretical, and resultant practical access to previously unconsidered models; providing flexibility for modeling that may yield improved fits to experimental data in practice (see Yacoub (2007) for example).
In this paper, exact results were presented for a variety of characteristics pertaining to a complex noncentral Wishart type distribution. In particular, the pdf of a complex noncentral Wishart type matrix S=XHX, where $\mathbf {X}\in \mathbb {C} _{1}^{n\times p}\sim \mathcal {C}E_{n\times p}(\mathbf {M},\mathbf {I\otimes \Sigma },g)$ and the pdf of its associated ordered eigenvalues have been derived. Some special cases were investigated, of which the pdf of the eigenvalues when Σ=σ2I (which is of practical importance in communications systems) and the noncentral matrix has arbitrary rank L<p. Subsequently, the exact cdf of the minimum eigenvalue of S was derived for the case when $\mathbf {X}\in \mathbb {C}_{1}^{n\times n}$, as well as when $\mathbf {X}\in \mathbb {C} _{1}^{n\times 2}$. These cdfs were derived under the assumption that the noncentral matrix has rank one, which is a practical assumption. This theoretical investigation has proposed impact in big data and communication systems to allow the practitioner a flexible choice of underlying model for X, and thus S; thereby alleviating the restricted assumption of normality.
Matrix spaces; seeRatnarajah (2003): The set of all n×p(n≥p) matrices, E, with orthonormal columns is called the Stiefel manifold, denoted by $\mathcal {C}V_{p,n}$. Thus $\mathcal {C}V_{p,n}=\left \{ \mathbf {E}\left (n\times p\right) ;\mathbf {E }^{H}\mathbf {E}=\mathbf {I}_{p}\right \}.$ The volume of this manifold is given by
$$Vol\left(\mathcal{C}V_{p,n}\right) =\int\limits_{\mathcal{C}V_{p,n}}\left(\mathbf{E}^{H}d\mathbf{E}\right) =\frac{2^{p}\pi^{np}}{\mathcal{C}\Gamma _{p}(n)}. $$
If n=p then a special case of the Stiefel manifold is obtained, the so-called unitary manifold, defined as $\mathcal {C}V_{p,p}=\left \{ \mathbf {E} \left (p\times p\right) ;\mathbf {E}^{H}\mathbf {E}=\mathbf {I}_{p}\right \} \equiv U\left (p\right) $ where U(p) denotes the group of unitary p×p matrices. The volume of U(p) is given by $ Vol\left (U\left (p\right) \right) =\int \limits _{U\left (p\right) }\left (\mathbf {E}^{H}d\mathbf {E}\right) =\frac {2^{p}\pi ^{p^{2}}}{\mathcal {C}\Gamma _{p}(p)}.$
Complex noncentral Wishart type section proofs
Proof of Theorem 1
From (3), the pdf of X|t follows as
$$\begin{array}{*{20}l} f\left(\mathbf{X}|t\right) =\pi^{-np}\det \left(t^{-1}\mathbf{\Sigma } \right)^{-n}{etr}\left(-\left(t\mathbf{\Sigma }^{-1}\right) \mathbf{X }^{H}\mathbf{X}\right) {etr}\left(-\left(t\mathbf{\Sigma } ^{-1}\right) \mathbf{M}^{H}\mathbf{M}\right) {etr}\left(2\left(t \mathbf{\Sigma }^{-1}\right) \mathbf{M}^{H}\mathbf{X}\right) \end{array} $$
Let X=ET, where $\mathbf {E}:n\times p\in \mathcal {C}V_{p,n}$ such that EHE=Ip and T is an upper triangular matrix with real and positive diagonal elements. Then S=XHX=THT (the Cholesky decomposition of S). FromRatnarajah (2003) it thus follows that
$$\begin{array}{@{}rcl@{}} f(\mathbf{S,E|}t)=2^{-p}\pi^{-np}\det \left(t^{-1}\mathbf{\Sigma }\right) ^{-n}{etr}\left(-\left(t\mathbf{\Sigma }^{-1}\right) \mathbf{S} \right) \det \left(\mathbf{S}\right)^{n-p}\\ \times {etr}\left(-\left(t \mathbf{\Sigma }^{-1}\right) \mathbf{M}^{H}\mathbf{M}\right) {etr} \left(2\left(t\mathbf{\Sigma }^{-1}\right) \mathbf{M}^{H}\mathbf{ET} \right). \end{array} $$
Subsequently,
$$\begin{array}{@{}rcl@{}} f\left(\mathbf{S|}t\right) &=&\int\limits_{\mathcal{C}V_{p,n}}f(\mathbf{S,E| }t)\left(\mathbf{E}^{H}d\mathbf{E}\right) \\ &=&2^{-p}\pi^{-np}\det \left(t^{-1}\mathbf{\Sigma }\right)^{-n}{etr} \left(-\left(t\mathbf{\Sigma }^{-1}\right) \mathbf{S}\right) \det \left(\mathbf{S}\right)^{n-p}{etr}\left(-t\mathbf{\Delta }\right) \\ &&\times \int\limits_{\mathcal{C}V_{p,n}}{etr}\left(2\left(t\mathbf{ \Sigma }^{-1}\right) \mathbf{M}^{H}\mathbf{ET}\right) \left(\mathbf{E}^{H}d \mathbf{E}\right). \end{array} $$
Using eq. 3.37 fromRatnarajah (2003), see that
$$\int\limits_{\mathcal{C}V_{p,n}}{etr}\left(2\left(t\mathbf{\Sigma } ^{-1}\right) \mathbf{M}^{H}\mathbf{ET}\right) \left(\mathbf{E}^{H}d\mathbf{E }\right) =\frac{2^{p}\pi^{np}}{\mathcal{C}\Gamma_{p}(n)}\text{ } _{0}F_{1}\left(n;t^{2}\mathbf{\Delta \Sigma }^{-1}\mathbf{S}\right). $$
$$f\left(\mathbf{S|}t\right) =\frac{\det \left(\mathbf{S}\right)^{n-p}}{ \mathcal{C}\Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}}t^{np} {etr}\left(-t\left(\mathbf{\Sigma }^{-1}\mathbf{S+\Delta }\right) \right) \text{ }_{0}F_{1}\left(n;t^{2}\mathbf{\Delta \Sigma }^{-1}\mathbf{S} \right) $$
and finally, from (2):
$$f\left(\mathbf{S}\right) =\int\limits_{\mathbb{R}^{+}}f\left(\mathbf{S|} t\right) \mathcal{W}\left(t\right) dt $$
which leaves the final result.
Using eq. 93 ofJames (1964) and (7), the joint pdf of the eigenvalues λ1>λ2>...>λp>0 of S is given by
$$\begin{array}{@{}rcl@{}} f(\mathbf{\Lambda }) &=&\frac{\pi^{p\left(p-1\right) }\left(\prod\limits_{k< l}^{p}\left(\lambda_{k}-\lambda_{l}\right)^{2}\right) }{ \mathcal{C}\Gamma_{p}(p)}\int\limits_{\mathbf{E}\in U\left(p\right) }f\left(\mathbf{E\Lambda E}^{H}\right) d\mathbf{E} \\ &=&\frac{\pi^{p\left(p-1\right) }\left(\prod\limits_{k< l}^{p}\left(\lambda_{k}-\lambda_{l}\right)^{2}\right) \det \left(\mathbf{\Lambda } \right)^{n-p}}{\mathcal{C}\Gamma_{p}(p)\mathcal{C}\Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}}\int\limits_{\mathbb{R}^{+}}t^{np}{ etr}\left(-t\mathbf{\Delta }\right) \\ &&\times \int\limits_{\mathbf{E}\in U\left(p\right) }{etr}\left(-t \mathbf{\Sigma }^{-1}\mathbf{E\Lambda E}^{H}\right) \text{ }_{0}F_{1}\left(n;t^{2}\mathbf{\Delta \Sigma }^{-1}\mathbf{E\Lambda E}^{H}\right) d\mathbf{E} \mathcal{W}\left(t\right) dt \end{array} $$
which completes the proof.
Proof of Corollary 1
Substituting Σ=σ2Ip into (9) and usingJames (1964), p. 480, eq. 30, observe that
$$\begin{array}{@{}rcl@{}} f(\mathbf{\Lambda }) &=&\frac{\pi^{p\left(p-1\right) }\left(\prod\limits_{k< l}^{p}\left(\lambda_{k}-\lambda_{l}\right)^{2}\right) \det \left(\mathbf{\Lambda }\right)^{n-p}}{\mathcal{C}\Gamma_{p}(p) \mathcal{C}\Gamma_{p}(n)\sigma^{2np}}\int\limits_{\mathbb{R}^{+}}t^{np} {etr}\left(-t\mathbf{\Delta }\right) {etr}\left(-t\sigma^{-2} \mathbf{\Lambda }\right) \notag \\ &&\times \int\limits_{\mathbf{E}\in U\left(p\right) }\text{ }_{0}F_{1}\left(n;t^{2}\sigma^{-2}\mathbf{\Delta E\Lambda E}^{H}\right) d \mathbf{E}\mathcal{W}\left(t\right) dt \notag \\ &=&\frac{\pi^{p\left(p-1\right) }\left(\prod\limits_{k< l}^{p}\left(\lambda_{k}-\lambda_{l}\right)^{2}\right) \det \left(\mathbf{\Lambda } \right)^{n-p}}{\mathcal{C}\Gamma_{p}(p)\mathcal{C}\Gamma_{p}(n)\sigma ^{2np}}\int\limits_{\mathbb{R}^{+}}t^{np}{etr}\left(-t\mathbf{\Delta } \right) {etr}\left(-t\sigma^{-2}\mathbf{\Lambda }\right) \notag \\ &&\times \text{ } _{0}F_{1}\left(n;t^{2}\sigma^{-2}\mathbf{\Delta,\Lambda }\right) \mathcal{ W}\left(t\right) dt. \end{array} $$
UsingGross and Richards (1989), eq. 4.8, see that
$$ _{0}F_{1}\left(n;\mathbf{\Delta,}t^{2}\sigma^{-2}\mathbf{\Lambda }\right) =\frac{\det \left(\text{ }_{0}F_{1}\left(n-p+1;t^{2}\sigma^{-2}\mu _{j}\lambda_{i}\right) \right) }{t^{p\left(p-1\right) }\sigma^{-p\left(p-1\right) }\prod\limits_{k< l}^{p}\left(\lambda_{k}-\lambda_{l}\right) \prod\limits_{k< l}^{p}\left(\mu_{k}-\mu_{l}\right) }\frac{\mathcal{C} \Gamma_{p}(p)\mathcal{C}\Gamma_{p}(n)}{\left(\left(n-p\right) !\right) ^{p}}. $$
Substituting (25) into (24) simplifies to (10).
Consider from (10)
$$\begin{array}{@{}rcl@{}} &&f(\mathbf{\Lambda }) \\ &=&\int\limits_{\mathbb{R}^{+}}\frac{\pi^{p\left(p-1\right) }\det \left(\mathbf{\Lambda }\right)^{n-p}}{\left(\left(n-p\right) !\right) ^{p}\sigma^{2np-p^{2}+1}}\left(\prod\limits_{k< l}^{p}\left(\lambda _{k}-\lambda_{l}\right) \right) t^{np-p^{2}+1}{etr}\left(-t\left(\mathbf{\Delta +}\sigma^{-2}\mathbf{\Lambda }\right) \right) \\ &&\times \frac{\det \left(\text{ }_{0}F_{1}\left(n-p+1;t^{2}\sigma^{-2}\mu_{j}\lambda _{i}\right) \right) }{\left(\prod \limits_{k< l}^{p}\left(\mu_{k}-\mu _{l}\right) \right) }\mathcal{W}\left(t\right) dt. \end{array} $$
In particular, consider
$$\begin{array}{*{20}l} \mathcal{J}={\lim}_{\mu_{L+1},...,\mu_{p}\rightarrow 0}\frac{\det \left(f_{i}\left(\mu_{j}\right)_{i,j=1,...,p}\right) }{\prod\limits_{k< l}^{p} \left(\mu_{k}-\mu_{l}\right) } \end{array} $$
where fi(μj)=0F1(n−p+1;t2σ−2μiλj). Applying Lemma 5, p. 340 ofChiani et al. (2010):
$$\mathcal{J}=\frac{\det \left[ \begin{array}{cccccc} f_{1}\left(\mu_{1}\right) & \cdots & f_{1}\left(\mu_{L}\right) & f_{1}^{\left(p-L-1\right) }\left(0\right) & \cdots & f_{1}^{\left(0\right) }\left(0\right) \\ \vdots & & & & & \vdots \\ f_{p}\left(\mu_{1}\right) & \cdots & f_{p}\left(\mu_{L}\right) & f_{p}^{\left(p-L-1\right) }\left(0\right) & \cdots & f_{p}^{\left(0\right) }\left(0\right) \end{array} \right] }{\mathcal{C}\Gamma_{p-L}(p-L)\left(\prod\limits_{k< l}^{L}\left(\mu_{k}-\mu_{l}\right) \right) \left(\prod\limits_{i=1}^{L}\mu _{i}^{p-L}\right) } $$
$$f_{i}^{\left(k\right) }\left(0\right) =\frac{\left(t^{2}\sigma ^{-2}\lambda_{i}\right)^{k}\left(n-p\right) !}{\left(n-p+k\right) !}. $$
This leaves
$$\begin{array}{@{}rcl@{}} &&\int\limits_{\mathbb{R}^{+}}\frac{\pi^{p\left(p-1\right) }\det \left(\mathbf{\Lambda }\right)^{n-p}}{\left(\left(n-p\right) !\right) ^{p}\sigma^{2np-p^{2}+1}}\left(\prod\limits_{k< l}^{p}\left(\lambda _{k}-\lambda_{l}\right) \right) t^{np-p^{2}+1}{etr}\left(-t\left(\mathbf{\Delta +}\sigma^{-2}\mathbf{\Lambda }\right) \right) \mathcal{JW} \left(t\right) dt \\ &=&\frac{\pi^{p\left(p-1\right) }\left(\prod\limits_{k< l}^{p}\left(\lambda_{k}-\lambda_{l}\right) \right) \det \left(\mathbf{\Lambda } \right)^{n-p}}{\left(\left(n-p\right) !\right)^{p}\left(\prod\limits_{k< l}^{L}\left(\mu_{k}-\mu_{l}\right) \right) \left(\prod\limits_{i=1}^{L}\mu_{i}^{p-L}\right) \mathcal{C}\Gamma _{p-L}(p-L)\sigma^{2np-p^{2}+1}} \\ &&\times \int\limits_{\mathbb{R}^{+}}t^{np-p^{2}+1} {etr}\left(-t\left(\mathbf{\Delta +}\sigma^{-2}\mathbf{\Lambda } \right) \right) \det \left(\mathbf{T}\right) \mathcal{W}\left(t\right) dt \end{array} $$
where T is a p×p matrix as given in (22).
Minimum eigenvalue cdf proofs
Consider from (7):
$$\begin{array}{@{}rcl@{}} P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right) &=&\int\limits_{ \mathbb{R}^{+}}t^{np}\frac{{etr}\left(-t\mathbf{\Delta }\right) }{ \mathcal{C}\Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}} \notag \\ &&\times \int\limits_{\mathbf{S-}y\mathbf{I}_{p}}\det \left(\mathbf{S}\right)^{n-p} {etr}\left(-t\mathbf{\Sigma }^{-1}\mathbf{S}\right) \text{ } _{0}F_{1}\left(n;t^{2}\mathbf{\Delta \Sigma }^{-1}\mathbf{S}\right) d \mathbf{S}\mathcal{W}\left(t\right) dt \end{array} $$
where $\mathbf {S-}y\mathbf {I}_{p}\mathbf {\in }\mathbb {C}_{2}^{p\times p}$. Consider now the transformation S=y(Ip+Y) with Jacobian $d\mathbf {S}=y^{p^{2}}d\mathbf {Y}$ (seeDharmawansa and McKay (2011)). It follows that
$$\begin{array}{@{}rcl@{}} P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right) &=&\int\limits_{ \mathbb{R}^{+}}t^{np}y^{np}\frac{{etr}\left(-t\mathbf{\Delta }\right) {etr}\left(-ty\mathbf{\Sigma }^{-1}\right) }{\mathcal{C}\Gamma _{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}} \\ &&\times\! \int\limits_{\mathbf{Y}}\det \left(\mathbf{I}_{p}\mathbf{+Y} \right)^{n-p}{etr}\left(-ty\mathbf{\Sigma }^{-1}\mathbf{Y}\right) \text{ }_{0}F_{1}\left(n;yt^{2}\mathbf{\Delta \Sigma }^{-1}\left(\mathbf{I} _{p}\mathbf{+Y}\right) \right) d\mathbf{Y}\mathcal{W}\left(t\right) dt. \end{array} $$
By applying the definition of the complex hypergeometric function and the assumption of rank one for the noncentral matrix parameter (see (13)) the following is obtained:
$$\begin{array}{@{}rcl@{}} P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right) &=&\int\limits_{ \mathbb{R}^{+}}t^{np}y^{np}\frac{{etr}\left(-t\mathbf{\Delta }\right) {etr}\left(-ty\mathbf{\Sigma }^{-1}\right) }{\mathcal{C}\Gamma _{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}}\sum_{k=0}^{\infty }\sum_{\kappa }\frac{1}{k!\left[ n\right]_{\kappa }} \\ &&\times \int\limits_{\mathbf{Y}}\det \left(\mathbf{I}_{p}\mathbf{+Y} \right)^{n-p}{etr}\left(-ty\mathbf{\Sigma }^{-1}\mathbf{Y}\right) C_{\kappa }\left(yt^{2}\mu \mathbf{\gamma }^{H}\left(\mathbf{I}_{p}\mathbf{ +Y}\right) \mathbf{\gamma }\right) d\mathbf{Y}\mathcal{W}\left(t\right) dt \end{array} $$
where $\mathbf {Y\in }\mathbb {C}_{2}^{p\times p}$. Since having only one eigenvalue results in the partition κ to reduce to a single partition, per definition of zonal polynomials it follows that [n]κ=(n)k and Cκ(A)=tr(A)k, and
$$C_{\kappa }\left(yt^{2}\mu \mathbf{\gamma }^{H}\left(\mathbf{I}_{p}\mathbf{ +Y}\right) \mathbf{\gamma }\right) =\left(yt^{2}\mu \right)^{k}\sum_{r=0}^{k}\binom{k}{r}{tr}^{r}\left(\mathbf{\gamma \gamma }^{H} \mathbf{Y}\right). $$
$$\begin{array}{@{}rcl@{}} P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right) &=&\int\limits_{ \mathbb{R}^{+}}t^{np}y^{np}\frac{{etr}\left(-t\mathbf{\Delta }\right) {etr}\left(-ty\mathbf{\Sigma }^{-1}\right) }{\mathcal{C}\Gamma _{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}}\sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{1}{k!\left(n\right)_{k}} \\ &&\times \int\limits_{\mathbf{Y}}\det \left(\mathbf{I}_{p}\mathbf{+Y} \right)^{n-p}{etr}\left(-ty\mathbf{\Sigma }^{-1}\mathbf{Y}\right) \left(yt^{2}\mu \right)^{k}\binom{k}{r}{tr}^{r}\left(\mathbf{\gamma \gamma }^{H}\mathbf{Y}\right) d\mathbf{Y}\mathcal{W}\left(t\right) dt \end{array} $$
where $\mathbf {Y\in }\mathbb {C}_{2}^{p\times p}$, and leaves the final result.
Letting n=p, see from (14) and (15) that
$$P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right) =\int\limits_{ \mathbb{R}^{+}}y^{n^{2}}\frac{{etr}\left(-t\mathbf{\Delta }\right) {etr}\left(-ty\mathbf{\Sigma }^{-1}\right) }{\mathcal{C}\Gamma _{n}(n)\det \left(\mathbf{\Sigma }\right)^{n}}\sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{t^{n^{2}+2k}\left(y\mu \right)^{k}}{k!\left(n\right) _{k}}\binom{k}{r}\mathcal{Q}_{n,n,t}^{r}\left(y\right) \mathcal{W}\left(t\right) dt $$
where $\mathcal {Q}_{n,n,t}^{r}\left (y\right) $ is as defined in (15). FollowingMathai (1997), p. 365, eq. 6.1.20:
$$\begin{array}{@{}rcl@{}} \mathcal{Q}_{n,n,t}^{r}\left(y\right) &=&\int\limits_{\mathbf{Y}}{etr} \left(-ty\mathbf{\Sigma }^{-1}\mathbf{Y}\right) C_{r}\left(\mathbf{\gamma \gamma }^{H}\mathbf{Y}\right) d\mathbf{Y} \\ &=&\frac{\mathcal{C}\Gamma_{n}\left(n,r\right) \left(\det \mathbf{\Sigma } \right)^{n}}{t^{n^{2}}y^{n^{2}}}\left(\frac{1}{\mu ty}\right) ^{r}C_{r}\left(\mathbf{\Delta }\right) \\ &=&\frac{\mathcal{C}\Gamma_{n}\left(n\right) \left(n\right)_{r}\left(\det \mathbf{\Sigma }\right)^{n}}{t^{n^{2}}y^{n^{2}}}\left(\frac{1}{\mu ty} \right)^{r}\left({tr}^{r}\mathbf{\Delta }\right) \end{array} $$
where $\mathbf {Y\in }\mathbb {C}_{2}^{p\times p}$, and $\mathcal {C}\Gamma _{n}\left (n,r\right) $ denotes the complex multivariate gamma function relating to r (seeMathai (1997)). Subsequently
$$ P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right) =\int\limits_{ \mathbb{R}^{+}}{etr}\left(-t\mathbf{\Delta }\right) {etr}\left(-ty\mathbf{\Sigma }^{-1}\right) \sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{ \left(yt^{2}\mu \right)^{k}}{k!\left(n\right)_{k}}\binom{k}{r}\left(n\right)_{r}\left(\frac{1}{\mu ty}\right)^{r}\left({tr}^{r}\mathbf{ \Delta }\right) \mathcal{W}\left(t\right) dt. $$
Consider the summation component in (26). This component can be rewritten as follows:
$$ \sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{\left(yt^{2}\mu \right)^{k}}{ k!\left(n\right)_{k}}\binom{k}{r}\left(n\right)_{r}\left(\frac{1}{\mu ty }\right)^{r}\left({tr}^{r}\mathbf{\Delta }\right) =\sum_{j=0}^{\infty }\frac{\left(yt^{2}\mu \right)^{j}}{j!\left(n\right)_{j}}\text{ } _{1}F_{1}\left(n;n+j,t{tr}\mathbf{\Delta }\right) $$
Substituting (27) into (26) leaves the final result.
Substituting p=2, see from (14) and (15) that
$$ P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right) =\int\limits_{ \mathbb{R}^{+}}y^{2n}\frac{{etr}\left(-t\mathbf{\Delta }\right) { etr}\left(-ty\mathbf{\Sigma }^{-1}\right) }{\mathcal{C}\Gamma_{2}(n)\det \left(\mathbf{\Sigma }\right)^{n}}\sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{ t^{2n+2k}\left(y\mu \right)^{k}}{k!\left(n\right)_{k}}\binom{k}{r} \mathcal{Q}_{n,2,t}^{r}\left(y\right) \mathcal{W}\left(t\right) dt $$
where from (15) andDharmawansa and McKay (2011), eq. 41:
$$\begin{array}{@{}rcl@{}} \mathcal{Q}_{n,2,t}^{r}\left(y\right) &=&\int\limits_{\mathbf{Y}}\det \left(\mathbf{I}_{2}\mathbf{+Y}\right)^{n-2}{etr}\left(-ty\mathbf{ \Sigma }^{-1}\mathbf{Y}\right) {tr}^{r}\left(\mathbf{\gamma \gamma } ^{H}\mathbf{Y}\right) d\mathbf{Y} \\ &=&\sum_{i_{1}=0}^{n-2}\sum_{i_{2}=0}^{i_{1}}\binom{n-2}{i_{1}}\binom{i_{1}}{ i_{2}}\int\limits_{\mathbf{Y}}{tr}^{i_{2}}\left(\mathbf{Y}\right) \det \left(\mathbf{Y}\right)^{i_{1}-i_{2}}{etr}\left(-ty\mathbf{\Sigma } ^{-1}\mathbf{Y}\right) {tr}^{r}\left(\mathbf{\gamma \gamma }^{H} \mathbf{Y}\right) d\mathbf{Y.} \end{array} $$
where $\mathbf {Y\in }\mathbb {C}_{2}^{2\times 2}$. By usingDharmawansa and McKay (2011), eq. 17 and setting p=i2,a=i1−i2+2,t=r,A=tyΣ−1 and R=γγH, see that
$$\begin{array}{@{}rcl@{}} &&\int\limits_{\mathbf{Y}}{tr}^{i_{2}}\left(\mathbf{Y}\right) \det \left(\mathbf{Y}\right)^{i_{1}-i_{2}}{etr}\left(-ty\mathbf{\Sigma } ^{-1}\mathbf{Y}\right) {tr}^{r}\left(\mathbf{\gamma \gamma }^{H} \mathbf{Y}\right) d\mathbf{Y} \\ &=&\frac{i_{2}!\left(i_{1}-i_{2}+2\right)_{r}\mathcal{C}\Gamma_{2}\left(i_{1}-i_{2}+2\right) }{\left(\det \left(ty\mathbf{\Sigma }^{-1}\right) \right)^{i_{1}-i_{2}+2+\frac{i_{2}}{2}}} \\ &&\times \sum_{h=0}^{\min \left(i_{2},r\right) }\frac{\left(-1\right)^{h} \binom{r}{h}}{\left(\det \left(ty\mathbf{\Sigma }^{-1}\right) \right)^{ \frac{h}{2}}}{tr}^{r-h}\left(\mathbf{\gamma \gamma }^{H}ty\mathbf{ \Sigma }^{-1}\right) {tr}^{h}\left(\mathbf{\gamma \gamma }^{H}\right) \mathfrak{C}_{i_{2}-h}^{i_{1}-i_{2}+2+r}\left(\frac{{tr}\left(ty \mathbf{\Sigma }^{-1}\right) }{2\sqrt{\det \left(ty\mathbf{\Sigma } ^{-1}\right) }}\right). \end{array} $$
Noting that γHγ=1 (see (13)), it follows that
$$\begin{array}{@{}rcl@{}} &&\mathcal{Q}_{n,2,t}^{r}\left(y\right) \notag \\ &=&\sum_{i_{1}=0}^{n-2}\sum_{i_{2}=0}^{i_{1}}\sum_{h=0}^{\min \left(i_{2},r\right) }\left(-1\right)^{h}\binom{n-2}{i_{1}}\binom{i_{1}}{i_{2}} \binom{r}{h}i_{2}!\left(i_{1}-i_{2}+2\right)_{r}\mathcal{C}\Gamma_{2}\left(i_{1}-i_{2}+2\right) \left(\det \mathbf{\Sigma }\right)^{i_{1}- \frac{i_{2}}{2}+\frac{h}{2}+2} \notag \\ &&\times \left(\frac{{tr}\left(\mathbf{\Delta }\right) }{\mu }\right)^{r-h}\mathfrak{C}_{i_{2}-h}^{i_{1}-i_{2}+2+r}\left(\frac{1}{2}{tr} \left(\mathbf{\Sigma }^{-1}\right) \sqrt{\det \left(\mathbf{\Sigma } \right) }\right) t^{i_{2}-2i_{1}-4-r}y^{i_{2}-2i_{1}-4-r}. \end{array} $$
Substituting (29) into (28), the following is obtained:
$$\begin{array}{@{}rcl@{}} &&P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right) \\ &=&\int\limits_{ \mathbb{R}^{+}}\frac{{etr}\left(-t\mathbf{\Delta }\right) {etr} \left(-ty\mathbf{\Sigma }^{-1}\right) }{\mathcal{C}\Gamma_{2}(n)\det \left(\mathbf{\Sigma }\right)^{n-2}}\sum_{k=0}^{\infty }\sum_{r=0}^{k} \frac{\left(yt^{2}\mu \right)^{k}}{k!\left(n\right)_{k}}\binom{k}{r} \left(\frac{{tr}\left(\mathbf{\Delta }\right) }{yt\mu }\right)^{r} \\ &&\times \sum_{i_{1}=0}^{n-2}\sum_{i_{2}=0}^{i_{1}}\sum_{h=0}^{\min \left(i_{2},r\right) }\left(-1\right)^{h}\binom{n-2}{i_{1}}\binom{i_{1}}{i_{2}} \binom{r}{h}i_{2}!\left(i_{1}-i_{2}+2\right)_{r}\mathcal{C}\Gamma _{2}\left(i_{1}-i_{2}+2\right) \\ &&\times \left(\frac{\mu }{{tr}\left(\mathbf{\Delta }\right) }\right) ^{h}\left(\det \mathbf{\Sigma }\right)^{i_{1}+\frac{h}{2}-\frac{i_{2}}{2}} \mathfrak{C}_{i_{2}-h}^{i_{1}-i_{2}+2+r}\left(\frac{1}{2}{tr}\left(\mathbf{\Sigma }^{-1}\right) \sqrt{\det \left(\mathbf{\Sigma }\right) } \right) \left(ty\right)^{2n+i_{2}-2i_{1}-4}\mathcal{W}\left(t\right) dt \end{array} $$
E(·) denotes the expectation operator.
XH denotes the conjugate transpose of X.
$\mathbb {C}_{1}^{n\times p}$ denotes the space of n×p complex matrices, and $\mathbb {C}_{2}^{p\times p}$ denotes the space of Hermitian positive definite matrices of dimension p.
etr(·)=etr(·) where tr(X) denotes the trace of matrix X, and X−1 denotes the inverse of matrix X.
$\mathbb {R}^{+}$ denotes the positive real line.
$\mathcal {C}\Gamma _{p}(a)=\pi ^{\frac {1}{2} p(p-1)}\prod \limits _{i=1}^{p}\Gamma \left (a-(i-1)\right) $ (see (James 1964)).
Cκ(Z) denotes the complex zonal polynomial of Z corresponding to the partition κ=(k1,…,kp),k1≥⋯≥kp≥0, k1+⋯+kp=k and $\sum _{\kappa }$ denotes summation over all partitions κ. [n]k denotes the generalized Pochammer coefficient relating to partition κ.
cdf:
Cumulative distribution function
ISCW:
Integral series of complex Wishart
MIMO:
Multiple-input-multiple-output
Probability distribution function
RMT:
Random matrix theory
SNR:
Signal-to-noise ratio
Bateman, H., Erdélyi, A.: Higher Transcendental Functions, Vol. I. McGraw–Hill, New York (1953).
Bekker, A., Arashi, M., Ferreira, J. T.: New bivariate gamma types with MIMO application. Commun. Stat. Theory Methods (2018). https://doi.org/10.1080/03610926.2017.1417428.
Chiani, M., Win, M. Z., Shin, H.: MIMO Networks: the effect of interference. IEEE Trans. Inf. Theory. 56(1), 336–350 (2010).
Choi, S. H., Smith, P., Allen, B., Malik, W. Q., Shafi, M.: Severely fading MIMO channels: Models and mutual information. In: IEEE International Conference on Communications, pp. 4628–4633. IEEE (2007).
Chu, K. C.: Estimation and decision for linear systems with elliptically random process. IEEE Trans. Autom. Control.18, 499–505 (1973).
Constantine, A. G.: Some noncentral distribution problems in multivariate analysis. Ann. Math. Stat.34, 1270–1285 (1963).
de Souza, R. A. A., Yacoub, M. D.: Bivariate Nakagami-m distribution with arbitrary correlation and fading parameters. IEEE Trans. Wirel. Commun.7(12), 5227–5232 (2008).
Dharmawansa, P., McKay, M. R.: Extreme eigenvalue distributions of some complex noncentral Wishart and gamma-Wishart random matrices. J. Multivar. Anal.102, 847–868 (2011).
Ferreira, J. T., Bekker, A., Arashi, M.: Advances in Wishart-type modelling of channel capacity. REVStat (2020).
Gradshteyn, I. S., Ryzhik, I. M.: Table of Integral, Series, and Products, 7th Ed. Academic Press, New York (2007).
Gross, K. I., Richards, D. S. P.: Total positivity, spherical series, and hypergeometric functions of matrix argument. J. Approximation Theory. 59, 224–246 (1989).
Gupta, A. K., Varga, T.: Normal mixture representations of matrix variate elliptically contoured distributions. Sankhya. 57, 68–78 (1995).
Hansen, J., Bolcskei, H.: A geometrical investigation of the rank-1 Ricean MIMO channel at high SNR. In: IEEE Proceedings International Symposium on Information Theory. IEEE (2004).
Heath, R. W., Love, D. J.: Multimode antenna selection for spatial multiplexing systems with linear receivers. IEEE Trans. Signal Process.53(8), 3042–3056 (2005).
He, X., Chu, L., Qui, R. C., Ai, Q., Ling, Z.: A Novel Data-Driven Situation Awareness Approach for Future Grids Using Large Random Matrices for Big Data Modeling. IEEE Access. 6, 13855–13865 (2018).
He, Y., Yu, F. R., Zhao, N., Yin, H., Yao, H., Qui, R. C.: Big data analytics in mobile cellular networks. IEEE Access. 4, 1985–1996 (2016).
James, A. T.: Distributions of matrix variate and latent roots derived from normal samples. Ann. Math. Stat.35, 475–501 (1964).
Jayaweera, S. K., Poor, H. V.: MIMO Capacity results for Rician fading channels. In: IEEE Proceedings of the Global Telecommunications Conference. IEEE (2003).
Kang, M., Alouini, M.: Capacity of MIMO Rician Channels. IEEE Trans. Wirel. Commun.5(1), 112–123 (2006).
Lachos, V. H., Labra, F. V.: Multivariate skew-normal/independent distributions: properties and inference. Pro Math.28(56), 11–53 (2014).
Mathai, A. M.: Jacobians of Matrix Transformations and Functions of Matrix Argument. World Science Publishing Co., Singapore (1997).
McKay, M., Collings, I.: General capacity bounds for spatially correlated MIMO Rician channels. IEEE Trans. Inf. Theory. 51(9), 625–672 (2005).
McKay, M.: Random Matrix Theory Analysis of multiple antenna communication systems [Unpublished PhD thesis]. University of Sydney (2006).
Ollila, E., Eriksson, J., Koivunen, V.: Complex elliptically symmetric random variables - generation, characterization, and circularity tests. IEEE Trans. Signal Process.59(1), 58–69 (2011).
Provost, S. B., Cheong, Y. H.: The distribution of Hermitian quadratic forms in elliptically contoured random vectors. J. Stat. Plan. Infer.102, 303–316 (2002).
Qiu, R. C.: Large random matrices big data analytics, in Big Data of Complex Networks. Chapman &Hall/CRC Big Data Series, Boca Raton Fl. (2017).
Ratnarajah, T.: Topics in complex random matrices and information theory [Unpublished PhD thesis]. University of Ottawa (2003).
Ratnarajah, T., Vaillancourt, R.: Quadratic forms on complex random matrices and multiple-antenna systems. IEEE Trans. Inf. Theory. 51(8), 2976–2984 (2005).
Taricco, G., Riegler, E.: On the ergodic capacity of correlated Rician fading MIMO channels with interference. IEEE Trans. Inf. Theory. 57(7), 4123–4138 (2011).
Yacoub, M. D.: The κ- μ distribution and the η- μ distribution. IEEE Antennas Propag. Mag.49(1), 68–81 (2007).
Zhang, C., Qui, C. R.: Massive MIMO as a Big Data System: Random Matrix Models and Testbed. IEEE Access. 3, 837–851 (2015).
Zhou, S., Alfano, G., Nordio, A., Chiasserini, C.: Ergodic capacity analysis of MIMO relay network over Rayleigh-Rician channels. IEEE Commun. Lett.19(4), 601–604 (2015).
The authors acknowledge the support of the StatDisT group at the University of Pretoria, South Africa, as well as Dr. D.A. Burger for editorial assistance and insight. Furthermore, the authors thank the anonymous reviewer, associate editor, as well as the Editor-in-Chief for their constructive suggestions which improved the paper.
This work is based on the research supported in part by the National Research Foundation of South Africa (SARChI Research Chair- UID: 71199; and Grant ref. CPRR160403161466 nr. 105840). Opinions expressed and conclusions arrived at are those of the author and are not necessarily to be attributed to the NRF.
Andriette Bekker and Johannes T Ferreira contributed equally to this work.
Department of Statistics, University of Pretoria, Lynnwood Road, Pretoria, South Africa
Johannes T. Ferreira
& Andriëtte Bekker
Search for Johannes T. Ferreira in:
Search for Andriëtte Bekker in:
JF wrote the manuscript draft as well as coding for the numerical examples. JF derived the mathematical expressions, where AB provided critical feedback and checked the derivations for correctness. Both authors read and approved the final manuscript.
Correspondence to Johannes T. Ferreira.
Multiple-input-multiple-output (MIMO)
Rank one
Wishart type | CommonCrawl |
Convergence theorems for the Non-Local Means filter
IPI Home
August 2018, 12(4): 831-852. doi: 10.3934/ipi.2018035
Geometric mode decomposition
Siwei Yu 1, , Jianwei Ma 1, and Stanley Osher 2,
Department of Mathematics, Harbin Institute of Technology, Harbin 150001, China
Department of Mathematics, University of California, Los Angeles, CA, 90095, USA
Received October 2016 Revised March 2018 Published June 2018
Fund Project: The second author is supported by NSFC under grant nos. 41625017 and 91730306, and National Key Research and Development Program of China under grant no. 2017YFB0202902
Full Text(HTML)
Figure(15)
We propose a new decomposition algorithm for seismic data based on a band-limited a priori knowledge on the Fourier or Radon spectrum. This decomposition is called geometric mode decomposition (GMD), as it decomposes a 2D signal into components consisting of linear or parabolic features. Rather than using a predefined frame, GMD adaptively obtains the geometric parameters in the data, such as the dominant slope or curvature. GMD is solved by alternatively pursuing the geometric parameters and the corresponding modes in the Fourier or Radon domain. The geometric parameters are obtained from the weighted center of the corresponding mode's energy spectrum. The mode is obtained by applying a Wiener filter, the design of which is based on a certain band-limited property. We apply GMD to seismic events splitting, noise attenuation, interpolation, and demultiple. The results show that our method is a promising adaptive tool for seismic signal processing, in comparisons with the Fourier and curvelet transforms, empirical mode decomposition (EMD) and variational mode decomposition (VMD) methods.
Keywords: Mode decomposition, seismic data decomposition, geometric features, seismic data processing.
Mathematics Subject Classification: Primary: 49M27; Secondary: 86A15, 86A22.
Citation: Siwei Yu, Jianwei Ma, Stanley Osher. Geometric mode decomposition. Inverse Problems & Imaging, 2018, 12 (4) : 831-852. doi: 10.3934/ipi.2018035
M. Aharon, M. Elad and A. Bruckstein, K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation, IEEE Transactions on Signal Processing, 54 (2006), 4311-4322. doi: 10.1109/TSP.2006.881199. Google Scholar
C. Bao, H. Ji and Z. Shen, Convergence analysis for iterative data-driven tight frame construction scheme, Applied and Computational Harmonic Analysis, 38 (2015), 510-523. doi: 10.1016/j.acha.2014.06.007. Google Scholar
B. M. Battista, C. Knapp, T. McGee and V. Goebel, Application of the empirical mode decomposition and hilbert-huang transform to seismic reflection data, Geophysics, 72 (2007), H29-H37. doi: 10.1190/1.2437700. Google Scholar
S. Beckouche and J. Ma, Simultaneous dictionary learning and denoising for seismic data, Geophysics, 79 (2014), A27-A31. doi: 10.1190/geo2013-0382.1. Google Scholar
M. Bekara and M. van der Baan, Random and coherent noise attenuation by empirical mode decomposition, SEG Technical Program Expanded Abstracts, (2008), 2591-2595. doi: 10.1190/1.3063881. Google Scholar
J.-F. Cai, H. Ji, Z. Shen and G.-B. Ye, Data-driven tight frame construction and image denoising, Applied and Computational Harmonic Analysis, 37 (2014), 89-105. doi: 10.1016/j.acha.2013.10.001. Google Scholar
L. L. Canales, Random noise reduction, Seg Technical Program Expanded Abstracts, 3 (1984), 329-329. doi: 10.1190/1.1894168. Google Scholar
E. J. Candès and D. L. Donoho, Ridgelets: A key to higher-dimensional intermittency?, Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 357 (1999), 2495–2509. doi: 10.1098/rsta.1999.0444. Google Scholar
Y. Chen and J. Ma, Random noise attenuation by fx empirical-mode decomposition predictive filtering, Geophysics, 79 (2014), V81-V91. Google Scholar
M. N. Do and M. Vetterli, The finite ridgelet transform for image representation, IEEE Transactions on Image Processing, 12 (2003), 16-28. doi: 10.1109/TIP.2002.806252. Google Scholar
M. N. Do and M. Vetterli, The contourlet transform: An efficient directional multiresolution image representation, IEEE Transactions on Image Processing, 14 (2005), 2091-2106. doi: 10.1109/TIP.2005.859376. Google Scholar
K. Dragomiretskiy and D. Zosso, Variational mode decomposition, IEEE Transactions on Signal Processing, 62 (2014), 531-544. doi: 10.1109/TSP.2013.2288675. Google Scholar
K. Dragomiretskiy and D. Zosso, Two-dimensional variational mode decomposition, in Energy Minimization Methods in Computer Vision and Pattern Recognition, Springer, 2015,197– 208. doi: 10.1109/TSP.2013.2288675. Google Scholar
G. Easley, D. Labate and W.-Q. Lim, Sparse directional image representations using the discrete shearlet transform, Applied and Computational Harmonic Analysis, 25 (2008), 25-46. doi: 10.1016/j.acha.2007.09.003. Google Scholar
W. Fan, H. Keil, V. Spieß, T. Mörz and C. Yang, Surface related multiple elimination-application on north sea shallow seismic dataset, in 73rd EAGE Conference and Exhibition Incorporating SPE EUROPEC 2011, 2011. doi: 10.3997/2214-4609.20149657. Google Scholar
S. Fomel, Adaptive multiple subtraction using regularized nonstationary regression, SEG Technical Program Expanded Abstracts, (2008), 3639-3642. doi: 10.1190/1.3064088. Google Scholar
D. J. Foster and C. C. Mosher, Suppression of multiple reflections using the radon transform, Geophysics, 57 (1992), 386-395. doi: 10.1190/1.1443253. Google Scholar
J. Gilles, Empirical wavelet transform, IEEE Transactions on Signal Processing, 61 (2013), 3999-4010. doi: 10.1109/TSP.2013.2265222. Google Scholar
J. Gilles, G. Tran and S. Osher, 2d empirical transforms. wavelets, ridgelets, and curvelets revisited, SIAM Journal on Imaging Sciences, 7 (2014), 157-186. doi: 10.1137/130923774. Google Scholar
T. Goldstein and S. Osher, The split Bregman method for L1-regularized problems, SIAM J. Imaging Sci., 2 (2009), 323-343. doi: 10.1137/080725891. Google Scholar
N. E. Huang, Z. Shen, S. R. Long, M. C. Wu, H. H. Shih, Q. Zheng, N.-C. Yen, C. C. Tung and H. H. Liu, The empirical mode decomposition and the hilbert spectrum for nonlinear and non-stationary time series analysis, in Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, The Royal Society, 454 (1998), 903–995. doi: 10.1098/rspa.1998.0193. Google Scholar
M. N. Kabir and K. J. Marfurt, Toward true amplitude multiple removal, The Leading Edge, 18 (1999), 66-73. doi: 10.1190/1.1438158. Google Scholar
X. Li, W. Chen and Y. Zhou, A robust method for analyzing the instantaneous attributes of seismic data: The instantaneous frequency estimation based on ensemble empirical mode decomposition, Journal of Applied Geophysics, 111 (2014), 102-109. doi: 10.1016/j.jappgeo.2014.09.017. Google Scholar
J. Liang, J. Ma and X. Zhang, Seismic data restoration via data-driven tight frame, Geophysics, 79 (2014), V65-V74. doi: 10.1190/geo2013-0252.1. Google Scholar
B. Liu and M. D. Sacchi, Minimum weighted norm interpolation of seismic records, Geophysics, 69 (2004), 1560-1568. doi: 10.1190/1.1836829. Google Scholar
Y. Liu and M. D. Sacchi, De-multiple via a fast least squares hyperbolic radon transform, SEG Technical Program Expanded Abstracts, (2002), 2182-2185. doi: 10.1190/1.1817140. Google Scholar
Y. M. Lu and M. N. Do, Multidimensional directional filter banks and surfacelets, IEEE Transactions on Image Processing, 16 (2007), 918-931. doi: 10.1109/TIP.2007.891785. Google Scholar
J. Ma and G. Plonka, The curvelet transform, IEEE Signal Processing Magazine, 27 (2010), 118-133. doi: 10.1109/MSP.2009.935453. Google Scholar
J. Mairal, F. Bach, J. Ponce and G. Sapiro, Online dictionary learning for sparse coding, in Proceedings of the 26th Annual International Conference on Machine Learning, ACM, 2009,689–696. doi: 10.1145/1553374.1553463. Google Scholar
M. Naghizadeh and M. D. Sacchi, Beyond alias hierarchical scale curvelet interpolation of regularly and irregularly sampled seismic data, Geophysics, 75 (2010), WB189-WB202. doi: 10.1190/1.3509468. Google Scholar
M. Naghizadeh and M. Sacchi, Ground-roll elimination by scale and direction guided curvelet transform, in 73rd EAGE Conference and Exhibition incorporating SPE EUROPEC 2011, 2011. doi: 10.3997/2214-4609.20149212. Google Scholar
M. Naghizadeh, Seismic data interpolation and denoising in the frequency-wavenumber domain, Geophysics, 77 (2012), V71-V80. doi: 10.1190/geo2011-0172.1. Google Scholar
M. Naghizadeh and M. Sacchi, Multicomponent f-x seismic random noise attenuation via vector autoregressive operators, Geophysics, 77 (2012), V91-V99. doi: 10.1190/geo2011-0198.1. Google Scholar
S. Spitz, Seismic trace interpolation in the fx domain, Geophysics, 56 (1991), 785-794. Google Scholar
J.-L. Starck, E. J. Candès and D. L. Donoho, The curvelet transform for image denoising, IEEE Transactions on Image Processing, 11 (2002), 670-684. doi: 10.1109/TIP.2002.1014998. Google Scholar
J. B. Tary, R. H. Herrera, J. Han and M. Baan, Spectral estimation-what is new? what is next?, Reviews of Geophysics, 52 (2014), 723-749. doi: 10.1002/2014RG000461. Google Scholar
D. Trad, T. Ulrych and M. Sacchi, Latest views of the sparse radon transform, Geophysics, 68 (2003), 386-399. doi: 10.1190/1.1543224. Google Scholar
J. Wang, M. Ng and M. Perz, Fast high-resolution radon transforms by greedy least-squares method, in 2009 SEG Annual Meeting, Society of Exploration Geophysicists, (2009), 3128– 3132. doi: 10.1190/1.3255506. Google Scholar
S. Yu, J. Ma, X. Zhang and M. D. Sacchi, Interpolation and denoising of high-dimensional seismic data by learning a tight frame, Geophysics, 80 (2015), V119-V132. Google Scholar
S. Yu and J. Ma, Complex variational mode decomposition for slop-preserving denoising, IEEE Transactions on Geoscience and Remote Sensing, 56 (2017), 586-597. doi: 10.1109/TGRS.2017.2751642. Google Scholar
Figure 1. Support in the Fourier spectrum. (a) A 'texture' image. (b) Fourier spectrum of (a). The spectrum is band-limited. (c) A 'geometric' image with lines. (d) Fourier spectrum of (c). The spectrum is band-limited in the direction of the marked arrow
Figure Options
Download as PowerPoint slide
Figure 2. Support in the Radon spectrum. (a) A 'geometric' image with parabolic features. (b) Radon spectrum of (a). The spectrum is band-limited
Figure 3. The relationship between GMD and 2D VMD
Figure 4. Wiener filter with different a priori information. (a) and (b) Wiener filter with signal a priori $1/(\vec\omega-\vec\omega_k)^2$, with $\alpha = $ 500 and 5000, respectively. (c) and (d) Wiener filter with signal a priori $1/(\vec\omega\cdot\vec n_{\theta_k})^2$, with $\alpha = $ 500 and 5000, respectively
Figure 5. GMD-F applied to a synthetic seismic model consisting of three linear events. (a) Synthetic model. (b)-(d) Three decomposed modes. (e) Fourier spectrum and the trajectory of center frequencies. (b)-(d) Fourier spectra corresponding to (b)-(d)
Figure 6. Convergence analysis of $\omega_x$ in GMD-F
Figure 7. GMD-R applied to a synthetic seismic model consisting of three parabolic events. (a) Synthetic model. (b)-(d) Three decomposed modes. (e) Radon spectrum and the trajectory of ($\tau,p~$) pairs. (f)-(h) Radon spectra corresponding to (b)-(d)
Figure 8. GMD-R applied to a synthetic seismic model consisting of three parabolic events with similar slopes. (a) Synthetic model. (b)-(d) Three decomposed modes
Figure 9. GMD-R1. (a)-(b) The two decomposed modes. The first mode contains two events with similar slopes
Figure 10. Noise attenuation with GMD-F. (a) Original noisy data. (b) $FK$ spectrum of (a). (c) - (e) Denoising results of the GMD-F method (SNR = 10.77), the 1D VMD method (SNR = 6.75), and the $FX$ deconvolution method (SNR = 9.15). (f)-(h) Error between denoising results and noisy data corresponding to (c)-(e). (i)-(k) $FK$ spectra of (c)-(e)
Figure 11. Data interpolation with GMD-R. (a) $25\%$ regularly sub-sampled data. (c) Interpolated data with GMD-R. (e) Interpolated data with Spitz interpolation. (b), (d), and (f) $FK$ spectra of (a), (c), and (e)
Figure 12. Field data noise attenuation with GMD-F. (a) Field data. (b) Zoomed version of (a)
Figure 13. Field data noise attenuation with GMD-F. (a), (c), and (e) are the noise attenuation results of the GMD-F method, the curvelet method, and $FX$ deconvolution method, respectively. (b), (d), and (f) are the corresponding noise
Figure 14. Demultiple on NMO-corrected traces. (a) NMO-corrected traces. (b) Parabolic Radon spectrum. The two lines represent the two modes detected. (c) and (d) The separated multiple and primary with GMD-R1. $\alpha = 0.005$
Figure 15. Demultiple on NMO-corrected traces. (a) and (b) The separated multiple and primary with GMD-R1. $\alpha = 10^{-5}$. (c) and (d) The separated multiple and primary by directly muting the Radon spectrum
Yi Yang, Jianwei Ma, Stanley Osher. Seismic data reconstruction via matrix completion. Inverse Problems & Imaging, 2013, 7 (4) : 1379-1392. doi: 10.3934/ipi.2013.7.1379
Jonathan H. Tu, Clarence W. Rowley, Dirk M. Luchtenburg, Steven L. Brunton, J. Nathan Kutz. On dynamic mode decomposition: Theory and applications. Journal of Computational Dynamics, 2014, 1 (2) : 391-421. doi: 10.3934/jcd.2014.1.391
Steven L. Brunton, Joshua L. Proctor, Jonathan H. Tu, J. Nathan Kutz. Compressed sensing and dynamic mode decomposition. Journal of Computational Dynamics, 2015, 2 (2) : 165-191. doi: 10.3934/jcd.2015002
Hao Zhang, Scott T. M. Dawson, Clarence W. Rowley, Eric A. Deem, Louis N. Cattafesta. Evaluating the accuracy of the dynamic mode decomposition. Journal of Computational Dynamics, 2019, 0 (0) : 0-0. doi: 10.3934/jcd.2020002
Francesca Sapuppo, Elena Umana, Mattia Frasca, Manuela La Rosa, David Shannahoff-Khalsa, Luigi Fortuna, Maide Bucolo. Complex spatio-temporal features in meg data. Mathematical Biosciences & Engineering, 2006, 3 (4) : 697-716. doi: 10.3934/mbe.2006.3.697
Fengmin Xu, Yanfei Wang. Recovery of seismic wavefields by an lq-norm constrained regularization method. Inverse Problems & Imaging, 2018, 12 (5) : 1157-1172. doi: 10.3934/ipi.2018048
Raluca Felea, Venkateswaran P. Krishnan, Clifford J. Nolan, Eric Todd Quinto. Common midpoint versus common offset acquisition geometry in seismic imaging. Inverse Problems & Imaging, 2016, 10 (1) : 87-102. doi: 10.3934/ipi.2016.10.87
Weidong Bao, Wenhua Xiao, Haoran Ji, Chao Chen, Xiaomin Zhu, Jianhong Wu. Towards big data processing in clouds: An online cost-minimization approach. Big Data & Information Analytics, 2016, 1 (1) : 15-29. doi: 10.3934/bdia.2016.1.15
Min-Fan He, Li-Ning Xing, Wen Li, Shang Xiang, Xu Tan. Double layer programming model to the scheduling of remote sensing data processing tasks. Discrete & Continuous Dynamical Systems - S, 2019, 12 (4&5) : 1515-1526. doi: 10.3934/dcdss.2019104
Rakesh Pilkar, Erik M. Bollt, Charles Robinson. Empirical mode decomposition/Hilbert transform analysis of postural responses to small amplitude anterior-posterior sinusoidal translations of varying frequencies. Mathematical Biosciences & Engineering, 2011, 8 (4) : 1085-1097. doi: 10.3934/mbe.2011.8.1085
Stefano Bianchini, Daniela Tonon. A decomposition theorem for $BV$ functions. Communications on Pure & Applied Analysis, 2011, 10 (6) : 1549-1566. doi: 10.3934/cpaa.2011.10.1549
Fritz Colonius, Paulo Régis C. Ruffino. Nonlinear Iwasawa decomposition of control flows. Discrete & Continuous Dynamical Systems - A, 2007, 18 (2&3) : 339-354. doi: 10.3934/dcds.2007.18.339
Thiago Ferraiol, Mauro Patrão, Lucas Seco. Jordan decomposition and dynamics on flag manifolds. Discrete & Continuous Dynamical Systems - A, 2010, 26 (3) : 923-947. doi: 10.3934/dcds.2010.26.923
Mauro Patrão, Luiz A. B. San Martin. Morse decomposition of semiflows on fiber bundles. Discrete & Continuous Dynamical Systems - A, 2007, 17 (3) : 561-587. doi: 10.3934/dcds.2007.17.561
Simone Cacace, Maurizio Falcone. A dynamic domain decomposition for the eikonal-diffusion equation. Discrete & Continuous Dynamical Systems - S, 2016, 9 (1) : 109-123. doi: 10.3934/dcdss.2016.9.109
David Kazhdan and Yakov Varshavsky. Endoscopic decomposition of characters of certain cuspidal representations. Electronic Research Announcements, 2004, 10: 11-20.
Nataša Djurdjevac Conrad, Ralf Banisch, Christof Schütte. Modularity of directed networks: Cycle decomposition approach. Journal of Computational Dynamics, 2015, 2 (1) : 1-24. doi: 10.3934/jcd.2015.2.1
George Dassios, Michalis N. Tsampas. Vector ellipsoidal harmonics and neuronal current decomposition in the brain. Inverse Problems & Imaging, 2009, 3 (2) : 243-257. doi: 10.3934/ipi.2009.3.243
Vladimír Špitalský. Transitive dendrite map with infinite decomposition ideal. Discrete & Continuous Dynamical Systems - A, 2015, 35 (2) : 771-792. doi: 10.3934/dcds.2015.35.771
Tomás Caraballo, Juan C. Jara, José A. Langa, José Valero. Morse decomposition of global attractors with infinite components. Discrete & Continuous Dynamical Systems - A, 2015, 35 (7) : 2845-2861. doi: 10.3934/dcds.2015.35.2845
PDF downloads (202)
HTML views (258)
Siwei Yu Jianwei Ma Stanley Osher | CommonCrawl |
Spread-F occurrences and relationships with foF2 and h′F at low- and mid-latitudes in China
Ning Wang ORCID: orcid.org/0000-0001-9512-50141,2,
Lixin Guo1,
Zhenwei Zhao2,
Zonghua Ding2 &
Leke Lin2
Ionospheric irregularities are an important phenomenon in scientific studies and applications of radio-wave propagation. Spread-F echoes in ionograms are a type of high-frequency band irregularities that include frequency spread-F (FSF), range spread-F (RSF), and mixed spread-F (MSF) events. In this study, we obtained spread-F data from four ionosondes at low- and mid-latitudes near the 120°E chain in China during the 23rd solar cycle. We used these data to investigate spread-F occurrence percentages and variations with local time, season, latitude, and solar activity. The four ionosondes were located at Haikou (HK) (20°N, 110.34°E), Guangzhou (GZ) (23.14°N, 113.36°E), Beijing (BJ) (40.11°N, 116.28°E), and Changchun (CC) (43.84°N, 125.28°E). We also present possible correlations between spread-Fs and other ionospheric parameters, such as the critical frequency of the F2-layer (foF2) and the virtual height of the bottom-side F-layer (h′F). In particular, we investigated the possible threshold of the foF2 affecting the FSF and the relationship between the h′F and the RSF. The main conclusions are as follows: (a) the FSF occurrence percentages were anti-correlated with solar activity at all four sites; meanwhile, RSF occurrence rates increased with the increase in solar activity at HK, but not at the other three sites; (b) FSF occurrence rates were larger at the mid-latitudes than expected, while FSFs occurred more often after midnight; (c) the highest FSF occurrence rates mostly appeared during the summer months, while RSFs occurred mostly in the equinoctial months of 2000–2002 at HK and GZ; (d) a lower foF2 was suitable for FSF events; nevertheless, h′F and RSF occurrences satisfied the parabolic relationship; (e) the foF2 thresholds for FSFs were 15, 14, 7.6, and 7.8 MHz at HK, GZ, BJ, and CC, respectively. The h′Fs occurring between 240 and 290 km were more favorable for RSF occurrences. These results are important for understanding ionospheric irregularity variations in eastern Asia and for improving space weather modeling and forecasting capabilities
In the middle to late 1930s, ionospheric irregularities and the manner in which their electrodynamic mechanisms affected ionospheric behaviors began to attract the interest of many researchers (Abdu et al. 1981a, b, 1998, 2009; Booker and Wells 1938; Bowman 1974, 1990; Chandra and Rastogi 1970; Chou and Kuo 1996; de Jesus et al. 2013; Ossakow 1981; Xiong et al. 2012). Ionospheric irregularities appear as scattered echoes in high-frequency (HF) band ionograms that are known as spread-F events. Spread-Fs can manifest as frequency spread-Fs (FSF) that are broadened traces that mark reflections from the ionosphere along the frequency axis, or as range spread-Fs (RSF) that are along the vertical height axis. Many ground-based instruments (optical, ionosondes, and radar) and space-borne platforms (rockets and satellites) have been employed to explore the spread-F phenomenon over the past seven decades. These efforts have deepened our knowledge on spread-Fs showing that they vary with respect to latitude, local time, season, and solar and magnetic activity (Alfonsi et al. 2013; Banola et al. 2005; Chou and Kuo 1996; Deng et al. 2013; Huang et al. 1993; Scherliess and Fejer 1999). Different mechanisms have been proposed to explain spread-F occurrences and their development (Bowman 1990; Fejer et al. 1999; Fukao et al. 2004); among these, the primary mechanism in equatorial regions is the generalized Rayleigh–Taylor (R–T) instability mechanism. The R–T instability mechanism suggests that pre-reversal electric field enhancements (PRE) during the evening cause a rapid uplift of the ionosphere's F-layer (Fejer et al. 1999; Fukao et al. 2004; Manju et al. 2007; Sukanta et al. 2017; Xiong et al. 2012; Upadhayaya and Gupta 2014). Relationships between spread-Fs and other ionospheric parameters, particularly the F2-layer (foF2) and h′F variations with the occurrence of spread-Fs, have also been statistically examined (Rungraengwajiake et al. 2013; Joshi et al. 2013; Madhav Haridas et al. 2013; de Abreu et al. 2014a, b, c, 2017; Abadi et al. 2015; Manju and Madhav Haridas 2015; Smith et al. 2015; Liu and Shen 2017). In addition, the effects of seasonal, solar, and magnetic activity variabilities on the h′F threshold have also been investigated (Manju et al. 2007; Manju and Madhav Haridas 2015; Madhav Haridas et al. 2013; Stoneback et al. 2011; Narayanan et al. 2014, 2017).
Devasia et al. (2002) first introduced the concept of threshold height (h′Fc) as a critical parameter controlling the day-to-day equatorial spread-F (ESF) variability. Past studies have revealed the dependence of the h′Fc on seasonal variations and solar and magnetic activity for the occurrence of ESFs and found the occurrences to be irrespective of the magnitude and polarity of meridional winds (Jyoti et al. 2004; Manju et al. 2007). Rungraengwajiake et al. (2013) presented a comparative study of the correlation between h′F and RSF occurrences in Thailand, and the results showed that high RSF occurrences mostly happened during equinoctial months that corresponded to rapid increases in the monthly mean h′F after sunset. Joshi et al. (2013) found that the h′F plays a key role in determining the R–T instability growth rate. Madhav Haridas et al. (2013) presented the effects of seasonal and solar activity variations of the h′Fc on ESF occurrences in India and found that substantial increases in the h′Fc varied with magnetic activity during every season.
Similar studies in Brazil have been presented (de Abreu et al. 2014a, b, c) to show that the occurrence of ESFs are closely related to daily variations of the h′F near the equator. During periods of low solar activity (LSA), the 250 km h′F altitude acted as the h′Fc for the generation of spread-Fs, while the 300 km h′Fc was during periods of high solar activity (HSA). An investigation using measurements from multiple instruments over the American sector showed that spread-Fs were often observed the nights before and during storms near the equator, in which the foF2 was less than 8 MHz and the h′F was lower than 300 km (de Abreu et al. 2017).
Abadi et al. (2015) studied the influences of the h′F on the latitudinal extension of ionospheric irregularities in Southeast Asia. Their results suggested that the latitudinal extension of plasma bubbles was mainly controlled by the PRE magnitude and h′F peak values during the initial phases of the ESF. Manju and Madhav Haridas (2015) investigated the h′Fc for the occurrences of ESFs during equinoxes and showed that the equinoctial asymmetry of the h′Fc increases with solar activity. Aside from the studies mentioned above, there are few reports that consider the effect of the foF2 threshold on the generation of spread-F events. Liu and Shen (2017) conducted a case study during a severe geomagnetic storm near 120°E in China and showed that the spread-F was suppressed near Sanya and Wuhan during the storm's main phase when the frequency spread over 14 MHz, and the suppression was sustained for several hours. This helped us to understand the possible onset causes of the day-to-day spread-F variability.
Stoneback et al. (2011) investigated the local time distribution of meridional (vertical) drifts during the prolonged solar minimum. They found that the downward drifts across sunset and the upward drifts across midnight were also consistent with the delay in the appearance of ionospheric irregularities after midnight. Narayanan et al. (2014) studied the relationship between the occurrence of satellite traces (STs) in ionograms and the formation of ESFs using observations from an Indian dip equatorial station during solar minimum conditions. They found that the ST occurred later in the night as well implying that the PRE was not the cause of the ST during these times. Additionally, they also found that the STs were not followed by ESFs in about 30% of the cases indicating that large-scale wave-like structures (LSWS) do not trigger ESFs on all occasions. Narayanan et al. (2017) also found that the plasma bubbles were generated without strong PREs when the ion-neutral collision frequencies possibly dropped significantly during the unusually low solar activity conditions of 2008. Abdu et al. (2006) found that the existence of significant planetary wave (PW) influences on plasma parameters at E- and F-region heights over the equatorial latitudes using airglow, radar, and ionospheric sounding observations. A direct consequence of the PW scale oscillations in the evening electric field is its role in the quiet time day-to-day variability of the ESF/plasma bubble occurrences and intensities.
We limited our focus to spread-F occurrences and their relationships with foF2 and h′F that affected spread-F occurrences during a complete solar cycle in the low- and mid-latitudes over China. The International Reference Ionosphere-2012 (IRI-2012) model includes the monthly mean spread-F occurrences for predicting in the Brazilian longitude sector but not for Chinese sector. Therefore, the studies of spread-F occurrence statistics in China are part of an on-going effort to develop the spread-F occurrence prediction abilities to improve the IRI model. In the present study, we focused on the characteristics and correlations between spread-F occurrences and the foF2 and h′F. Furthermore, we also present the thresholds of the foF2 as they relate to the generation of FSFs.
The China Research Institute of Radio-wave Propagation (CRIRP) constructed and operated a network of long-running ionospheric observation sites that cover mainland China. In this study, we extracted simultaneous spread-F data from four digital ionosondes located at Haikou (HK) (20°N, 110.34°E), Guangzhou (GZ) (23.14°N, 113.36°E), Beijing (BJ) (40.11°N, 116.28°E), and Changchun (CC) (43.84°N, 125.28°E). In addition, we also determined the data characteristic of the foF2 and h′F at these sites to reveal possible correlations between spread-F occurrences and the foF2 and h′F. No data were recorded in December 1997 and from May to December 1999 at CC, because the ionosonde was being repaired. The observational site details are shown in Table 1.
Table 1 Details of the digital ionosonde sites used in the investigation
The HK and GZ sites lie near the north crest of the equatorial ionization anomaly (EIA) zone. The EIA zone is where the fountain effect phenomena and the equatorial electrojet often interact resulting in complicated ionospheric physical processes. BJ and CC are located at the mid-latitudes in China. According to previous studies, ionospheric irregularities greatly depend on solar activity, local time, season, latitude and longitude, and geomagnetic disturbances (Abdu et al. 1981a, b, 1983, 1998, 2009; Booker and Wells 1938; Bowman 1974; Chandra and Rastogi 1970; Maruyama 1988; Xiong et al. 2012). To discuss the correlations between spread-Fs and solar and geomagnetic activities, we show the monthly mean 10.7 cm radio flux (F10.7) and ap index during the 23rd solar cycle in Fig. 1 that covers the epochs of the LSA and HSA. We used a 3-hourly ap index to identify geomagnetically quiet and disturbed days. If the maximum value of the 3-hourly ap index for a day was greater than 12, the day was considered as a disturbed day (Narayanan et al. 2017). Figure 2 shows the daily max ap indices from 2000 to 2005. Further, it can be seen from the figure that there were more geomagnetically disturbed days during the vernal equinox and autumn equinoxes in 2001 and 2002.
Monthly averaged 10.7 cm solar flux (F10.7) (y axis: F10.7/sfu) and ap index from 1997 to 2008 denoting the solar activity
Daily max ap indices from 2000 to 2005
Ionogram data were collected using type TYC-1 ionosondes, which are designed and manufactured by the CRIRP (Xu et al. 2001). Ionograms were recorded at 1-h intervals for a frequency range from 1 to 32 MHz. We distinguished two types of spread-F, FSF, and RSF for detailed study. We used the percentage of spread-F occurrences to describe the spread-F statistical features, which is defined as follows:
$${\text{P}}\left( y, m, h \right) = \frac{{n \left( {y, m, h} \right)}}{{N\left( {y,m,h} \right)}} \times 100{\text{\% }}$$
where y, m, and h represent the year, month, and local time (LT), respectively; n is the number of spread-F occurrences that appear at the same local time but during different days of a single month, and N is the total number of days for a given year and local time. Spread-Fs typically appeared after sunset and lasted until the subsequent sunrise; thus, the percentage of spread-F occurrences from 18:00 LT to 06:00 LT is the topic of interest in this study. Occurrences of FSF and RSF were compared with monthly medians of the foF2 and h′F to find the correlations between foF2 and h′F for the generation of spread-Fs. The FSF, RSF, foF2, and h′F were differentiated by manually analyzing the ionograms. The foF2 and h′F can sometimes be measured, but sometimes cannot be obtained when a spread-F occurs. The foF2 and h′F cannot be obtained during a strong spread-F (SSF). SSFs are a type of spread-F that can be identified when there is strong diffusion on the frequency and height axis of an ionogram. Figure 3 shows a SSF event in Haikou on March 26, 2012. The observations presented in this manuscript contain data when the foF2 and h′F values were reliably scaled during a spread-F. To examine their seasonal variations, we grouped the data into the following four seasonal bins: summer (May, June, July and August), vernal equinox (March and April), autumn equinox (September and October), and winter (January, February, November and December) (Maruyama and Matuura 1984; Maruyama et al. 2009; Sripathi et al. 2011; Xiao and Zhang 2001).
A SSF event in Haikou on March 26, 2012
Nocturnal, seasonal, and solar activity variations on spread-F occurrences
The monthly mean of the FSF occurrence rates varied with local time and are presented separately in Fig. 4 for Haikou, Guangzhou, Beijing, and Changchun. It can be found that the FSF occurrences frequently appeared after midnight. Also, the FSF occurrences observed at different sites exhibited distinct local time distribution patterns. Previous studies have also observed this trend (Zhang et al. 2015; de Jesus et al. 2010, 2012, 2016). The FSF occurrence rates at HK, BJ and CC were higher than GZ. The maximum FSF occurrence rate was ~ 80% and occurred in July 1997 at HK, in August 2008 at BJ and in June 2006 at CC. The LSA yielded high FSF occurrence percentages at all four sites. The relationship between the FSF and solar activity was approximate to a negative correlation. The seasonal variation of the FSF occurrence rates observed at the four sites is shown in Fig. 5a–d. We found that FSFs occurred mostly during the summer at HK and the occurrence rate was lower between 1999 and 2002. FSF occurrence rates were higher during the autumn equinox than during the vernal equinox between 2000 and 2001 at HK. FSFs occurred mostly during the summer at GZ, however, scarcely occurred in 2002 and 2008. Statistically, the FSFs started at approximately 21:00 LT and lasted until 05:00 LT at HZ and CC. However, FSFs started at about 23:00 LT and lasted until 05:00 LT at GZ and BJ, with post-midnight FSFs as the most commonly observed.
Monthly mean FSF occurrence percentages at the four sites
Seasonal variation of the FSF occurrences observed at HK (a), GZ (b), BJ (c), and CC (d)
Figure 6 shows variations in the average RSF occurrence rates at the four sites. The RSF occurrence rate was much larger than the FSF occurrence rate at GZ; however, the rates were smaller than the FSF occurrence rates at BJ and CC. RSF occurrence rates increased with an increase in solar activity at HK, but not at the other three sites. The maximum RSF occurrence rate was higher than 80% in June 2006 and July 2007 at GZ. Figure 7 shows the seasonal RSF occurrence rate variations at the four sites. RSFs mostly occurred in the vernal equinox and autumn equinox months during HSA years at HK and GZ. These observations revealed that the RSF occurrence rate from 2000 to 2002 at HK and GZ were possibly affected by the geomagnetic activity according to Fig. 2. During the solar maximum period between 2000 and 2002, RSFs appeared earlier than during other periods, with a maximum RSF occurrence rate occurring between 21:00 LT and 01:00 LT at HK and GZ. Different from the FSF occurrences, higher RSF occurrence rates mostly occurred during the winter months at BJ and CC. Previous studies have emphasized that FSF events are well correlated with bottom-side layers, while RSFs are closely correlated with plumes. Additionally, the RSF occurrence rate reaches its maximum before midnight during HSA at low latitude, whereas that of an FSF reaches a maximum after midnight (Liu et al. 2004a, b; Chen et al. 2006; Aarons et al. 1994; Hu et al. 2004). This regular pattern was also observed at the four sites in China.
Same as Fig. 2, but for the RSF occurrences
Seasonal variation of the RSF occurrences at HK (a), GZ (b), BJ (c), and CC (d)
Abdu et al. (2003) showed that RSF events are associated with developed or developing plasma bubble events, while FSF events are associated with narrow-spectrum irregularities that occur near the peak of the F-layer. These results suggest that the upward velocity of plasma bubbles have a strong seasonal connection with the maximum values observed during the summer. Variations of FSF and RSF except for those during the 2000–2002 solar maximum period are mainly consistent with these studies. Rungraengwajiake et al. (2013) showed that FSF events appear later than RSF events on average and that FSFs remain until morning, while RSFs almost disappear by around 04:00 LT. The results shown in Figs. 5 and 7 are slightly different, which may be partly attributed to the effects of geomagnetic activity. Figure 2 shows the geomagnetic activity during the equinoxes in 2001 and 2002. It is possible these activities caused the RSFs to occur mainly during equinoxes at HK and GZ in 2001 and 2002. The peak FSF occurrence rate appeared later at GZ than at HK, which is well correlated with the manner in which fresh bubbles start from the latter station and then expand to high latitudes. The average FSF occurrence percentage mostly peaks from 24:00 LT to 02:00 LT at HK and from 03:00 LT to 05:00 LT at GZ. The average RSF occurrence percentages mostly peaked from 21:00 LT to 23:00 LT at HK and from 24:00 LT to 02:00 LT at GZ during periods of HSA. Meanwhile, RSF occurrence rates were higher at HK and GZ than those at BJ and CC; FSF occurrence rates were higher at HK, BJ, and CC than at GZ. These results support the hypothesis that solar and geomagnetic activity affects seasonal and longitudinal variations of spread-Fs.
Liu and Shen (2017) found that the disturbance of electric fields could also contribute to the occurrence of spread-Fs, especially at low-latitude stations. The disturbed electric fields and the disturbance winds are also the probable factors that promote the spread-F along with the gravity-driven R–T instability. In addition, the electric field disturbances can also generate spread-Fs through R–T instability only (de Jesus et al. 2010; Wang et al. 2014; Wan and Xu 2014; Mo et al. 2017). The disturbance of the dynamo driven by enhanced global thermospheric circulation resulting from energy input at high latitudes is another factor for promoting spread-Fs (de Jesus et al. 2010; Liu and Shen 2017). Therefore, it can be seen that there are many possible mechanisms for spread-F occurrences, and more in-depth analysis is needed.
Nocturnal, seasonal, and solar activity variations on foF2 and h′F
In Fig. 8 we showed local time and the variations in solar activity in the monthly median foF2 data from the 23rd solar cycle. At lower latitudes, a higher magnitude foF2 was sustained until midnight. In addition, another morphological feature of the monthly medians is the typical post-sunset peak values. Between 1998 and 2005, foF2 variations showed dual-peak patterns at HK and GZ that reached a minimum during the summer and a maximum during the spring and winter. Additionally, wintertime monthly medians of the foF2 were higher during the spring in 1998, but this result is inverted between 2003 and 2005. Figure 9 shows the seasonal variations of the averaged foF2 monthly median data at all four sites. The medians reached their peak magnitudes between 18:00 and 19:00 LT. In addition, the medians were mostly higher during equinox seasons at HK and GZ; however, they were mostly higher during the summer at BJ and CC. The highest foF2 medians were ~ 18 MHz and occurred from 18:00 LT to 24:00 LT at HK and GZ during periods of maximum solar activity. The minimal medians occurred before dawn from around 03:00–05:00 LT. The post-midnight collapse of the foF2 usually occurred more often at low latitudes than mid-latitudes.
Variation of the monthly median foF2 at the four sites between 1997 and 2008
Seasonal variation of the monthly median foF2 at HK (a), GZ (b), BJ (c), and CC (d)
Abdu et al. (1983) proposed that the h′F parameter may be a possible factor involved in the occurrence and variation of spread-Fs. Figure 10 shows the h′F monthly median data at all four sites, thus demonstrating that monthly medians were higher at HK and GZ than at BJ and CC. The peak median h′F values occurred before midnight during the summer in HSA at HK and GZ; however, the peak value onset time was later at high latitudes. During periods of HSA, monthly medians increase. Figure 11 shows the seasonal variation of the average h′F monthly median at the four sites, which is quite different from the foF2. The maximum h′F values occurred from 21:00 LT to 01:00 LT during summer months at HK and GZ; otherwise, they appeared at or before midnight from 2000 to 2002.
Same as Fig. 6, but for monthly medians of the h′F
Seasonal variation of the monthly medians of the h′F at HK (a), GZ (b), BJ (c), and CC (d)
The possible foF2 threshold for FSFs and the relationship between the h′F and RSF
The correlations between spread-F occurrence and the foF2 and h′F magnitudes are discussed in this section. Figures 12 and 13 show the post-sunset foF2 and h′F variations compared with the normalized spread-F occurrence rates at the four sites. In order to analyze the correlation between the spread-F occurrence rate and the foF2 and h′F, the normalized probability was used. The normalized spread-F occurrence rate is defined as follows:
$$p_{i} = \frac{{m}_{i} }{{\mathop \sum \nolimits_{i} {m}_{i} }}$$
$$\mathop \sum \limits_{i} {p}_{i} = 1$$
where p is the normalized FSF or RSF occurrence rate, m i is the number of FSF or RSF event occurrences when the foF2 or h′F is within a certain interval. We used 0.2 MHz and 5 km as the sampling intervals for the foF2 and h′F. The summation of m i is the total number of FSF or RSF event occurrences. We applied the polynomial fitting method during the relationship analysis between the foF2 and h′F and the spread-F occurrence rates. We found that the foF2 and FSF occurrences satisfy the linear relationship shown in Fig. 12, and the h′F and RSF occurrences are similar to parabolic relationship shown in Fig. 13. The red point is the sample value. The blue lines are a fitting line or curve. The FSF occurrence rates increased with a decrease in foF2 at each site, and the foF2 values ranged from 2.5 to 18 MHz at HK and GZ. A straight line is drawn when the normalized spread-F occurrence rate is equal to 0% as in Fig. 12. The intersection of this line and the blue line is considered the foF2 threshold. We estimated that the foF2 threshold at HK and GZ was ~ 15 and ~ 14 MHz because almost the FSF occurrence was ~ 0% when foF2 exceeded this magnitude. The foF2 values ranged from of 3–9 MHz at BJ and CC. Thus, the corresponding foF2 thresholds for BJ and CC were 7.6 and 7.8 MHz, respectively. It is evident that the foF2 variability was much larger at low latitudes than at mid-latitudes. There are few reports that consider the effect of the foF2 threshold on the generation of spread-F events. De Abreu et al. (2017) found that the spread-F was often observed during storms using measurements from multiple instruments over the American sector when the foF2 was below 8 MHz. De Abreu et al. (2017) showed that the post-sunset EIA is produced by the plasma fountain arising from the pre-reversal vertical drift enhancement in the F-region (as indicated by large sunset increases of h′F and decreases of foF2). Therefore, it can be seen that the rapidly changing Dst index will also affect spread-Fs; however, our research is not currently focused on ionospheric storms. The variation in foF2 at different latitudes suggests that the PRE is not the only factor to initiate FSFs. For example, the meridional wind can suppress the growth rate of the R–T instability, also attributing to the foF2 and FSF (Buonsanto and Titheridge 1987; Stoneback et al. 2011).
Correlation between foF2 and the normalized FSF occurrence percentages
Correlation between h′F and the normalized RSF occurrence percentages
Figure 13 shows the post-sunset h′F variations compared with the RSF occurrence rates at the four sites. The red point is the sample value. The blue line is the fit curve. The RSF occurrence rate and the h′F satisfy the parabolic relationship. When the probability of the RSF was ~ 25% of the maximum probability of occurrence, we treated that virtual height value as the threshold value. The h′F occurring between 240 and 290 km is more favorable for RSF occurrence by calculation, which is different from the relationship between foF2 and FSF. Figures 6, 10, and 13 indicate that the higher occurrence rates of RSFs are well correlated with higher post-sunset h′F peaks (Rungraengwajiake et al. 2013). Previous studies observed spread-Fs in the equatorial region on nights when the h′F was below 300 km (Abadi et al. 2015; Manju and Madhav Haridas 2015; Liu and Shen, 2017; de Abreu et al. 2017). Our results also support this conclusion. In addition, Devasia et al. (2002), Jyoti et al. (2004) and Manju et al. (2007) obtained an h′F threshold for the spread-F occurrences in their studies in India. Devasia et al. (2002) found a threshold of about ~ 300 km for the cases in their study. Our results also show that when the virtual height is greater than 300 km, the probability of an RSF is very small. Jyoti et al. (2004) showed a linear relationship between solar activity and the h′F threshold. Manju et al. (2007) investigated the dependence of the h′F threshold on seasonal and solar activity for magnetically quiet conditions and proposed the important role of neutral dynamics in controlling the day-to-day ESF variability. Abadi et al. (2015) found that latitudinal extension of plasma bubbles was mainly controlled by the h′F peak value during the initial phase of an ESF. Manju and Madhav Haridas (2015) showed that the equinoctial asymmetry of the h′Fc increases with solar activity. In this article, the correlation between the h′F threshold and the seasonal and solar activities are not involved, and we will also focus on this content. The new idea presented from our study is the correlation between RSF occurrences and the h′F, which are different from previous research results. In a follow-up study, we will examine the relationship between the h′F threshold for RSFs and the solar and geomagnetic activities and equinoctial asymmetry.
The correlation RSF occurrence percentages with rapidly increasing post-sunset monthly mean h′F values substantiated the role of the PRE enhancement on RSF onsets. Traveling planetary wave ionospheric disturbance (TPWID)-type oscillations (de Abreu et al. 2014a, c; Fagundes et al. 2009) in the modulation of the virtual height in the F-region increased during sunset hours. Meridional wind velocities corresponding to the post-sunset h′F for each spread-F event have been considered. Buonsanto and Titheridge (1987) found that the hmF2 dropped from 13:00 to 18:00 LT during the solar maximum periods because of the meridional wind. These results also indicate that the spread-F is a complex phenomenon, which implies that other possible factors can be ascribed to spread-F occurrences. The atmosphere ionosphere coupling process has been proposed as a contributing factor for spread-F development. Therefore, the connections between spread-F occurrence characteristics and the foF2 and h′F magnitudes deserve detailed investigation by additional theoretical and observational research. The foF2 and h′F thresholds also require further investigation using observations from different regions and under different solar activity conditions.
In this study, we presented variations of the spread-F, foF2, h′F, the possible threshold of the foF2 for FSF, and the relationship between the h′F and RSF. The data in our study were recorded by four stations at low- and mid-latitudes near 120°E longitude in China during the 23rd solar cycle. The major conclusions are summarized as follows:
The FSF occurrence rates increased during years of LSA at all four sites. FSFs mainly occurred during the summer months, while RSFs occurred mostly in the equinoctial months between 2000 and 2002 at HK and GZ. Post-midnight FSFs were the most observed type of spread-F events. The typical FSF onset time was about 21:00 LT, and the FSFs normally lasted until 05:00 LT, while the RSFs occurred 2–3 h earlier at HK and GZ during periods of HSA.
The foF2 and h′F peak values come mainly before midnight at low latitudes, while h′F peak values appeared after midnight at mid-latitudes during periods of HSA.
Lower foF2 values were appropriate for FSF events; nevertheless, h′F and RSF occurrences satisfied the parabolic relationship. Most FSF events occurred when the foF2 was below 15 and 14 MHz at HK and GZ, and below 7.6 and 7.8 MHz at BJ and CC. The h′Fs occurring between 240 and 290 km were more favorable for RSF occurrences, which differ from the foF2. However, some questions remain unresolved and further studies are in progress.
Our studies of FSFs and RSFs in China are useful and have the potential to be included in the future IRI model. However, even after such studies of spread-F onsets and growth conditions, some uncertainties remain. This requires further efforts to understand the spread-F phenomenon at different locations. Soon, long irregularity data coverage over the China sector will be studied. More ionospheric parameters will be compared with local time and seasonal spread-F variations to amplify knowledge of the involved physical mechanisms.
FSF:
the frequency spread-F
RSF:
the range spread-F
MSF:
the mixed spread-F
foF2:
the critical frequency of the F2-layer
h′F:
the virtual height of the bottom-side F-layer
EIA:
equatorial ionization anomaly
F10.7:
the monthly average data of 10.7 cm radio flux
Rayleigh–Taylor
pre-reversal electric field
TPWID:
traveling planetary wave ionospheric disturbance
HSA:
high solar activity
LSA:
low solar activity
CRIRP:
China Research Institute of Radio-wave Propagation
Aarons J, Mendillo M, Yantosca R (1994) GPS phase fluctuations in the equatorial region during the MISETA 1994 campaign. J Geophys Res 101:26851–26862
Abadi P, Otsuka Y, Tsugawa T (2015) Effects of pre-reversal enhancement of E × B drift on the latitudinal extension of plasma bubble in Southeast Asia. Earth Planets Space 67:74. https://doi.org/10.1186/s40623-015-0246-7
Abdu MA, Batista I, Bittencourt JA (1981a) Some characteristics of spread-F at the magnetic equatorial station Fortaleza. J Geophys Res 86:6836–6842
Abdu MA, Bittencourt JA, Batista IS (1981b) Magnetic declination control of the equatorial F region dynamo electric field development and spread F. J Geophys Res 86:11443–11446
Abdu MA, Medeiros RT, Bittencourt JA, Batista IS (1983) Vertical ionization drift velocities and range spread F in the evening equatorial ionosphere. J Geophys Res 88:399–402
Abdu MA, Sobral JHA, Batista IS, Rios VH, Medina C (1998) Equatorial spread-F occurrence statistics in the American longitudes: diurnal, seasonal and solar cycle variations. Adv Space Res 22:851–854
Abdu MA, Souza JR, Batista IS, Sobral JHA (2003) Equatorial spread F statistics and empirical representation for IRI: a regional model for the Brazilian longitude sector. Adv Space Res 31(3):703–716
Abdu MA, Ramkumar TK, Batista IS, Brum CGM, Takahashi H, Reinisch BW, Sobral JHA (2006) Planetary wave signatures in the equatorial atmosphere-ionosphere system, and mesosphere E- and F- region coupling. J Atmos Sol-Terr Phys 68:509–522. https://doi.org/10.1016/j.jastp.2005.03.019
Abdu MA, Alam Kherani E, Batista IS, de Paula ER, Fritts DC, Sobral JHA (2009) Gravity wave initiation of equatorial spread F/plasma bubble irregularities based on observational data from the SpreadFEx campaign. Ann Geophys 27:2607–2622
Alfonsi L, Spogli L, Pezzopane M, Romano V, Zuccheretti E, de Franceschi G, Cabrera MA, Ezquer RG (2013) Comparative analysis of spread-F signature and GPS scintillation occurrences at Tucuman, Argentina. J Geophys Res 118:4483–4502. https://doi.org/10.1002/jgra.50378
Banola S, Pathan BM, Rao DRK, Chandra H (2005) Spectral characteristics of scintillations producing ionospheric irregularities in the Indian region. Earth Planets Space 57:47–59
Booker HG, Wells HG (1938) Scattering of radio waves by the F-region of ionosphere. J Geophys Res 43:249–256
Bowman GG (1974) Ionospheric spread F at Huancayo, sunspot activity and geomagnetic activity. Planet Space Sci 22:1579–1583
Bowman GG (1990) A review of some recent work on midlatitude spread-F occurrence as detected by ionosondes. J Geomagn Geo Electr 42:109–138
Buonsanto MJ, Titheridge JE (1987) Diurnal variations in the flux of ionization above the F2 peak in the northern and southern hemispheres. J Atmos Sol-Terr Phys 49:1093–1105
Chandra H, Rastogi RG (1970) Solar cycle and seasonal variation of spread F near the magnetic equator. J Atmos Terr Phys 32:439–443
Chen HJ, Liu LB, Wan WX, Ning BQ, Lei JL (2006) A comparative study of the bottomside profile parameters over Wuhan with IRI-2001 for 1999-2004. Earth Planets Space 58:601–605
Chou SY, Kuo FS (1996) A numerical study of the wind field effect on the growth and observability of equatorial spread F. J Geophys Res 101:17137–17149
de Abreu AJ, Fagundes PR, Bolzan MJA, Gende M, Brunini C, de Jesus R, Pillat VG, Abalde JR, Lima WLC (2014a) Traveling planetary wave ionospheric disturbances and their role in the generation of equatorial spread-F and GPS phase fluctuations during the last extreme low solar activity and comparison with high solar activity. J Atmos Sol-Terr Phys 117:7–19. https://doi.org/10.1016/j.jastp.2014.05.005
de Abreu AJ, Fagundes PR, Gende M, Bolaji OS, de Jesus R, Brunini C (2014b) Investigation of ionospheric response to two moderate geomagnetic storms using GPS-TEC measurements in the South American and African sectors during the ascending phase of solar cycle 24. Adv Space Res 53:1313–1328. https://doi.org/10.1016/j.asr.2014.02.011
de Abreu AJ, Fagundes PR, Bolzan MJA, de Jesus R, Pillat VG, Abalde JR, Lima WLC (2014c) The role of the traveling planetary wave ionospheric disturbances on the equatorial F region post-sunset height rise during the last extreme low solar activity and comparison with high solar activity. J Atmos Sol-Terr Phys 113:47–57. https://doi.org/10.1016/j.jastp.2014.03.011
de Abreu AJ, Martin IM, Fagundes PR, Venkatesh K, Batista IS, de Jesus R, Rockenback M, Coster A, Gende M, Alves MA, Wild M (2017) Ionospheric F-region observations over American sector during an intense space weather event using multi-instruments. J Atmos Sol-Terr Phys 156:1–14. https://doi.org/10.1016/j.jastp.2017.02.009
de Jesus R, Sahai Y, Guarnieri FL, Fagundes PR, de Abreu AJ, Becker-Guedes F, Brunini C, Gende M, Cintra TMF, de Souza VA, Pillat VG, Lima WLC (2010) Effects observed in the ionospheric F-region in the South American sector during the intense geomagnetic storm of 14 December 2006. Adv Space Res 46:909–920. https://doi.org/10.1016/j.asr.2010.04.031
de Jesus R, Sahai Y, Guarnieri FL, Fagundes PR, de Abreu AJ, Bittencourt JA, Nagatsuma T, Huang CS, Lan HT, Pillat VG (2012) Ionospheric response of equatorial and low latitude F-region during the intense geomagnetic storm on 24-25 August 2005. Adv Space Res 49:518–529. https://doi.org/10.1016/j.asr.2011.10.020
de Jesus R, Sahai Y, Fagundes PR, de Abreu AJ, Brunini C, Gende M, Bittencourt JA, Abalde JR, Pillat VG (2013) Response of equatorial, low- and mid-latitude F-region in the American sector during the intense geomagnetic storm on 24-25 October 2011. Adv Space Res 52:147–157. https://doi.org/10.1016/j.asr.2013.03.017
de Jesus R, Fagundes PR, Coster A, Bolaji OS, Sobral JHA, Batista IS, de Abreu AJ, Venkatesh K, Gende M, Abalde JR, Sumod SG (2016) Effects of the intense geomagnetic storm of September–October 2012 on the equatorial, low- and mid-latitude F region in the American and African sector during the unusual 24th solar cycle. J Atmos Sol-Terr Phys 138–139:93–105. https://doi.org/10.1016/j.jastp.2015.12.015
Deng BC, Huang J, Liu WF, Xu J, Huang LF (2013) GPS scintillation and TEC depletion near the northern crest of equatorial anomaly over South China. Adv Space Res 51:356–365. https://doi.org/10.1016/j.asr.2012.09.008
Devasia CV, Jyoti N, Subbarao KSV, Viswanathan KS, Tiwari D, Sridharan R (2002) On the plausible linkage of thermospheric meridional winds with equatorial spread F. J Atmos Sol-Terr Phys 64:1–12
Fagundes PR, Abalde JR, Bittencourt JA, Sahai Y, Francisco RG, Pillat VG, Lima WLC (2009) F layer postsunset height rise due to electric field prereversal enhancement: 2. Traveling planetary wave ionospheric disturbances and their role on the generation of equatorial spread F. J Geophy Res Space Phys 114(A12). https://doi.org/10.1029/2009JA014482
Fejer BG, Scherliess L, de Paula ER (1999) Effects of the vertical plasma drift velocity on the generation and evolution of equatorial spread F. J Geophys Res 104:19854–19869
Fukao S, Ozawa Y, Yokoyama T, Yamamoto M (2004) First observations of the spatial structure of F region 3-m-scale field-aligned irregularities with the equatorial atmosphere radar in Indonesia. J Geophys Res 109:A02304. https://doi.org/10.1029/2003JA010096
Hu LH, Ning BQ, Li GZ, Li M (2014) Observations on the field-aligned irregularities using Sanya VHF radar: 4. June solstitial F region echoes in solar minimum. Chinese J Geophys 57(1):1–9
Huang CS, Kelley MC, Hysell DL (1993) Nonlinear Rayleigh-Taylor instabilities, atmospheric gravity waves and equatorial spread F. J Geophys Res 98:15631–15642
Joshi LM, Patra AK, Rao SVB (2013) Low-latitude Es capable of controlling the onset of equatorial spread F. J Geophys Res 118:1170–1179. https://doi.org/10.1002/jgra.50189
Jyoti N, Devasia CV, Sridharan R, Diwakar Tiwari (2004) Threshold height (h′F)c for the meridional wind to play a deterministic role in the bottom side equatorial spread F and its dependence on solar activity. Geophys Res Lett 31:L12809. https://doi.org/10.1029/2004GL019455
Liu GQ, Shen H (2017) A severe negative response of the ionosphere to the intense geomagnetic storm on March 17, 2015 observed at mid- and low-latitude stations in the China zone. Adv Space Res 59:2301–2312. https://doi.org/10.1016/j.asr.2017.02.021
Liu JH, Liu LB, Wan WX, Zhang SR (2004a) Modeling investigation of ionospheric storm effects over Millstone Hill during August 4-5, 1992. Earth Planets Space 56:903–908
Liu LB, Wan WX, Lee CC, Ning BQ, Liu JY (2004b) The low latitude ionospheric effects of the April 2000 magnetic storm near the longitude 120°E. Earth Planets Space 56:607–612
Madhav Haridas MK, Manju G, Kumar Pant T (2013) First observational evidence of the modulation of the threshold height h′Fc for the occurrence of equatorial spread F by neutral composition changes. J Geophys Res 118:3540–3545. https://doi.org/10.1002/jgra.50331
Manju G, Madhav Haridas MK (2015) On the equinoctial asymmetry in the threshold height for the occurrence of equatorial spread F. J Atmos Sol-Terr Phys 124:59–62. https://doi.org/10.1016/j.jastp.2015.01.008
Manju G, Devasia CV, Sridharan R (2007) On the seasonal variations of the threshold height for the occurrence of equatorial spread F during solar minimum and maximum years. Ann Geophys 25:855–861
Maruyama T (1988) A diagnostic model for equatorial spread F 1. Model description and application to electric field and neutral winds effects. J Geophys Res 93:14611–14622
Maruyama T, Matuura N (1984) Longitudinal variability of annual changes in activity of equatorial spread F and plasma bubbles. J Geophys Res 89:10903–10912
Maruyama T, Saito S, Kawamura M, Nozaki K, Krall J, Huba JD (2009) Equinoctial asymmetry of a low-latitude ionosphere-thermosphere system and equatorial irregularities: evidence for meridional wind control. Ann Geophys 27:2027–2034
Mo XH, Zhang DH, Goncharenko L, Zhang SR, Hao YQ, Xiao Z, Pei JZ, Yoshikawa A, Chau H (2017) Meridional movement of northern and southern equatorial ionization anomaly crests in the East-Asian sector during 2002-2003 SSW. Sci China Earth Sci 60:776–785. https://doi.org/10.1007/s11430-016-0096-y
Narayanan VL, Sau S, Gurubaran S, Shiokawa K, Balan Nanan, Emperumal K, Sripathi S (2014) A statistical study of satellite traces and evolution of equatorial spread F. Earth Planets Space 66:160. https://doi.org/10.1186/s40623-014-0160-4
Narayanan VL, Gurubaran S, Berlin Shiny MB, Emperumal K, Patil PT (2017) Some new insights of the characteristics of equatorial plasma bubbles obtained from Indian region. J Atmos Sol-Terr Phys 156:80–86. https://doi.org/10.1016/j.jastp.2017.03.006
Ossakow SL (1981) Spread F theories - a review. J Atmos Sol-Terr Phys 43:437–452
Rungraengwajiake S, Supnithi P, Tsugawa T, Maruyama T, Nagatsuma T (2013) The variation of equatorial spread-F occurrences observed by ionosondes at Thailand longitude sector. Adv Space Rec 52:1809–1819. https://doi.org/10.1016/j.asr.2013.07.041
Scherliess L, Fejer BG (1999) Radar and satellite global equatorial F region vertical drift model. J Geophys Res 104:6829–6842
Smith JM, Rodrigues FS, de Palua ER (2015) Radar and satellite investigations of equatorial evening vertical drifts and spread F. Ann Geophys 33:1403–1412. https://doi.org/10.5194/angeo-33-1403-2015
Sripathi S, Kakad B, Bhattacharyya A (2011) Study of equinoctial asymmetry in the Equatorial Spread F (ESF) irregularities over Indian region using multi-instrument observations in the descending phase of solar cycle 23. J Geophys Res 116:A11302. https://doi.org/10.1029/2011JA016625
Stoneback RA, Heelis RA, Burrell AG, Coley WR, Fejer BG, Pacheco E (2011) Observations of quiet time vertical ion drift in the equatorial ionosphere during the solar minimum period of 2009. J Geophys Res 116:A12327. https://doi.org/10.1029/2011JA016712
Sukanta Sau, Narayanan VL, Gurubaran S, Ghodpage Rupesh N, Patil PT (2017) First observation of interhemispheric asymmetry in the EPBs during the St. Patrick's Day geomagnetic storm of 2015. J Geophys Res 122:6679–6688. https://doi.org/10.1002/2017JA024213
Upadhayaya AK, Gupta S (2014) A statistical analysis of occurrence characteristics of spread-F irregularities over Indian region. J Atmos Sol-Terr Phys 112:1–9. https://doi.org/10.1016/j.jastp.2014.01.019
Wan WX, Xu JY (2014) Recent investigation on the coupling between the ionosphere and upper atmosphere. Sci China Earth Sci 57:1995–2012. https://doi.org/10.1007/s11430-014-4923-3
Wang Z, Shi JK, Torkar K, Wang GJ, Wang X (2014) Correlation between ionospheric strong range spread F and scintillations observed in Vanimo station. J Geophys Res 119:8578–8585. https://doi.org/10.1002/2014JA020447
Xiao Z, Zhang TH (2001) A theoretical analysis of global characteristics of spread-F. Chin Sci Bull 46:1593–1594
Xiong C, Luhr H, Ma SY, Stolle C, Fejer BG (2012) Features of highly structured equatorial plasma irregularities deduced from CHAMP observations. Ann Geophys 30:1259–1269. https://doi.org/10.5194/angeo-30-1259-2012
Xu T, Wu ZS, Hu YL, Wu J, Suo YC, Feng J (2010) Statistical analysis and model of spread F occurrence in China. Sci China Tech Sci 53:1725–1731. https://doi.org/10.1007/s11431-010-3169-3
Zhang Y, Wan W, Li G, Liu L, Hu L, Ning B (2015) A comparative study of GPS ionospheric scintillations and ionogram spread F over Sanya. Ann Geophys 33:1421–1430. https://doi.org/10.5194/angeo-33-1421-2015
WN designed the study, analyzed the data, and wrote the manuscript. GLX and ZZW contributed related analysis on data from HK and GZ. DZH and LLK helped with the text of the paper, particularly with the introduction and comparison with previous works. All coauthors contributed to the revision of the draft manuscript and improvement of the discussion. All authors read and approved the final manuscript.
Authors' information
Ning Wang, is currently a Ph.D. student at Xidian University. She also is an Associate Professor at the China Research Institute of Radiowave Propagation. She has authored and coauthored 8 patents and over 15 journal articles. Her current research interests are in ionospheric irregularities and ionosphere radiowave propagation. Dr. Linxin Guo is currently a Professor and Head of the School of Physics and Optoelectronic Engineering Science at Xidian University, China. He has been a Distinguished Professor of the Changjiang Scholars Program since 2014. He has authored and coauthored 4 books and over 300 journal articles. Dr. Zhenwei Zhao is currently a Professor and Chief engineer at the China Research Institute of Radiowave Propagation. His current positions include: Chairman of the ITU-R SG3 in China; Head of the Chinese Delegation of ITU-R SG3; Lead expert for the Asia-Pacific Space Cooperation Organization (APSCO). Dr. Zonghua Ding is currently an Associate Professor at the China Research Institute of Radiowave Propagation. His current research interests are in ionosphere and ionosphere radiowave propagation. Dr. Leke Lin is currently a Professor at the China Research Institute of Radiowave Propagation. He has participated in the activities of the ITU-R study group 3 and has submitted about 40 contributions to the ITU-R SG3.
The authors acknowledge the Data Center of the China Research Institute of Radio-wave Propagation for help with ionogram scaling and classification. The authors would like to thank Dr. Shuji Sun and Dr. Tong Xu for proofreading this manuscript. The authors would also like to thank the anonymous referee for the useful comments and suggestions for improving the paper.
Regretfully, the data used in this manuscript cannot be shared because they belonged to the China Research Institute of Radio-wave Propagation (CRIRP).
Written informed consent was obtained from study participants for participation in the study and for the publication of this report and any accompanying images. Consent and approval for publication was also obtained from Xidian University and China Research Institute of Radio-wave Propagation.
This research was supported by the National Natural Science Foundation of China (Grant No. 41604129) and the National Key Laboratory Foundation of Electromagnetic Environment (Grant Nos. A171501016, A171601003, A161601002, and B041605003). The funds from Grant No. 41604129 were used for data collection and analysis. The funds from Grant Nos. A171501016, A171601003, A161601002, and B041605003 were used for manuscript preparation.
School of Physics and Optoelectronic Engineering, Xidian University, Xi'an, Shaanxi, 710071, China
Ning Wang & Lixin Guo
National Key Laboratory of Electromagnetic Environment, China Research Institute of Radio-wave Propagation, Qingdao, Shandong, 266107, China
Ning Wang, Zhenwei Zhao, Zonghua Ding & Leke Lin
Ning Wang
Lixin Guo
Zhenwei Zhao
Zonghua Ding
Leke Lin
Correspondence to Ning Wang.
Wang, N., Guo, L., Zhao, Z. et al. Spread-F occurrences and relationships with foF2 and h′F at low- and mid-latitudes in China. Earth Planets Space 70, 59 (2018). https://doi.org/10.1186/s40623-018-0821-9
Ionospheric irregularities
Spread-F occurrence percentage
foF2 threshold for FSF
Relationship between h′F and RSF
3. Space science | CommonCrawl |
Almost periodic dynamical behaviors of the hematopoiesis model with mixed discontinuous harvesting terms
Limiting behavior of trajectory attractors of perturbed reaction-diffusion equations
Fully decoupled schemes for the coupled Schrödinger-KdV system
Jiaxiang Cai 1,, , Juan Chen 2, and Bin Yang 1,
School of Mathematical Science, Huaiyin Normal University, Huaian, Jiangsu 223300, China
Department of Basis Education, Jiangsu Vocational College of Finance & Economics, Huaian, Jiangsu, 223003, China
* Corresponding author: [email protected] (J. Cai)
Received July 2018 Revised December 2018 Published April 2019
Fund Project: The first author is supported by the Natural Science Foundation of Jiangsu Province of China grant BK20181482, Qing Lan Project of Jiangsu Province of China and Jiangsu Overseas Visiting Scholar Program for University Prominent Young & Middle-aged Teachers and President
The coupled numerical schemes are inefficient for the time-dependent coupled Schrödinger-KdV system. In this study, some splitting schemes are proposed for the system based on the operator splitting method and coordinate increment discrete gradient method. The schemes are decoupled, so that each of the variables can be solved separately at each time level. Ample numerical experiments are carried out to demonstrate the efficiency and accuracy of our schemes.
Keywords: Schrödinger-KdV equation, splitting method, Hamiltonian system, discrete gradient method, structure-preserving algorithm.
Mathematics Subject Classification: 65P10, 65N35, 65N06.
Citation: Jiaxiang Cai, Juan Chen, Bin Yang. Fully decoupled schemes for the coupled Schrödinger-KdV system. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2019069
K. O. Aiyesimoju and R. J. Sobey, Process splitting of the boundary conditions for the advection-dispersion equation, Int. J. Numer. Methods Fluids, 9 (1989), 235-244. doi: 10.1002/fld.1650090208. Google Scholar
P. Amorim and M. Figueira, Convergence of a numerical scheme for a coupled Schrödinger-KdV system, Rev. Mat. Complut., 26 (2013), 409-426. doi: 10.1007/s13163-012-0097-8. Google Scholar
K. Appert and J. Vaclavik, Dynamics of coupled solitons, Phys. Fluids, 20 (1977), 1845-1849. doi: 10.1063/1.861802. Google Scholar
U. M. Ascher and R. I. McLachlan, Multisymplectic box schemes and the Korteweg-de Vries equation, Appl. Numer. Math., 48 (2004), 255-269. doi: 10.1016/j.apnum.2003.09.002. Google Scholar
D. M. Bai and L. M. Zhang, The finite element method for the coupled Schrödinger-KdV equations, Phys. Lett. A, 373 (2009), 2237-2244. doi: 10.1016/j.physleta.2009.04.043. Google Scholar
J. Cai, C. Bai and H. Zhang, Efficient schemes for the coupled Schrödinger-KdV equations: Decoupled and conserving three invariants, Appl. Math. Lett., 86 (2018), 200-207. doi: 10.1016/j.aml.2018.06.038. Google Scholar
J. Cai, Y. Wang and C. Jiang, Local structure-preserving algorithms for general multi-symplectic Hamiltonian PDEs, Comput. Phys. Commun., 235 (2019), 210-220. doi: 10.1016/j.cpc.2018.08.015. Google Scholar
J. X. Cai, C. Z. Bai and H. H. Zhang, Decoupled local/global energy-preserving schemes for the $N$-coupled nonlinear Schrödinger equations, J. Comput. Phys., 374 (2018), 281-299. doi: 10.1016/j.jcp.2018.07.050. Google Scholar
J. Cai, B. Yang and C. Zhang, Efficient mass- and energy-preserving schemes for the coupled nonlinear Schrödinger-Boussinesq system, Appl. Math. Lett., 91 (2019), 76-82. doi: 10.1016/j.aml.2018.11.024. Google Scholar
J. X. Cai, J. L. Hong, Y. S. Wang and Y. Z. Gong, Two energy-conserved splitting methods for three-dimensional time-domain Maxwell's equations and the convergence analysis, SIAM J. Numer. Anal., 53 (2015), 1918-1940. doi: 10.1137/140971609. Google Scholar
E. Celledoni, V. Grimm, R. I. McLachlan, D. I. McLaren, D. O'Neale, B. Owren and G. R. W. Quispel, Preserving energy resp. dissipation in numerical PDEs using the ``Average Vector Field" method, J. Comput. Phys., 231 (2012), 6770-6789. doi: 10.1016/j.jcp.2012.06.022. Google Scholar
E. Fan, Multiple travelling wave solutions of nonlinear evolution equations using a unified algebraic method, J. Phys. A: Math. Gen., 35 (2002), 6853-6872. doi: 10.1088/0305-4470/35/32/306. Google Scholar
A. Golbabai and A. S. Vaighani, A meshless method for numerical solution of the coupled Schrödinger-KdV equations, Computing, 92 (2011), 225-242. doi: 10.1007/s00607-010-0138-4. Google Scholar
Y. Z. Gong, J. Q. Gao and Y. S. Wang, High order Gauss-Seidel schemes for charged particle dynamics, Discrete Cont. Dyn. B, 23 (2018), 573-585. doi: 10.3934/dcdsb.2018034. Google Scholar
O. Gonzalez and J. C. Simo, On the stability of symplectic and energy-momentum algorithms for nonlinear Hamiltonian systems with symmetry, Comput. Methods Appl. Mech. Eng., 134 (1996), 197-222. doi: 10.1016/0045-7825(96)01009-2. Google Scholar
E. Hairer, C. Lubich and G. Wanner, Geometric Numerical Integration: Structure-preserving Algorithms for Ordinary Differential Equations, 2nd edition, Springer-Verlag, Berlin, 2006. Google Scholar
M. S. Ismail, F. M. Mosally and K. M. Alamoudi, Petrov-Galerkin method for the coupled nonlinear Schödinger-KdV equation, Abstr. Appl. Anal., 2014 (2014), Art. ID 705204, 8 pp. doi: 10.1155/2014/705204. Google Scholar
T. Itoh and K. Abe, Hamiltonian-conserving discrete canonical equations based on variational difference quotients, J. Comput. Phys., 76 (1998), 85-102. doi: 10.1016/0021-9991(88)90132-5. Google Scholar
R. J. LeVeque, Intermediate boundary conditions for time-split methods applied to hyperbolic partial differential equations, Math. Comput., 47 (1986), 37-54. doi: 10.1090/S0025-5718-1986-0842122-8. Google Scholar
Y. Q. Liu, R. J. Cheng and H. X. Ge, An element-free Galerkin (EFG) method for numerical solution of the coupled Schrödinger-KdV equations, Chin. Phys. B, 22 (2013), 100204, 9pp. doi: 10.1088/1674-1056/22/10/100204. Google Scholar
J. E. Marsden and A. Weinstein, The Hamiltonian structure of the Maxwell-Vlasov equations, Phys. D, 4 (1982), 394-406. doi: 10.1016/0167-2789(82)90043-4. Google Scholar
Ö. Oruc and A. Esen, A Haar wavelet collocation method for coupled nonlinear Schödinger-KdV equations, Int. J. Modern Phys. C, 27 (2016), 1650103, 16pp. doi: 10.1142/S0129183116501035. Google Scholar
G. R. W. Quispel and D. I. McLaren, A new class of energy-preserving numerical integration methods, J. Phys. A, 41 (2008), 045206, 7pp. doi: 10.1088/1751-8113/41/4/045206. Google Scholar
M. Suzuki, Fractal decomposition of exponential operators with applications to many-body theories and Monte Carolo simulations, Phys. Lett. A, 146 (1990), 319-323. doi: 10.1016/0375-9601(90)90962-N. Google Scholar
X. P. Wang, C. J. García-Cervera and W. N. E, A Gauss-Seidel projection method for micromagnetics simulations, J. Comput. Phys., 171 (2001), 357-372. doi: 10.1006/jcph.2001.6793. Google Scholar
H. Yoshida, Construction of higher order symplectic integrators, Phys. Lett. A, 150 (1990), 262-268. doi: 10.1016/0375-9601(90)90092-3. Google Scholar
Z. Zhang, S. S. Song, X. D. Chen and W. E. Zhou, Average vector field methods for the coupled Schrödinger-KdV equations, Chin. Phys. B, 23 (2014), 070208, 9pp. doi: 10.1088/1674-1056/23/7/070208. Google Scholar
Figure 1. The solutions for the CS-KdV system at $ T = 50 $. Solid line: exact solution; Star: numerical solutions
Figure 2. Top: the errors in solution; Bottom: the changes in invariants
Figure 3. Left: the maximal error in solution Vs. time step (Red: S-CI-1; Blue: S-CI-2$ \hat{b} $; Square: $ E $; Circle: $ N $); Right: the changes in invariants Vs. time step (Red: S-CI-1; Blue: S-CI-2$ \hat{b} $; Square: $ \mathcal{I}_1 $; Star: $ \mathcal{I}_3 $)
Figure 4. Left: the maximal error in solution Vs. CPU time (Circle: S-CI-1; Star: S-AVF-2; Square: S-CI-2$ \hat{a} $; Diamond: S-CI-2$ \underline{a} $; Red triangle: AVFS [27])
Figure 5. The numerical (Star) and exact (solid line) solutions at $ T = 1 $ for the case $ \gamma = 0.1 $
Figure 6. The numerical (Star) and exact (solid line) solutions at $ T = 1 $ for the case $ \gamma = 1 $
Figure 7. The errors in solution (top) and the relative changes in invariants (bottom) for the cases $ \gamma = 1 $ (left) and $ \gamma = 10 $ (right), respectively
Figure 8. The numerical (circle) and exact solutions (solid line) for the case $ \gamma = 10 $
Table 1. The solution errors for the CS-KdV system (1): $ x\in[-30,30] $, $ \Delta x = 0.5 $, $ \tau = 0.1 $ and $ T = 10 $
Method e2,p e2,q e2,N ${{\rm{e}}_{\infty ,p}}$ ${{\rm{e}}_{\infty ,q}}$ ${{\rm{e}}_{\infty ,N}}$
$\;{\rm{S-CI}}-2\hat a$ 7.16e-3 7.81e-3 1.27e-4 2.98e-3 6.02e-3 1.80e-4
${\rm{S-CI}}-2\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{a}$ 7.16e-3 7.81e-3 1.21e-4 2.98e-3 6.01e-3 1.71e-4
${\rm{S-CI}}-2\hat b$ 7.12e-3 7.75e-3 1.35e-4 3.08e-3 5.95e-3 1.91e-4
${\rm{S-CI}}-2\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{b}$ 7.12e-3 7.75e-3 1.38e-4 3.08e-3 5.95e-3 1.95e-4
AVF[27] 7.13e-3 7.80e-3 3.27e-4 2.95e-3 5.98e-3 1.13e-4
AVFS[27] 7.16e-3 7.81e-3 5.05e-4 2.99e-3 6.01e-3 1.71e-4
EFG[20] 9.28e-3 1.42e-2 2.09e-3 3.45e-3 9.53e-3 7.74e-4
Table 2. The maximal solution errors for the CS-KdV system (1): $ x\in[-50,50] $, $ \Delta x = 0.1 $, $ \tau = 0.1 $ and $ T = 8 $
Method ${{\rm{e}}_{\infty ,E}}$ ${{\rm{e}}_{\infty ,N}}$
${\rm{S-CI}}-2\hat a$ 2.15e-4 1.69e-4
${\rm{S-CI}}-2\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{a}$ 1.98e-4 1.61e-4
${\rm{S-CI}}-2\hat b$ 7.41e-5 2.88e-5
${\rm{S-CI}}-2\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{b}$ 7.40e-5 2.68e-5
HW[22] 1.21e-4 1.14e-4
2-order PGM[17] 9.41e-5 2.92e-5
Table 3. The maximal solution errors for CS-KdV system (1): $ x\in[-50,50] $, $ \Delta x = 0.1 $, $ \tau = 0.0001 $ and $ T = 0.1 $
${\rm{S-CI}}-2\hat b$ 1.73e-5 2.57e-10
${\rm{S-CI}}-2\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{b}$ 1.73e-5 2.57e-10
4-order RK-PGM[17] 4.73e-5 5.65e-8
Qi Hong, Jialing Wang, Yuezheng Gong. Second-order linear structure-preserving modified finite volume schemes for the regularized long wave equation. Discrete & Continuous Dynamical Systems - B, 2017, 22 (11) : 1-20. doi: 10.3934/dcdsb.2019146
Alexander Zlotnik, Ilya Zlotnik. Finite element method with discrete transparent boundary conditions for the time-dependent 1D Schrödinger equation. Kinetic & Related Models, 2012, 5 (3) : 639-667. doi: 10.3934/krm.2012.5.639
Takeshi Fukao, Shuji Yoshikawa, Saori Wada. Structure-preserving finite difference schemes for the Cahn-Hilliard equation with dynamic boundary conditions in the one-dimensional case. Communications on Pure & Applied Analysis, 2017, 16 (5) : 1915-1938. doi: 10.3934/cpaa.2017093
Raffaele D'Ambrosio, Giuseppe De Martino, Beatrice Paternoster. A symmetric nearly preserving general linear method for Hamiltonian problems. Conference Publications, 2015, 2015 (special) : 330-339. doi: 10.3934/proc.2015.0330
Richard A. Norton, G. R. W. Quispel. Discrete gradient methods for preserving a first integral of an ordinary differential equation. Discrete & Continuous Dynamical Systems - A, 2014, 34 (3) : 1147-1170. doi: 10.3934/dcds.2014.34.1147
Weizhu Bao, Chunmei Su. Uniform error estimates of a finite difference method for the Klein-Gordon-Schrödinger system in the nonrelativistic and massless limit regimes. Kinetic & Related Models, 2018, 11 (4) : 1037-1062. doi: 10.3934/krm.2018040
Hector D. Ceniceros. A semi-implicit moving mesh method for the focusing nonlinear Schrödinger equation. Communications on Pure & Applied Analysis, 2002, 1 (1) : 1-18. doi: 10.3934/cpaa.2002.1.1
J. Colliander, M. Keel, Gigliola Staffilani, H. Takaoka, T. Tao. Resonant decompositions and the $I$-method for the cubic nonlinear Schrödinger equation on $\mathbb{R}^2$. Discrete & Continuous Dynamical Systems - A, 2008, 21 (3) : 665-686. doi: 10.3934/dcds.2008.21.665
Mahboub Baccouch. Superconvergence of the semi-discrete local discontinuous Galerkin method for nonlinear KdV-type problems. Discrete & Continuous Dynamical Systems - B, 2019, 24 (1) : 19-54. doi: 10.3934/dcdsb.2018104
Chunxiao Guo, Fan Cui, Yongqian Han. Global existence and uniqueness of the solution for the fractional Schrödinger-KdV-Burgers system. Discrete & Continuous Dynamical Systems - S, 2016, 9 (6) : 1687-1699. doi: 10.3934/dcdss.2016070
Pavlos Xanthopoulos, Georgios E. Zouraris. A linearly implicit finite difference method for a Klein-Gordon-Schrödinger system modeling electron-ion plasma waves. Discrete & Continuous Dynamical Systems - B, 2008, 10 (1) : 239-263. doi: 10.3934/dcdsb.2008.10.239
In-Jee Jeong, Benoit Pausader. Discrete Schrödinger equation and ill-posedness for the Euler equation. Discrete & Continuous Dynamical Systems - A, 2017, 37 (1) : 281-293. doi: 10.3934/dcds.2017012
Christopher Grumiau, Marco Squassina, Christophe Troestler. On the Mountain-Pass algorithm for the quasi-linear Schrödinger equation. Discrete & Continuous Dynamical Systems - B, 2013, 18 (5) : 1345-1360. doi: 10.3934/dcdsb.2013.18.1345
Tetsu Mizumachi, Dmitry Pelinovsky. On the asymptotic stability of localized modes in the discrete nonlinear Schrödinger equation. Discrete & Continuous Dynamical Systems - S, 2012, 5 (5) : 971-987. doi: 10.3934/dcdss.2012.5.971
Qin Sheng, David A. Voss, Q. M. Khaliq. An adaptive splitting algorithm for the sine-Gordon equation. Conference Publications, 2005, 2005 (Special) : 792-797. doi: 10.3934/proc.2005.2005.792
Daniele Boffi, Lucia Gastaldi. Discrete models for fluid-structure interactions: The finite element Immersed Boundary Method. Discrete & Continuous Dynamical Systems - S, 2016, 9 (1) : 89-107. doi: 10.3934/dcdss.2016.9.89
Liejune Shiau, Roland Glowinski. Operator splitting method for friction constrained dynamical systems. Conference Publications, 2005, 2005 (Special) : 806-815. doi: 10.3934/proc.2005.2005.806
Richard A. Norton, David I. McLaren, G. R. W. Quispel, Ari Stern, Antonella Zanna. Projection methods and discrete gradient methods for preserving first integrals of ODEs. Discrete & Continuous Dynamical Systems - A, 2015, 35 (5) : 2079-2098. doi: 10.3934/dcds.2015.35.2079
Sandra Lucente, Eugenio Montefusco. Non-hamiltonian Schrödinger systems. Discrete & Continuous Dynamical Systems - S, 2013, 6 (3) : 761-770. doi: 10.3934/dcdss.2013.6.761
Matthias Erbar, Max Fathi, Vaios Laschos, André Schlichting. Gradient flow structure for McKean-Vlasov equations on discrete spaces. Discrete & Continuous Dynamical Systems - A, 2016, 36 (12) : 6799-6833. doi: 10.3934/dcds.2016096
Jiaxiang Cai Juan Chen Bin Yang | CommonCrawl |
A simple adaptive difference algorithm with CO2 measurements for evaluating plant growth under environmental fluctuations
Hiroki Gonome1,
Jun Yamada2,
Norito Nishimura2,
Yuta Arai2,
Minoru Hirai2,
Naoki Kumagai2,
Uma Maheswari Rajagopalan2 &
Takahiro Kono2
BMC Research Notes volume 15, Article number: 48 (2022) Cite this article
The aim of this study is to demonstrate an adaptive method that is robust toward environmental fluctuations and provides a real-time measure of plant growth by measuring CO2 consumption. To verify the validity of the proposed method, the relation between the plant growth and variation in light conditions with a closed experimental system was investigated.
The proposed method was used to measure the photosynthetic rate induced by photosynthetic photon flux density (PPFD) and to evaluate plant growth under continuous and pulsed light in arugula plants. The PPFD-dependent change in photosynthetic rate was measured. And in the condition range of 200–10,000 μs pulse period and 50% duty ratio of pulsed light, there was no change in the growth rate of plants assuming the same PPFD as continuous light. These experiments showed the validity of the adaptive method in removing environmental fluctuations without precise control of temperature and humidity.
In Japan, plant factories are becoming increasingly common for commercial vegetable production, mainly because of their efficiency and flexibility in terms of commercial horticulture. Plant factories ensure highly efficient plant production by controlling parameters, such as room light intensity and wavelength spectrum to be optimal for plant production [1,2,3,4,5,6,7,8,9]. However, the enormity of the environmental parameters that need to be controlled requires the experimental evaluation over a long period of time for each kind of the plant [10]. Therefore, there is an immediate need for an environmentally robust experimental evaluation method to determine the optimal conditions for a particular plant species.
Many recent studies have considered different evaluation methods for environmental variations. Dong et al. [11] investigated the influence of environmental conditions on the efficiency of wheat production. However, their analyses were conducted after the wheat had finished growing. Chen et al. [12] and Kim et al. [13], who proposed image processing methods for evaluating plant growth, were similarly constrained because the evaluation was once again conducted after plant growth had been completed. In these methods, the evaluation takes at least several months to wait for the plant growth to finish and reach maturity.
To measure the plant growth in real-time, gas-exchange systems, which measure photosynthesis by uptake of CO2 or consumption of O2, have come a long way [14,15,16,17,18,19]. There are three main types of gas-exchange systems: open [15], semi-closed [16,17,18] and closed systems [19]. The open system continuously renews the air inside the chamber while measuring the gas concentration at the entry and exit airstreams. In most open chamber systems with air supply, there is a constant overpressure within the chamber to keep the structure inflated. Furthermore, the speed of steady air flow is often higher than that of the natural wind and a suitable ventilation rate is difficult to attain. Also, for the closed systems, humidity and temperature have to be controlled because these environmental factors affect the photosynthesis of the plant. Thus, for the gas-exchange systems, it is necessary to control the environmental factors precisely during the experiment to evaluate the plant growth. In order to find the optimal environmental conditions for many types of plants, it is necessary to create a simple, robust, and a speedy method that does not require complicated and precise equipments.
In our research, we propose an adaptive method to measure the real-time plant growth by measuring CO2 consumption that is robust toward environmental fluctuations. The method implemented through measuring a parameter called R and is defined as based on the measured CO2 values under two different conditions. The measurement involves calculating/correcting the measured CO2 value at a time point with the measurements from the nearby time points making the method to be adaptive and insensitive to external environmental fluctuations. The advantage of the method is that it is made of a simple system and can evaluate plant growth in real time, even in closed systems without precise control of temperature and humidity.
In order to verify the validity of the proposed method, we investigated the relation between the plant growth and variation in light intensity under two different types of illumination, namely, continuous and pulse lights with a closed experimental system.
Experimental system
In this study, arugula (rocket salad, Eruca sativa) was used as model plant because of its ease of cultivation (details on growing and experiment conditions given in Additional file 1: A).
Firstly, we made an experimental instrument to measure the CO2 consumption during photosynthesis of the plant. Figure 1a shows a schematic of the experimental instrument and the control system with a closed system. By using a closed container, CO2 was supplied from an attached CO2 cylinder. A Non-Dispersive Infra-Red (NDIR) CO2 sensor (TR76Ui; T&D Corporation, Matsumoto, Japan) was placed inside the closed container and connected to an external monitor to measure the variation in CO2 concentration each minute. The sensor required a stabilization period of an hour for the CO2 concentration to become stable. Therefore, the first cycle of data was discarded. The sensor is also equipped with a thermal sensitive resistor (THA-3001; T&D Corporation, Matsumoto, Japan) and can also be the thermometer in the container. Twelve plant containers of arugula leaves were placed in the closed container, and total CO2 consumption of twelve plants was measured.
Experimental instrument: a experimental system, b irradiance of light source as a function of wavelength and c circuit for controlling the PPFD and the pulsed light
A light source was positioned above the closed container at a distance of 400 mm from the plants to avoid the increase in temperature. It was comprised of 144 red and 102 blue Light Emitting Diode (LED) bulbs (LH W5AM 1T3T-1 and LH W5AM 3T3U-35, respectively; OSRAM Opto Semiconductors, Regensburg, Germany). The spectral distribution of the light source was measured by a photonic multichannel analyser (Quest X, Konica Minolta, Inc., Tokyo, Japan). Measured relative spectral irradiance of this LED light is shown in Fig. 1b. By integrating measured irradiance and calibrating it with a standard light source, Photosynthetic Photon Flux Density (PPFD) can be evaluated. With the circuit shown in Fig. 1c, the PPFD and the pulsed light can be controlled.
Principle of the new evaluation method
Firstly, Fig. 2a shows the protocol of measurement for the example under alternating light condition of continuous and pulsed illumination. CO2 consumption was measured in a day by switching light condition between continuous and pulsed light every hour. The photosynthetic rate changes depending on the carbon dioxide concentration [15]. Although the photosynthetic rate dramatically increases as the CO2 concentration increases, the increase of photosynthetic rate saturates under high CO2 concentrations.
Examples of measured data in a day as a function of hours for a rotated condition, b CO2 consumption and c algorithm to calculate the evaluation parameter R. The data of CO2 consumption from 2 h after the start of the experiment to 800 ppm were used for the evaluation to avoid due to the unstable nature of the sensor (the shaded area in b was excluded from the evaluation)
Figure 2b shows the variation of CO2 over a period of 16 h under alternating illumination protocol. In order to minimize the errors that result from the CO2 concentration dependence, CO2 measured within an extended time under alternate conditions was used as indicated in the Fig. 2b. In addition, the measured temperature and humidity during the experiment given in the supplementary information Additional file 1: Fig. S4 implies that the measured data contain environmental noises which are due to the changes in the temperature and humidity. Therefore, under continuous operation over a long term, there are larger variations in the CO2 consumption and thus growth. Instead of using sophisticated equipment to control the environment, we propose an algorithm that reduces the environment related noise from the data.
The parameter R used in the new algorithm proposed in this study is a ratio, which is defined as follows:
$$ R_{i} = \frac{{C_{E2,i} }}{{(C_{E1,i} + C_{E1,i + 1} )/2}} $$
where, C [ppm] is CO2 consumption in 1 h. Here the subscripts correspond to the sequential trials obtained under two different conditions or rotations E1 and E2. 'E1' and 'E2' mean 'Event 1' and 'Event 2', respectively. The indices in subscripts, i or i + 1 correspond to the counts of rotation of the Events 1 and 2 that occur alternatively. As can be seen from Fig. 2b, the concentration of CO2 within the closed chamber is not constant but gradually decreasing which in turn may affect the CO2 consumption itself. In order to reduce the effect of the noise, average of the data of Events E1, i and E1, i + 1 occurring at the denominator of Eq. 1 was used as a factor to compare with the data of an Event E2, i. Having a ratio of CO2 measured under two different illumination conditions (Events, E1 and E2) makes the ratio almost insensitive to the variations of CO2 over the long time. Therefore, the proposed method can compensate (or adapt to) for the local environmental fluctuations within a relatively short time of a few cycles of data acquisition. As long as there are no drastic variations in the environment, this method can robustly evaluate or correct for the effect of environmental conditions on the plant growth. Figure 2c shows the calculated example of R for the example data shown in Fig. 2b. As shown in Fig. 2c this proposed method can measure over sufficient number of rotations and thus having a fairly sufficient number of R values within a day.
Firstly, we measured the relationship between the PPFD and relative photosynthesis rate by using our adaptive algorithm with the closed system. This experiment was conducted by fixing Event 1 being the illumination under light PPFD of 553 µmol m−2 s−1, and setting the Event 2 as being under a variable PPFD. Here the relative photosynthesis rate was normalized with the PPFD of Event 1 as 1.0. The measurement was performed over one day for each of the PPFD conditions, and the error bars shown in Fig. 3a are the standard deviations for the measured data. We have found a good agreement with the reference data (Jie He et al. [10]) (details on the comparison and validity of this method in Additional file 1: B).
Measurement result using the proposed method: a the relationship between the PPFD and relative photosynthesis rate and b effect of pulsed light on the plant growth. In a red dashed line shows the approximation equation for our data. The error bars in a and b are the standard deviation of the measured data
Finally, we investigated the effect of pulse periods of light source on plant growth using our method. Figure 3b shows the evaluation parameter R against the pulse period of the light source. In this experiment, constant light was fixed and periods of the pulsed light was varied. The duty ratio for the pulsed light was set to 50%. The measurement was performed for 4 days for each of the pulse period conditions. The error bars in Fig. 3b correspond to standard deviation.
In addition, PPFD difference between constant and pulsed illuminations should be properly considered in order to evaluate the efficiency of the method. Following simple approximation equation as shown in Fig. 3a, the dashed line, corresponded to the fitted line as determined by the least square method.
$$ R_{p} (P) = - 2.17 \times 10^{ - 6} \times P^{2} + 2.80 \times 10^{ - 3} \times P + 9.65 \times 10^{ - 2} , $$
where the Rp [–] is the relative photosynthesis rate and the P [µmol m−2 s−1] is the PPFD.
The value of PPFD decreased from 113 to 75.3 µmol m−2 s−1 when switching from continuous light to pulsed light. The decrease in PPFD led to a decreased photosynthetic rate calculated to be about 0.77 [= Rp (75.3)/Rp (112)]. The averaged value of R over all experimental range of pulse periods was about 0.75. Therefore, strictly speaking, the evaluation parameter R, ratio characterizing the effect of pulsed light and continuous light, would reflect a change in photosynthesis efficiency due to the change in PPFD.
Tennessen et al. [20] using tomato leaves found that when photons were provided during 1.5 μs of pulsed light followed by 148.5 μs dark periods, the photosynthesis was the same as in the continuous provided the integrated photons during the pulsed light are equivalent. This report suggested that the photons in pulses of 100 μs or shorter are absorbed and stored in the reaction centers to be used in electron transport during the dark period. Although the types of plants and used pulse periods range are different from [20], our measurement results agree with their results. In addition, Jao and Fang [21] have investigated the effects of pulsed light on the growth of potato plantlets and energy savings by using LEDs compared to the use of conventional tubular fluorescent lamps. They showed that the pulsed LEDs at 720 Hz and 50% duty ratio with 16-h light/8-h dark photoperiod could produce the highest photosynthesis growth rate, and LEDs at 180 Hz and 50% duty ratio with 16-h light/8-h dark photoperiod would be the best choice when considering the efficiency of the yield with respect to energy consumption. Optimal light source conditions considering of photosynthesis rate and energy consumption differ depending on the types of plant [20,21,22,23]. It would be an industrial advantage if the optimal light source conditions could be investigated and realized by using pulsed light with LEDs.
Our results suggest that our evaluation method can evaluate the effect of rotated condition on plant growth by removing the environmental noises without precise control of temperature and humidity. However, it is necessary to investigate what kind of light source can make high photosynthesis rate and saving energy consumption for various types of plants. In addition, considering the experimental principle of this study, it is not possible to evaluate the results in the case where R changes rapidly within one cycle.
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
LED:
PPFD:
Photosynthetic photon flux density
Gaudreau L, Charbonneau J, Vézina L-P, Gosselin A. Photoperiod and photosynthetic photon flux influence growth and quality of greenhouse-grown lettuce. HortScience. 1994;29(11):1285–9.
Harun AN, Ani NN, Ahmad R, Azmi NS. Red and blue LED with pulse lighting control treatment for Brassica chinensis in indoor farming. IEEE Conf Open Syst. 2013;2013:231–6. https://doi.org/10.1109/ICOS.2013.6735080.
Kitaya Y, Niu G, Kozai T, Ohashi M. Photosynthetic photon flux, photoperiod, and CO2 concentration affect growth and morphology of lettuce plug transplants. HortScience. 1998;33(6):988–91.
Kozai T. Resource use efficiency of closed plant production system with artificial light: concept, estimation and application to plant factory. Proc Jpn Acad Ser B. 2013;89(10):447–61. https://doi.org/10.2183/pjab.89.447.
Morimoto T, Torii T, Hashimoto Y. Optimal control of physiological processes of plants in a green plant factory. Control Eng Pract. 1995;3(4):505–11. https://doi.org/10.1016/0967-0661(95)00022-M.
Murase H. The latest development of laser application research in plant factory. Agric Agric Sci Procedia. 2015;3:4–8. https://doi.org/10.1016/j.aaspro.2015.01.003.
Nishikawa T, Fukuda H, Murase H. Effects of airflow for lettuce growth in the plant factory with an electric turntable. IFAC Proc Vol. 2013;46(4):270–3. https://doi.org/10.3182/20130327-3-JP-3017.00062.
Shimizu H, Saito Y, Nakashima H, Miyasaka J, Ohdoi K. Light environment optimisation for lettuce growth in plant factory. IFAC Proc Vol. 2011;44(1):605–9. https://doi.org/10.3182/20110828-6-IT-1002.02683.
Coffelt TA, Nakayama FS. Determining optimum harvest time for guayule latex and biomass. Ind Crops Prod. 2010;31(1):131–3. https://doi.org/10.1016/j.indcrop.2009.09.015.
He J, See XE, Qin L, Choong TW. Effects of root-zone temperature on photosynthesis, productivity and nutritional quality of aeroponically grown salad rocket (Eruca sativa) vegetable. Am J Plant Sci. 2016;7(14):1993–2005. https://doi.org/10.4236/ajps.2016.714181.
Dong C, Shao L, Fu Y, Wang M, Xie B, Yu J, Liu H. Evaluation of wheat growth, morphological characteristics, biomass yield and quality in Lunar Palace-1, plant factory, green house and field systems. Acta Astronaut. 2015;111:102–9. https://doi.org/10.1016/j.actaastro.2015.02.021.
Chen WT, Yeh YHF, Liu TY, Lin TT. An automatic plant growth measurement system for plant factory. IFAC Proc Vol. 2013;46(4):323–7. https://doi.org/10.3182/20130327-3-JP-3017.00073.
Kim MH, Choi EG, Baek GY, Kim CH, Jink BO, Moon BE, Moon BE, Kim DE, Kim HT. Lettuce growth prediction in plant factory using image processing technology. IFAC Proc Vol. 2013;46(4):156–9. https://doi.org/10.3182/20130327-3-JP-3017.00036.
Takahashi N, Ling PP, Frantz JM. Considerations for accurate whole plant photosynthesis measurement. Environ Control Biol. 2008;46(2):91–101. https://doi.org/10.2525/ecb.46.91.
Bugbee B. Steady-state canopy gas exchange: system design and operation. HortScience. 1992;27(7):770–6.
Wheeler RM. Gas-exchange measurements using a large, closed plant growth chamber. HortScience. 1992;27(7):777–80. https://doi.org/10.21273/HORTSCI.27.7.777.
Acock B, Acock MC. Calculating air leakage rates in controlled-environment chambers containing plants. Agron J. 1989;81(4):619–23. https://doi.org/10.2134/agronj1989.00021962008100040014x.
Kimball BA. Exact equations for calculating air leakage rates from plant growth chambers. Agron J. 1990;82(5):998–1003. https://doi.org/10.2134/agronj1990.00021962008200050032x.
Mitchell CA. Measurement of photosynthetic gas exchange in controlled environments. Hortic Sci. 1992;27(7):764–7.
Tennessen DJ, Bula RJ, Sharkey TD. Efficiency of photosynthesis in continuous and pulsed light emitting diode irradiation. Photosynth Res. 1995;44(3):261–9. https://doi.org/10.1007/BF00048599.
Jao RC, Fang W. Effects of frequency and duty ratio on the growth of potato plantlets in vitro using light-emitting diodes. Hortic Sci. 2004;39(2):375–9. https://doi.org/10.21273/HORTSCI.39.2.375.
Yeh N, Chung JP. High-brightness LEDs—Energy efficient lighting sources and their potential in indoor plant cultivation. Renew Sustain Energy Rev. 2009;13(8):2175–80. https://doi.org/10.1016/j.rser.2009.01.027.
Son KH, Lee SR, Oh MM. Comparison of lettuce growth under continuous and pulsed irradiation using light-emitting diodes. Hortic Sci Technol. 2018. https://doi.org/10.12972/kjhst.20180054.
The authors thank all the participants of the current study.
This research did not receive any specific Grant from funding agencies in the public, commercial, or not‐for‐profit sectors.
Department of Mechanical System Engineering, Yamagata University, Yamagata, 992-8510, Japan
Hiroki Gonome
Department of Mechanical Engineering, Shibaura Institute of Technology, 3-7-5 Toyosu, Koto-ku, Tokyo, 135-8548, Japan
Jun Yamada, Norito Nishimura, Yuta Arai, Minoru Hirai, Naoki Kumagai, Uma Maheswari Rajagopalan & Takahiro Kono
Jun Yamada
Norito Nishimura
Yuta Arai
Minoru Hirai
Naoki Kumagai
Uma Maheswari Rajagopalan
Takahiro Kono
All authors have contributed to the article preparation and finalization. Detailed author contribution are as follows; HG: data curation, methodology, project administration, supervision, visualization, writing—original draft preparation, writing—review and editing. JY: conceptualization, methodology, project administration, supervision, writing—review and editing. NN, YA, MH, NK: investigation, validation, visualization. UMR: supervision, writing—review and editing. TK: data curation, formal analysis, investigation, methodology, project administration, supervision, visualization, writing—original draft preparation, writing—review and editing. All authors read and approved the final manuscript.
Correspondence to Takahiro Kono.
Figure S1. Plant sample: (a) Arugula before the experiments and (b) twelve plant containers of arugula placed in closed container. Table S1. Ingredients in the nutrient solution. Figure S2. Comparison of the measured CO2 consumption against the weight change of argula with red squares indicating the measurement data and the red line indicating the least squares fit with significant correlation (correlation coefficient r =0.995). Figure S3. Effect of rotation time on measurement of the evaluation parameter R. Figure S4. Examples of measured data of temperature and humidity in a day during experiment. Figure S5 Comparison of our measurement results and the reference data [10].
Gonome, H., Yamada, J., Nishimura, N. et al. A simple adaptive difference algorithm with CO2 measurements for evaluating plant growth under environmental fluctuations. BMC Res Notes 15, 48 (2022). https://doi.org/10.1186/s13104-022-05929-0
CO2 gas-exchange system
Adaptive method
Environmental fluctuations
Pulsed light | CommonCrawl |
The IllustrisTNG project is a suite of state-of-the-art cosmological galaxy formation simulations. Each simulation in IllustrisTNG evolves a large swath of a mock Universe from soon after the Big-Bang until the present day while taking into account a wide range of physical processes that drive galaxy formation. The simulations can be used to study a broad range of topics surrounding how the Universe — and the galaxies within it — evolved over time.
Motivation and Big Ideas
The standard model of cosmology posits that the mass-energy density of the Universe is dominated by unknown forms of dark matter and dark energy. Testing this extraordinary scenario requires precise predictions for the formation of structure in the visible matter, which is directly observable as stars, diffuse gas, and accreting black holes. These components of the visible matter are organized in a 'Cosmic Web' of sheets, filaments, and voids, inside which the basic units of cosmic structure - galaxies - are embedded. To test our current ideas on the formation and evolution of galaxies, we strive to create simulated galaxies as detailed and realistic as possible, and compare them to galaxies observed in the real universe. By probing our successes and failures, we can further enhance our understanding of the process of galaxy formation, and thereby perhaps realize something fundamental about the world in which we live.
IllustrisTNG Project Overview
The original IllustrisTNG project consists of three volumes, and 18 simulations in total. The individual simulations vary in their physical size, mass resolution, and complexity of physics included. Three physical simulation box sizes are employed: cubic volumes of roughly 50, 100, and 300 Mpc side length, which we refer to as TNG50, TNG100, and TNG300, respectively. The three boxes compliment each other by promoting a focus on various aspects of galaxy formation. The large physical volume associated with the largest simulation box (TNG300) enables the study of galaxy clustering, the analysis of rare objects such as galaxy clusters and provides the largest galaxy sample. In contrast, while the smaller physical volume simulation of TNG50 simulation has a comparatively limited sampling of rare objects, the mass resolution achieved in the smaller volume simulations is a few hundred times higher than the larger volume TNG300 simulation. The TNG50 volume therefore enables a more detailed look at, e.g., the structural properties of galaxies, the detailed structure of gas around galaxies, and the convergence of our physical model. The central volume simulation, TNG100, falls between these two limits. Importantly, the TNG100 volume uses the same initial conditions (adjusted for updated cosmology) as used in the original Illustris simulation, which facilities clean comparisons between the original Illustris results and the updated TNG model.
Each of the three simulation boxes has been run at three resolution levels. The highest resolution simulations employ more than 20, 10, and 30 billion resolution elements for the TNG50, TNG100, and TNG300 boxes, respectively. This leads to baryon and dark matter mass resolutions as shown in the table above. Sampling across different resolution levels within the same physical simulations enables clear analysis of the resolution dependence of our results. We employ a physical model that is deliberately constructed to not require parameter tuning as the simulation resolution is varied. So, while the mass resolution between the highest TNG50 run and lowest TNG300 run is separated by a factor of more than 10,000, the parameters employed in the model go unchanged. Comparing the results at different resolution levels helps to evaluate the performance and assess the predictive power of our model. While the details of certain results, such as galaxy stellar masses, change with resolution level, most of the results change in predictable and understandable ways that allow us both understand and correct for the finite resolution of our simulations.
Finally, all of the simulations have "dark matter only" counterparts to their "baryonic physics" runs just described. Dark matter only simulations give predictions for how the large scale structure, the clustering of galaxies, the shapes of halos, and so forth would evolve in a Universe constructed only of dark matter. These predictions are useful in part because they are relatively clean, owing to their sole dependence on the gravitational assembly of dark matter halos. However, at the same time, such models fully neglect the important, but uncertain, impact of baryons on the growth of galaxies. Having side-by-side dark matter only and full physics simulations allows us to directly compare and understand the impact that baryon physics has on a broad range of our results.
The TNG50 Simulation
TNG50 is a new class of cosmological volume simulation -- it has been designed to overcome the traditional limitation of compromising volume versus resolution, by simulating a large, fully representative cosmological volume at a resolution which approaches or even exceeds that of modern "zoom" simulations of individual massive galaxies. The simulation realizes a 50 Mpc box sampled by $2160^3$ gas cells, with a corresponding baryon mass of $8 \times 10^4 M_\odot$ (see table above). The median spatial resolution of the star-forming ISM gas is ~100-140 parsecs across cosmic time. It enables us to obtain unparalleled detail, providing a view into the structure, chemo-dynamical evolution, and small-scale properties of galaxies -- the image below shows two massive disk galaxies from the simulation to highlight its ability to resolve internal structural details such as spiral arms, bulges, and nuclear bars, together with the extremely thin scale-heights of galactic disks.
TNG50 contains roughly 100 Milky Way mass-analogs, enabling detailed comparisons to our own galaxy at z = 0. It also hosts one massive galaxy cluster with a total mass ~ $10^{14}$ solar masses, a Virgo-like analog, and dozens of group sized halos at ~ $10^{13} M_\odot$. All of these massive objects are simulated at higher numerical resolution than in any previously published study, enabling studies not only of the gaseous halos and central galaxies, but also of the large populations of their satellite galaxies. Its coverage in redshift range and galaxy stellar mass enables us to make quantitative predictions for signatures observable with the James Webb Space Telescope (JWST), as well as recent ground-based IFU instruments such as MUSE, SINFONI, and KCWI. The key science drivers of TNG50 focus not only on the present day (z = 0), but also at earlier epochs, from cosmic noon (z ~ 2) through reionization (z ~ 6).
The TNG50 simulation occupies a unique combination of large volume and high resolution, the figure above places it into context. We show TNG50 (dark blue circle) in comparison to other cosmological volumes (circles) and zoom simulation projects (diamonds) at z ~ 0. The x-axis shows an effective volume, given in terms of the total number of resolved galaxies with stellar mass greater than $10^9 M_\odot$. The y-axis shows the resolution, given in terms of the mass of the baryonic mass element. Moving towards the upper right corner represents the frontier for next-generation high-resolution cosmological volume simulations, an extraordinarily intensive computational undertaking which must take advantage of some of the largest supercomputers available.
TNG50 is the third and final simulation of the original IllustrisTNG project.
TNG Spin-off Projects
Since the completion of the original IllustrisTNG simulation suite, a number of further, spin-off projects have been undertaken using the TNG galaxy formation model. These simulations apply the TNG model to regimes, or applications, not accessible by the original TNG50+100+300 volumes.
CAMELS (2020) - a suite of many, small volume ($\rm{25 Mpc/h}$) simulations, varying cosmological and feedback related parameters, i.e. perturbations on the fiducial TNG model, among others. Focused on machine learning applications and cosmological inference.
THESAN (2021) - a study of cosmic reionization, combining the fiducial TNG model with a radiative transfer method down to redshift $z \sim 5$, offering a full radiation-hydrodynamical simulation to study the high-redshift galaxy population and the process of reionization.
MillenniumTNG (2022) - a realization of the original Millennium volume with the fiducial TNG model ($\rm{MTNG740}$), offering large-volume statistics and cosmological applications such as galaxy clustering, the galaxy-halo connection, and the impact of baryons.
TNG-Cluster (2023) - a simulation including several hundred massive galaxy clusters, $M_{\rm halo} \sim 10^{15} \rm{M}_\odot$, with the fiducial TNG model, designed to study the physics of the intracluster medium, galaxy evolution in dense environments, and cluster cosmology.
How does TNG relate to the original Illustris simulation?
The IllustrisTNG project is the successor of the Illustris simulation. It uses an updated 'next generation' galaxy formation model which includes both new physics as well as refinements to the original Illustris model. The TNG effort is a simulation campaign which:
(i) retains the fundamental approach and physical model flavor of Illustris,
(ii) alleviates many Illustris model deficiencies with respect to benchmark observations, and
(iii) significantly expands the scope with simulations of larger volumes, at higher resolution, and with new physics.
As in Illustris, we follow the coupled dynamics of DM and gas with the robust, accurate, and efficient quasi-Lagrangian code AREPO. In this approach, an unstructured Voronoi tessellation of the simulation volume allows for dynamic and adaptive spatial discretization, where a set of mesh generating points are moved along with the gas flow. This mesh is used to solve the equations of ideal magnetohydrodynamics (MHD) using a second order, finite volume, directionally un-split Godunov-type scheme. The gravitational force is calculated with a split Tree-PM approach, where long-range forces are calculated from a particle-mesh method, and short-range forces are calculated with a hierarchical octree algorithm. The scheme is quasi-Lagrangian, second order in both space and time, uses individual particle timestepping, and has been designed to efficiently execute large, parallel astrophysical simulations on modern supercomputer architectures.
On top of this numerical framework, the TNG galaxy formation model includes the key physical processes needed to study the formation and evolution of galaxies:
Microphysical gas radiative mechanisms, including primordial and metal-line cooling and heating with an evolving background radiation field.
Star formation in the dense interstellar medium.
Stellar population evolution and chemical enrichment following supernovae Ia, II, and AGB stars, individually tracking elements: H, He, C, N, O, Ne, Mg, Si, and Fe.
Stellar feedback driven galactic-scale outflows.
The formation, merging, and accretion of nearby gas by supermassive blackholes.
Multi-mode blackhole feedback operating in a thermal 'quasar' mode at high accretion states, and a kinetic 'wind' mode at low accretion states.
The amplification of cosmic magnetic fields from a minute primordial seed field at early times.
Scientific Goals
The goals of constructing such a large and ambitious simulation suite are to shed light on the physical processes that drive galaxy formation, to understand when, why, and how galaxies are evolving into the structures that are observed in the night sky, and to make predictions for current and future observational programs to broaden and deepen our understanding of galaxy formation. These goals are achieved not in a single step, but rather through a series of extended analyses of the simulations, each targeting specific science questions. Some of the first questions that have been specifically addressed using the TNG suite are characterizing the stellar masses, colors, and sizes of galaxies, understanding the physical origin of the heavy element (metallicity) distribution in galaxies and galaxy clusters, drawing connections between the presence of dynamically important magnetic fields and the observed radio emission from galaxies, and the clustering signal of galaxies and matter on large scales. Subsequent studies are expected to canvass an even broader range of topics.
The core power of simulation suites — such as TNG — is that the depth of information on the simulation is much deeper than what is accessible observationally alone. For example, one of the most important features of simulations is their access to the time domain. While observations of galaxies can be carried out at different redshifts to give census data about galaxy populations at different stages of their evolution, the timescale over which galaxies evolve (millions, or even more, years) is simply far too long to follow directly observationally. Instead, in observations, bold assumptions are required to infer how census data at different observational epochs leads to a physical picture of galaxy evolution. In stark contrast, simulated galaxy populations in the TNG suite can be directly tracked in time so that an unambiguous and clear picture of their evolutionary history can be pieced together. This facilitates, e.g., clear predictions for the size evolution of galaxies, which would be difficult to directly obtain observationally. Using the simulation's natural access to the time domain can help guide observational efforts for sussing out physical evolution trends within their multi-epoch observational data.
In addition to the time domain, the simulations also provide unambiguous predictions for physical quantities that might be difficult to derive observationally. For example, observational gas phase or stellar metallicity measurements are complicated derivative quantities that arise out of spectral energy distribution (SED) fitting, or line emission fitting. While significant effort has been put into refining these measurement procedures to the highest possible level of accuracy, significant systematic uncertainty still surrounds the observational measurement methods. In contrast, the TNG simulations make clear and direct predictions for these quantities.
The TNG simulations can therefore be used in concrete ways to build physical models of galaxy formation as well as to aid in the physical interpretation of observational data. The direct knowledge from the simulation can be used to detail the shape of galaxy stellar profiles, examine the color evolution of galaxies, or even characterize the evolution of the baryon acoustic oscillation signal in galaxy clustering data. The central goal of the TNG project is to create a broad tool that will further our understanding of galaxy formation both through direct analysis of the simulation, as well as through assisted interpretation of observational data.
Early Science Results from TNG100 and TNG300
The IllustrisTNG project addresses a number of open questions ranging from galaxy formation and evolution to galaxy clusters and the large scale structure of our Universe. Its wealth of data will be used to answer a large number of scientific questions in these fields. To exemplify how TNG can deepen our understanding of the Universe we summarize some of the first results of the simulations here.
Galaxy formation
One of the main motivations for the IllustrisTNG project is to deepen our understanding of how galaxy formation works. The simulation follows the evolution of dark matter and gas from the early universe until today, self-consistently modeling gravitational interaction, as well as the (magneto-)hydrodynamical interactions of the gaseous component. On top of this, TNG models radiative cooling of gas and the formation of stars and supermassive black holes as well as their feedback and chemical enrichment effects on the host galaxy. This implies that we follow all the main ingredients relevant for galaxy formation in a single simulation, and are therefore able to assess the importance of all these components and their complex interplay on various observable properties.
One of the key properties of a galaxy is its stellar content and how it is distributed. In particular, the distribution of stars in galaxies of different mass is very different: Milky-Way size galaxies mainly build up their stellar content via gas cooling and in-situ star formation. This leads to a centrally peaked stellar mass profiles with an average power-law slope of -5, which implies that on average more than 90% of the stellar mass of such a halo is located within 10% of the halos' virial radius. The stellar content of the most massive, galaxy cluster size, halos however, is in large parts (80% of the final stellar mass) brought to them via merging with already existing lower mass galaxies. The different physical behavior of stars compared to diffuse gas leads to a significantly different radial distribution of stars, in particular a more pronounced diffuse stellar component. The average power-law slope of the stellar density profile of such a galaxy cluster is around -3, which implies significantly more stars in the outer regions: only about 50% of the stellar mass of galaxy clusters is located within 10% of the virial radius, while the other half is a diffuse component outside this radius and observationally very difficult to detect. However, modeling this correctly in observations is crucial to infer accurate total stellar masses for these systems. To enable this in future observations, we provide analytical fitting functions inferred from TNG to correct for the undetectable components in Pillepich et al. (2017), together with a detailed quantitative analysis of the stellar content of massive halos.
Apart from the total mass of the stars and their distribution, the color at which they collectively shine is one of the most evident observational properties of galaxies. The overall color of a galaxy depends on the collective properties of all stars, in particular on their age and chemical composition. On top of this, the apparent color of a galaxy can change due to the presence of dust, both, close to the stars or in the interstellar medium. In Nelson et al. (2017), we produce mock observations of our simulated TNG galaxies, taking all these effects into account, and compare the resulting color distribution with distributions of observed galaxies from the Sloan Digital Sky Survey (SDSS). The overall agreement of simulated and observed samples is unprecedented. In particular, the g-r color distribution functions in different stellar mass bins are in excellent agreement with observations, with small second-order discrepancy in the slope of the red galaxy population in the g-r vs stellar mass plane, possibly due to aperture effects. This indicates that the underlying physical quantities, i.e. the mean stellar age and stellar metallicity, are also well-reproduced in the simulations, though care has to be taken when comparing these properties directly to observationally inferred ages and metallicities.
The main reason why some galaxies are red is that their stellar populations are very old and all massive stars, which would contribute to the blue light, have already ceased. This leaves only moderately massive and low-mass stars behind, which cause a red appearance. From a galaxy formation point of view, this can only be achieved if there is no formation of new stars in the galaxy. The gas residing in these galaxies however, has a natural tendency to cool and collapse via gravitational instabilities, which ultimately will lead to the formation of new stars. The key ingredient in the simulation to prevent this from happening are feedback effects from supermassive black holes, which become very efficient at a specific mass scale and drastically reduce star-formation in the massive galaxies. In TNG, the transition to a low accretion state and an associated, highly efficient kinetic wind feedback is key for a sharp transition and the buildup of a bimodal galaxy color distribution. The transition time of galaxies evolving from blue, star-forming to red, quiescent galaxies varies significantly from galaxy to galaxy, with a median of 1.6 Gyr. The high scatter in transition time originates from the diverse paths that individual galaxies take through the color-stellar mass plane, which, alongside with a detailed analysis of the low-redshift galaxy color distribution, is presented in Nelson et al. (2017).
Another very important property of stars inside galaxies are the abundances of individual chemical elements, which give important insights into how heavy elements in the Universe form. In TNG, we follow three main enrichment channels that return heavy elements to the interstellar medium: Enrichment from core-collapse supernovae (SNcc), enrichment from type Ia supernovae (SNIa) and enrichment from stellar winds from asymptotic giant branch (AGB) stars. Each of these enrichment channels releases a different chemical composition of elements to the surrounding interstellar medium, which, for SNcc and AGB star enrichment also depends on the properties of the stellar population they originate from. In Naiman et al. (2017), we study the abundance of magnesium (Mg) and iron (Fe) as a proxy of the relative enrichment contribution of SNcc to SNIa as a function of the Fe over hydrogen (H) fraction, a proxy for the overall enrichment. TNG broadly recovers the observed trend, however with an offset normalization, which indicates that the SNIa rate used in TNG might be too low by a factor of a few.
Looking at very rare elements, such as Europium (Eu), which is assumed to be created in neutron star — neutron star mergers, we are able to obtain important insight about the mixing of elements in the interstellar medium during phases of intense star formation. The most Eu enriched gas (high Eu/Fe abundances) originates from starbursts around redshifts 2-4, at the peak of the cosmic star formation history. Interestingly, however, the Eu/Fe ratio does not show a trend with assembly history or present-day galactic properties of the host galaxy in Milky Way-sized halos, indicating that the precise assembly history does not influence the Eu enrichment. The simulated Milky-Way sized galaxies do, however, show a negative trend of the Eu/Fe ratio with star formation rate, suggesting that an increased level of star formation reduces the Eu/Fe enrichment. Possible mechanisms causing this trend, as well as detailed enrichment distributions, are presented in Naiman et al. (2017).
Galaxy Clusters
Galaxy Clusters are the largest collapsed objects in our Universe. They do not only contain hundreds of member galaxies (shown above), but also a large reservoir of dilute, hot intra-cluster gas. Observations of this gas component yielded a number of puzzling results which up to now theoretical astrophysicists struggle to explain. One of these results is the existence of extended radio-emission from some galaxy clusters, which is a sign for the presence of magnetic fields and high-energy electrons in these systems. As IllustrisTNG models the presence and amplification of primordial magnetic fields during the collapse of structure in the early universe, we are able to self-consistently study the magnetic properties of the gas in different environments: in low-density regions and cosmic filaments, the magnetic fields closely follow the expectation from adiabatic compression of the primordial field during structure formation, keeping the orientation. In collapsed objects however, where the density is significantly higher, there is an efficient amplification of the magnetic field to about 5 orders of magnitude above the value expected from adiabatic compression alone. The topology of the magnetic field in these regions is consequently strongly correlated with the topology of the gas-flows: the magnetic field in disc galaxies is ordered and disc-like with field strengths of about 10 micro-Gauss, while the magnetic field in elliptical galaxies is unordered, reflecting the chaotic gas motions in these systems.
Applying a simple model for relativistic electrons in these systems, it is possible to derive a radio-flux for the simulated galaxy clusters and compare them to observations. In Marinacci et al. (2017) we created these mock radio observations for resolution and sensitivity parameters of VLA, LOFAR, ASKAP and SKA observations. From the simulations it becomes clear that current observations are just able to probe the most radio-luminous objects and the increased sensitivity of future telescopes will open up the possibility to probe the radio emission with far better statistics, which is essential to deepen our understanding of the magnetic fields and relativistic electron population in galaxy clusters. The simulated radio-emission in TNG is broadly consistent with observations, and the clusters obey observed scaling relations between radio and X-ray emission and as well as between radio emission and the Sunyaev-Zel'dovich Compton parameter. However, a more detailed analysis shows some discrepancies, possibly highlighting the need to treat populations of high-energy electrons in these simulations more accurately to provide an adequate theoretical counterpart to future radio telescopes such as SKA. More details on the magnetic field properties and the radio emission in TNG can be found in Marinacci et al. (2017).
Our understanding of the large scale structure, the Cosmic Web and the evolution of the Universe as a whole has made enormous progress in the past decades. The Lambda-Cold-Dark-Matter theory of cosmology is very successful in explaining observations of the primordial chemical abundances, the cosmic microwave background radiation, the expansion history of the universe and the statistical properties of large scale structure. However, there is still a fundamental lack of knowledge, for example on the nature of the so-called dark energy, which drives the accelerated expansion in our Universe. To pin down its physical properties, large observational efforts are taken to further constrain its effects and by this narrow down the number of possible models. Among these efforts, there are large galaxy redshift surveys like EUCLID, DES or eBOSS which will map out the large-scale structure of our universe with unprecedented accuracy. These missions, however measure only the stellar light component of the universe, which, to some extent is not completely equivalent to the overall matter distribution, which is the relevant quantity for cosmological measurements. With TNG, in particular TNG300, hydrodynamical simulations have reached a sufficient volume and resolution to study clustering of all matter components in the Universe on the relevant scales. This means that these kind of simulations have come to a point where they can complement other methods traditionally used in this field, such as for example semi-analytic models, sub-halo abundance matching and halo-occupation distribution approaches. Among these different types of modeling, the TNG simulations provide the most complete and self-consistent approach to following the emergence and evolution of the large-scale structure in our Universe, and can therefore test the assumptions made in the other approaches. Alongside with this, TNG is able to inform about observational biases depending on sample selection and as a function of scales. For example, TNG300 is just large enough to assess the biases at the scales of the baryonic acoustic oscillations (BAO). Its location sensitively depends on cosmological parameters, but only very weakly on galaxy formation physics, making it an ideal probe of cosmology. In Springel et al (2017), we find that the location of the BAO feature can vary up to 6%, depending on tracer used, but can be corrected for by template fitting.
Another effect only hydrodynamical simulations are able to predict self-consistently is the back reaction of the baryonic components on the underlying dark matter distribution, as well as on the overall matter distribution: we find that its impact on the total matter power spectrum becomes >10% at scales with k > 5 h/Mpc at redshift zero, but an effect on the percent level to far larger scales. A detailed analysis on the matter and galaxy clustering is presented in Springel et al. (2017).
Future directions?
The emergence of cosmological hydrodynamical simulations as powerfully predictive theoretical models was embodied in recent projects (from 2013 - 2016) such as Illustris, EAGLE, Horizon-AGN, Magneticum, and MassiveBlack-II. In concert with other large-volume efforts these programs have convincingly demonstrated that hydrodynamical simulations of structure formation at kilo-parsec spatial resolution can reasonably reproduce the fundamental properties and scaling relations of observed galaxies.
In TNG we push these types of simulations to new limits - in size, resolution, and physical fidelity. However, for cosmology, the TNG300 volume is still relatively small. At the other extreme, for studying the properties of the dense and cold phases of the interstellar medium, the resolution and physical model assumptions of TNG50 are still relatively coarse. Therefore, the future steps for such simulations encompass a combined approach of improved and additional physics together with ever more ambitious numerical realizations which continue to take advantage of the increasing computational power of the world's fastest supercomputer systems. | CommonCrawl |
The history of rescuing reinforcement and the preliminary study of preventive protection system for the cliff of Mogao Grottoes in Dunhuang, China
Xudong Wang1,2,5,
Yanwu Wang ORCID: orcid.org/0000-0002-5290-16231,2,3,4,
Qinglin Guo3,4,
Qiangqiang Pei3,4 &
Guojing Zhao4,6
Based on the research results and practical engineering experience pertaining to the protection and reinforcement of the cliff of the Mogao Grottoes in Dunhuang, China, this paper presents a method that is mainly based on the analytic hierarchy process (AHP) to evaluate the preservation state and risk of the Mogao cliff, a means that numerical simulation was conducted to quantitatively evaluate the stability and effectiveness of protective measures for the Mogao cliff, a set of reinforcement methods which integrate the key protection techniques based on propping, anchoring, grouting, and anti-weathering and the quality control measures based on assessing their effectiveness for surrounding rocks of the grottoes, and a set of methods for monitoring and warning based on risk theory throughout the entire reinforcement process. The four above-mentioned techniques complement and support with each other, and every stage is based on research. Additionally, the protection and reinforcement concepts implemented at the Mogao cliff are summarized in this paper. Finally, preventive protection and reinforcement techniques for sandy conglomerate grottoes were established based on the research, evaluation, calculation, and monitoring. The techniques presented in this paper can be used as a theoretical foundation and provide technical guidance for the protection and reinforcement of similar cultural heritage sites all over the world.
A grotto is a Buddhist art palace integrating wall paintings, painted statues, and architectures, and also is an important part of heritage sites and a historical testimony of the integration of religion and culture. Moreover, grottoes are of great historical, scientific, artistic, social and cultural value. A number of grottoes exist all over the world, such as the Mogao Grottoes, Longmen Grottoes, and Yungang Grottoes in China, Ajanta Caves, Ellora Caves, and Elephanta Caves in India, and Bamiyan Buddhas in Afghanistan. However, the cliff of grottoes suffers from the influence of fissures [1, 2], collapses [3, 4], falling rocks [5, 6], seepage [7, 8], weathering [9, 10], and vandalism [11]. Hence, researchers and managers carry out a substantial amount of work to protect and strengthen the cliff of such grottoes. For examples, the rockfall of Saptashrungi Gad Temple in India was reinforced by re-sloping, scaling and trimming of loose rock blocks, draping mesh, and anchoring rock bolts [12]. The stabilization of Ajanta Cave above Cave 1 to 5 was undertaken by a concrete retaining wall [13]. The seepage problem of Sokkuram Grotto in South Korea was solved after 3 times of protection and reinforcement which include the pouring of the concrete dome from 1914 to 1916, the waterproofing of the concrete dome in 1917, and the construction of double domes and wooden cave eaves in 1961 [14]. The cliff of Yungang Grottoes in China was reinforced with epoxy resin grouting, anchoring and cave waterproofing [15], the cliff of Maijishan Grottoes was protected by Spraying concrete with steel mesh, and anchoring [16], the cliff of Mogao Grottoes and Yulin Grottoes was protected and reinforced by propping, anchoring, grouting, and anti-weathering [17, 18]. In these practices and experience of reinforcement for the cliff of cave temple, the Mogao Grottoes is one of the most important and famous protection projects.
The Mogao Grottoes are located at the eastern foot of Mingsha Mountain, on the west bank of Daquan River, facing Sanwei Mountain in the east, which is 25 km away from Dunhuang in Gansu province, China. Since the first cave was built in 366 AD, there were 735 caves excavated in the north–south cliff with a height of about 40 m and a length of 1.7 km. There are 45,000 square meters of splendid wall paintings, and over 2000 painted statues in caves of south section, and about 200 caves are used for the monks' daily life, which are located in the north section of the Mogao (Fig. 1).
The position of the Mogao Grottoes
The Mogao cliff has been deteriorating since its formation under the structural influence and strength of a sandy conglomerate, which is cemented by muddy calcium or weak muddiness. Additionally, the cliff has various typical issues, such as cracks, partial dangling, and weathering. Since 1956, many domestic and foreign scholars have developed the theoretical foundation and accumulated a substantial amount of practical experience with regard to the conservation of the Mogao cliff. This experience includes the assessment of the conservation state, analysis and calculation of stability, evaluation of the reinforcement effect, and protective monitoring. However, most of these studies have only investigated one aspect of the protection and reinforcement measures for the cliff of the Mogao Grottoes, and systematic investigation has not been carried out with regard to the cliff characteristics, protection and reinforcement techniques, as well as quality control and effectiveness evaluation. Based on research results and data obtained by protection projects, protection and reinforcement techniques for the conservation of the Mogao cliff have been established in recent years. These techniques include a method for evaluating the conservation state and risk of the rock mass, quantitative analysis methods for evaluating the state before and after the reinforcement of the cliff based on the numerical simulation, key protection and reinforcement techniques including the propping of the roof, anchoring, grouting and anti-weathering, and a method based on risk theory for monitoring and warning throughout the entire protection process. These techniques are expected to be used in similar projects for the conservation of grottoes heritage all over the world.
The protection and reinforcement of Mogao cliff
The history and effectiveness assessment for the protection and reinforcement of Mogao cliff
Before 1944, the Mogao Grottoes had been left unmanaged and unprotected for an extensive period of time. Many fissures and collapse incidents have occurred on the Mogao cliff under the influence of earthquakes, rainfall, temperature, sand storms, and humidity (Fig. 2). During its existence, this heritage site has been severely threatened by these issues. The protection and reinforcement of the Mogao Grottoes have been carried out continuously since the establishment of the Dunhuang Institute of Art in 1944. Early on, a wall was constructed and sand cleaning was carried out. Subsequently, the cliff was protected and reinforced, and monitoring and warning systems are installed presently. The protection and reinforcement history of the Mogao cliff has gone through the following major stages since the first cliff body reinforcement in 1956.
In 1956, the experimental reinforcement of the cliff between the 248th Cave and the 261st Cave was mainly implemented by roof-propping using stone masonry pillars and a wooden walkway on the roof-propping [19]. However, there were some problems regarding the shape of the reinforcement and structures owing to the lack of cave reinforcement experience.
In 1963–1966 and 1984, the cliff with a length of 800 m and the 400 caves of the Mogao Grottoes were reinforced in four phases of protection projects for the conservation of the Mogao cliff and caves. The reinforcement measures included withstanding and propping using a beam-column structure for the dangerous rock mass, resisting collapse using a gravity retaining wall, and cleaning by removing the dangerous rock mass at the top of the slope of the Mogao cliff [19]. These measures have been in place since they were implemented, and their good reinforcement effects have provided a solid foundation for the protection and utilization of the Mogao Grottoes.
In 1999, the cliff between the 248th Cave and the 261st Cave was reconsolidated by removing the dangerous rock mass and repairing the wooden walkway. Other measures implemented in this section include chemical strengthening by PS-spraying (Potassium Silicate Spraying with modulus 3.8–4.0) the barely weathered rock of both the erect cliff and the sandy conglomerate layer on the slope, and strengthening the cliff fissures by grouting [19, 20]. Thus, more scientific reinforcement measure was applied in this section, and the reinforcing system of this section was improved.
In 2000–2003, the flooding threat for the Mogao Grottoes was eliminated by the Daquan River flood control project. Additionally, the cliff in the northern section of the Mogao Grottoes was reinforced by anchoring and PS-spraying. Hence, the historical conditions of the site have been retained [20].
In 2010–2011, the dangerous rock masses in the southern section of Mogao, which had not been reinforced by previous protection projects, but reinforced by anchoring, grouting, and PS-spraying. The measures were evaluated by considering the wind erosion and rain erosion. At this stage, with the introduction of wind erosion, rain erosion, and other factors, the effectiveness of the protection and reinforcement measures was evaluated. Thereafter, the protection and reinforcement of the Mogao cliff formed the completely closed loop of research-calculation-reinforcement-evaluation.
Since 2013, the dangerous rock mass close to the 196th Cave in the south section, and the B71Cave and the B65 Cave in the north section, were reinforced. These daily protective maintenance tasks have improved the protection and reinforcement system of the Mogao Grottoes.
Preservation state of Mogao Grottoes before reinforcement
In summary, the protection and reinforcement of the Mogao cliff can be divided into the experimental reinforcement phase since 1956, the rescuing reinforcement phase from 1963 to 1984, the scientific protection phase from 1999 to 2011, and the daily maintenance phase since 2013. Additionally, with the implementation of the National Science and Technology Support Program 'Research and Demonstration of Key Technologies for Risk Pre-Control of World Cultural Heritage Sites', the protection and reinforcement of the Mogao cliff are currently in the stage of preventive protection at the same time [18, 21].
The technique and concept for the protection and reinforcement of Mogao cliff
By reviewing the protection and reinforcement history of the Mogao cliff, it is evident that the main means of reinforcement for the Mogao cliff are 'withstanding', 'propping', 'resisting' and 'cleaning' at the stage of experimental and rescuing protection. The main purpose of these measures is to prevent the cliff from collapsing and causing the destruction and disappearance of the Mogao Grottoes. Considering China's national conditions and technical capabilities, the reinforcement concepts at this stage, and the current conservation state of the Mogao Grottoes, the reinforcement measures provide good conditions for the long-term preservation and sustainable usage of the Mogao Grottoes.
At the scientific protection phase, with the improvement of relevant national policies, the technical strength of the industry, and the changing and refinement of the personnel structure and cooperation scope of the Dunhuang Academy, the conservation of the Mogao cliff used the system of 'anchoring, grouting, and anti-weathering' based on scientific research with regard to PS, calcined ginger nuts, and mature anchoring technology. Additionally, the technology of the protection system is more in line with the requirements dictated by the Principles for the Conservation of Heritage Sites in China and related laws and regulations.
At the daily maintenance and preventive protection phase, more work priorities and processes are focused on inspection, monitoring, warning, and maintenance.
Since 1956, concepts regarding the protection of cultural relics and the related technical requirements in China, and also the protection and reinforcement of the Mogao cliff, have been gradually investigated and integrated into a comprehensive technological system of protection and management. This system includes the concept of retaining the historical conditions, roof/propping/grouting/anti-weathering reinforcement technology, and a management plan for monitoring, warning, and maintenance.
On the other hand, the stability of the middle and lower parts of the Mogao's cliff was effectively improved after the reinforcement from 1956 to 1984, but the protection and reinforcement in the rescuing protection stage is relatively lacking in research and evaluation. With the deepening of understanding of the properties of Mogao's cliff and the maturity of anchoring technology in the protection of cultural relics, anchoring technology was used for the reinforcement of the cliff in the upper part from 2010 to 2011, so the stability calculation based on the assessment of diseases and the evaluation of physical properties of Mogao's cliff has been started in the reinforcement of cliff in this stage. Besides, the monitoring of the cliff was also introduced with the using of the 3D scanner in the cultural heritage field and the establishing of the warning monitoring system for Mogao Grottoes. Hence, evaluation or implementation of the cliff's preservation status, stability calculation, monitoring, and protection and reinforcement have all appeared in different stages of the cliff protection and reinforcement history, and a framework system for cliff protection and reinforcement has also been initially established. However, the internal correlation and mechanism of each aspect have not been systematically studied, because these works have carried out for over 60 years since 1956. Consequently, the protection and reinforcement system of the Mogao Grottoes will be thoroughly reviewed in this paper based on the protection research, evaluation, monitoring, and reinforcement of the Mogao Grottoes over the past 60 years.
Evaluation of the conservation state of Mogao cliff
Since the end of the twentieth century, the conservation of cultural heritage sites, and relevant protection techniques and materials, have improved along with rapid development of China's economic [22,23,24,25,26,27]. The geological conditions, physical and mechanical properties, disease types and characteristics, and the preservation state of the Mogao cliff have been systematically investigated and evaluated. Additionally, the evaluation and research results have provided an important foundation for the analysis of basic cliff properties, theoretical calculations, effectiveness of reinforcement measures, evaluation of reinforcement effects, and protective monitoring in the process of protection and reinforcement.
Evaluation of the cliff's geological conditions
The Mogao Grottoes are excavated in the cliff of the valley located at the east of Mingsha Mountain. There is a similar geomorphic unit for the horizontal top of the cliff, and the Daquan valley lies in front of the cliff. Only a slight difference exists on the slope at the edge of the cliff. Additionally, the influence mechanism of 12 faults around the Mogao area is approximately similar throughout the cliff [28]. Thus, the important considerations in the analysis of the geological conditions for the Mogao cliff include stratigraphic lithology, hydrogeological conditions, and unfavorable geological phenomena.
Investigation of stratigraphic lithology
Previously, based on the quaternary geological studies of the Hexi Corridor, research results for the Dunhuang Basin, survey results for the Mogao cliff, and test results for cliff specimens, various scholars [29, 30] have divided the Mogao cliff and its underlying strata into the Lower Pleistocene Yumen Formation sandy conglomerate layer (Q1), Middle Pleistocene Jiuquan Formation sandy conglomerate layer (Q2), and Upper Pleistocene Gobi Formation conglomerate layer (Q3). For these strata, only the Yumen Formation sandy conglomerate layer is exposed in the vicinity of the seismic platform in the upper Daquan River. The caves are excavated in the Jiuquan Formation, and the Gobi Formation is mainly exposed at the top of the Mogao cliff [1, 31]. The different Mogao strata are shown in Fig. 3.
Lithological profile in front of Mogao Grottoes [1, 31]
The caves of the Mogao Grottoes and Yulin Grottoes are excavated in the erect cliffs comprising the Jiuquan sandy conglomerate. Hence, various scholars have referred to the Jiuquan conglomerate as the 'cave stratum' [30]. However, the lithological and mechanical properties of the formation directly affect the stability of the caves and their cliffs. Therefore, the investigation of the lithological properties is key for the conservation of the Mogao cliff. Wang [32] measured and mapped the comprehensive stratigraphic profile and analyzed cliff specimens from the south side of the Nine-storey Building of the Mogao Grottoes. Then, he divided the cliff into four engineering geological rock groups based on the lithological characteristics and engineering properties of the strata. Figure 4 shows the profiles categorized as A, B, C, and D from top to bottom. To further understand the internal strata of the Mogao cliff, Yang [33] dug a 21-m-deep exploration well in the rock mass at the west side of the Nine-storey Building. The distance from the Nine-storey Building to the well is 150 m, and Yang extensively catalogued the well (Fig. 5). And the position of lithological profile, stratigraphic profile, and the well mentioned above is shown in Fig. 6.
Comprehensive stratigraphic profile of cliff in the south side of the Nine-storey Building [1, 31]
Stratum profile of exploratory well [33]
The position of lithological profile, stratigraphic profile, and the well [1, 31, 33, 35]
However, this was mainly based on point surveys using the profile and well. Therefore, Guo [34] conducted an investigation deeper into the strata of the upper and middle Mogao cliff, and divided the formation of different sections of the Mogao cliff into five types based on a large number of on-site investigations and mapping. These five types are the slope cliff without a sandy stratum, stepped cliff without a sandy stratum, erect cliff without a sandy stratum, stepped cliff with an obvious sandy stratum, and slope cliff with an inconspicuous sandy stratum (Fig. 7).
Distribution map of five types of strata in southern district of Mogao Grottoes [34]
The above-mentioned studies did not describe the distribution state of the Mogao cliff in a clear and comprehensive manner. However, all of them have played an important role in the protection and reinforcement of the Mogao Grottoes. To determine the distribution characteristics of the Mogao cliff strata in the greater district, we investigated the area behind the Mogao cliff using a ground penetrating radar and surface wave exploration (Fig. 8).
Penetration result of surface wave exploration at the top of Mogao Grottoes cliff
From the research results pertaining to the geological conditions of Mogao, the division of the strata and engineering rock groups of the Mogao cliff is becoming clearer and more comprehensive with the improvement of survey techniques at different stages. Most studies have used these research results as foundation data for the protection of the Mogao cliff. Moreover, although the Mogao strata have simple lithology, their distribution is complex.
Research on the state of water vapor in Mogao cliff
The Mogao rock mass is a sandy conglomerate with poor cementation, therefore, its strength decreases after the cement is leached and eroded by water. Moreover, water or water vapor can cause salt dissolution in the rock and migration with moisture transport, and the salt content is enriched in a part of the rock or on the surface of the wall paintings in the caves. Eventually, the rock properties will be changed and the wall paintings will be destroyed by efflorescence, detachment, and other diseases. Therefore, elucidating the distribution and transport law is very important for protecting and reinforcing the Mogao Grottoes.
Yang [33] excavated and sampled the exploration well at the top of the Mogao cliff, and analyzed the moisture state of the rock mass within a distance of 20 m from top of the well. Guo [36] systematically expounded the distribution and source of water and salt in the rock mass by monitoring the temperature and humidity conditions at different depths, and using electrical resistivity tomography (ERT) on the western bare-wall of the 98th Cave of the Mogao Grottoes. Chen [37] discovered the water vapor's transport channel related to the structure of the rock mass using three-dimensional (3D) ERT under different scales on the western bare-wall of the 108th Cave of the Mogao Grottoes. Guo [38] determined the distribution and source of water in the Mogao rock mass using ERT to measure the resistivity for the rock surrounding the grottoes, before and after rainfall, at the tree belt in front of Mogao and at the top of the cliff [38]. The results obtained by these studies have provided an important theoretical foundation for the protection and reinforcement of the Mogao cliff.
Evaluation of the physical and mechanical properties of the Mogao cliff
To understand the composition, structure, and properties of the Mogao cliff, it is important to investigate its physical and mechanical properties. Such investigations can provide the basic parameters to consider for protection and reinforcement. Hence, Wang [1] and Zhang [39] used X-ray diffraction, the wax sealing method, and the drying method to determine the mineral composition, density, and moisture content of each engineering rock group. Additionally, they used the point load method to test the strength of each group, and investigated the wave velocity of each group using the seismic method and acoustic wave testing. Fu [40], Shi [41], Guo [34], and Pei [42] obtained the physical and mechanical parameters and structure of each rock group using different methods such as laboratory testing and empirical formula calculations. Table 1 lists the physical and mechanical parameters of each engineering rock group and structure of the Mogao Grottoes, based on the results obtained by the studies mentioned for the physical and mechanical properties of the Mogao cliff. Besides, the physical and mechanical properties of Maijishan Grottoes cliff, Beishiku Temple cliff, and Yungang Grottoes cliff are also listed in Table 1. As can be seen from the table, the physical and mechanical properties of different rock groups in Mogao Grottoes are quite different, and it is also different from the properties of Maijishan's sandy conglomerate, Beishiku Temple's and Yungang Grottoes' sandstone.
Table 1 Physical and mechanical parameters of different cave temple cliff
Evaluation and survey of cliff diseases
Since the cliff was created after the river cut into the alluvial floodplain and the caves were excavated, the Mogao cliff has been deteriorating and has incurred different types of diseases (such as fissures, collapse, weathering, and gullies) under the influence of natural factors (rainfall, snowfall, sun light, wind, and so on) and natural disasters (floods, earthquakes, sandstorms, and so on). The different sections have different stability because the landform has been changed by these diseases. The excavation of the caves did not only cause the redistribution of the internal stress in the cliff but also caused the adverse effect of cave spacing and thin-roof caves, which influences the stability. Future natural disasters, such as earthquakes, floods, and sandstorms, will also adversely affect the stability and development of cliff disease. Therefore, it is of great significance to evaluate the development of diseases and other cliff threats and their effect on the stability and safety of cultural relics.
Many studies have systematically investigated the development of diseases on the Mogao cliff. Wang [20] investigated the weathering of the Mogao cliff and divided the weathering diseases into nine types: cliff-faced conglomerate weathering, sandstone weathering, gentle slope rock weathering, dangerous rock, thin-roof cave, rainfall seepage, existing engineering issues, stratum salt damage, and sand damage. Subsequently, he measured these diseases in detail and marked them on the geological disease distribution map on the scale of 1:200. Shi [49] and Zhang [50] investigated the diseases of the Mogao Grottoes from the perspective of engineering geology and found that these diseases in the northern part of the Mogao Grottoes mainly include cracks, collapses, scarps, dangerous rocks, the collapse of cave roofs, weathering, sandstorm damage, and flood damage. Wang [51] investigated the causes, sources, movement mechanism, and risk of stones falling from the Mogao cliff through field experiments and motion track simulations. Moreover, various studies have investigated the threats faced by Mogao, such as earthquakes [52, 53], floods [54, 55] and sandstorms [32].
Based on these studies, we re-investigated the conservation state of the Mogao cliff with consideration to the influences and effects of threats posed by natural disasters, natural factors in the surrounding environment, and human factors in the process of construction and utilization. We found that the Mogao cliff is exposed to three types of threats from natural disasters: earthquakes, floods, and sandstorms. Additionally, the cliff is influenced by natural factors such as rainfall, snowfall, sun light, and wind, and is also threatened by human factors, including the construction of caves and tourism. Furthermore, we found that four types of diseases, namely, fissures, collapse, weathering, and gullies (Fig. 9), with 15 sub-categories, have developed on the cliff, and that the cliff faces problems such as internal stress redistribution, thin-roof caves, cave spacing, and vibration from tourists. Finally, the cliff has developed three shape types: erect cliff, stepped cliff, and sloped cliff. The classification of these diseases and threats is shown in Fig. 10.
The four types of diseases in Mogao cliff
Map of Mogao cliff diseases and threats
According to properties and characteristics of these diseases and threats, the Mogao cliff deteriorates because of the threats caused by natural disasters and natural factors in past, present and future. The impact of excavation on the cliff has been completed and we can't change the result any more, and the threat caused by tourism is being effectively solved under the opening management measures of Dunhuang Academy. On the other hand, most diseases and issues of Mogao cliff caused by these external factors have been improved through protection and reinforcement, and other small-scale diseases pose a less threat to the Mogao cliff.
The preservation state of Mogao cliff
In recent years, most studies have investigated the geological conditions, physical and mechanical properties, disease types, and threat situation of the cliff based on a local section or local point. Although more comprehensive research results have been obtained, the preservation state with regard to the quality of rock mass and the hazards resulting from diseases are still not clearly understood. Hence, it is necessary to evaluate the preservation state of the Mogao cliff.
Wang [56] analyzed and summarized relevant factors, such as landforms, fissures, caves, gentle slope weathering, and their sub-factors, which may affect the quality of the Mogao cliff rock mass, based on a deep understanding of the engineering geological features of the Mogao Grottoes. Wang completed the grading evaluation of rock mass quality using the analytic hierarchy process (AHP) and an expert system, and determined the weight of each factor (Fig. 11). Additionally, Guo [57] evaluated the hazard of 42 potential dangerous rock masses in the cliff of the southern Mogao section (Fig. 12). The risk assessment factors are as follows: dangerous rock mass, cliff shape, stratum lithology, fissures, and earthquake. The hazard assessment factors are as follows: humans, buildings, infrastructure, and other property. Based on risk theory, Guo pointed out the key areas of concern and explained the main hazards and potential dangers in each section. The results provide effective guidance for the protection and management of the Mogao Grottoes, and a theoretical foundation for the stability and potential hazard assessment of the caves.
Quality assessment for rock mass of Mogao Grottoes [58]
Hazard map of potentially dangerous bodies within cliff [57]
Analysis and calculation of Mogao cliff stability
In the field of traditional engineering geology, various analytical methods, such as qualitative analysis, semi-quantitative analysis, and quantitative analysis, are used to calculate the stability of the rock and soil mass. In the twenty-first century, with consideration to the actual situation and special characteristics of the Mogao cliff, methods have been established for the analysis and calculation of the stability and consolidation force in long-term practical conservation and reinforcement. These methods combine the preliminary analysis of the in-situ wedge balance and resistance force calculation using the GEOSLOPE software. Moreover, we investigated a set of methods for calculating the reliability of the reinforcement measures and focus points after the reinforcement. These methods provide a theoretical foundation and empirical guidance for the protection and reinforcement of the Mogao cliff.
Analysis and calculation of reinforcement force
Analysis and calculation of wedge balance
In engineering practice, the limit equilibrium method is one of the earliest and most widely used quantitative analysis methods. After assuming the failure mode of the deformation system, we can analyze the stress state of the rock and soil mass under various failure modes according to the mechanical balance principle in the failure mode. Moreover, we can calculate the stability of the rock and soil mass according to the force balance between the anti-sliding force and the downslide force, or according to the moment equilibrium in the dangerous rock and soil mass.
We consider the sliding failure as an example of this case. After investigating the cliff and fissure close to the 100th Cave of the Mogao Grottoes, we simplified the profile of the dangerous rock mass into a wedge according to the theory of the limit equilibrium method. Subsequently, we obtained the typical profile of the Mogao cliff (Fig. 13A). It is assumed that the sliding surface comprises a fissure and its potential failure surface. The self-weight of the dangerous rock mass is denoted as G, the horizontal seismic force generated by an earthquake is denoted as P, the hydrostatic pressure after the fissure is filled with water is denoted as F, and the frictional force generated by the dangerous rock mass on its sliding surface is denoted as φl. Hence, we can calculate the safety factor of the dangerous rock mass by balancing the anti-sliding force and downslide force on the failure surface. According to the requirements of the safety factor K under the state of reinforcement in the field of cultural relics protection in China, we can obtain the reinforcement force N, which is required to consolidate the dangerous rock mass, by recalculating the balance of the stress system and adding the anchoring force (Fig. 13B).
Typical profile of Mogao Grottoes and stress system of dangerous rock mass
The equation between the safety factor K and those parameters is as follows:
$$K = \frac{{\left( {\left( {G + N \cdot sin\theta } \right) \cdot cos\alpha - \left( {F + P - N \cdot cos\theta } \right) \cdot sin\alpha } \right) \cdot tan\varphi + c \cdot L}}{{\left( {\left( {G + N \cdot sin\theta } \right) \cdot sin\alpha + \left( {F + P - N \cdot cos\theta } \right)} \right) \cdot cos\alpha }}.$$
In Fig. 13 and the equation, K is the safety factor; N is the reinforcement force (kN); G is the weight of the dangerous rock mass (kN); F is the hydrostatic pressure when the fissure is filled with water (kN); P is the horizontal seismic force generated by an earthquake (kN); α is the angle of the sliding surface (°); φ is the friction angle (°); c is the cohesion (kPa); d is the fissure depth (m); l is the length of the sliding surface (m); h is the fissure height (m); θ is the anchoring angle (°).
Stability evaluation based on the in situ situation
The calculated wedge balance is safer than the actual situation because the rock and soil mass are considered as a rigid body in the simplified calculation. Although this assumption is widely adopted in engineering practice, we cannot ignore that its economic benefits arise from being too conservative. The numerical simulation results are closer to the actual situation because different constitutive models and failure criteria can be selected according to different types of rock and soil mass. Hence, the numerical simulation has obvious advantages.
Therefore, using the GEOSLOPE software, Pei [42] analyzed the static and dynamic response of five Mogao cliff profiles, including the in situ cliff of unexcavated caves, cliff with excavated caves, cliff with caves and fissures, cliff reinforced with a retaining wall, and cliff reinforced with anchor cables. Pei determined the failure surface, failure models, and evaluation criteria under different calculation situations. Additionally, Pei carried out groundbreaking work to calculate the reinforcement force required for reinforcement measures, and the safety factor after the reinforcement corresponded with the safety factor of the in-situ state in his calculation. Pei believes that the reinforcement measures using the force are the most reasonable measures, and his results have provided the theoretical foundation and a method for evaluating the reinforcement effect for the protection and reinforcement of the Mogao cliff (Fig. 14).
Safety factor and shear stress distribution map of typical Mogao cliff profile under different reinforcement measures [42]
Simulation analysis of reinforcement measure reliability
The investigations of Pei using both wedge balance calculation and the GEOSLOPE software have provided reliable support for the protection and reinforcement of the Mogao cliff. However, the Mogao cliff is a rock mass with a particular rock structure that contains complex underground cavern structures. Hence, caution must be exercised when implementing reinforcement measures. It is not only necessary to calculate and verify the reliability of the proposed reinforcement measures before they are applied, but the long-term serviceability of the various measures after the reinforcement should be analyzed and the key prevention areas of the caves should be investigated after the reinforcement.
Therefore, based on the analysis of the Mogao cliff's characteristics and the structure of reinforcement measures, Shi et al. [41, 53, 59] have used static and dynamic analysis methods to calculate the stability of the surrounding rock and ancillary facilities of the Mogao Grottoes. The above studies have also discussed the seismic stability evaluation method with regard to the surrounding rock of the Mogao Grottoes and its ancillary structures. In another study, the finite element analysis method was used to numerically simulate the displacement field and stress field distribution characteristics of the surrounding rock mass under dynamic loads with different peak acceleration, spectrum, and duration. Additionally, the dynamic response and variation rule of the Mogao cliff and ancillary structures were investigated under earthquake action. These studies have also pointed out key considerations for the future protection and reinforcement of the caves.
After the Mogao cliff was investigated and appropriate reinforcement measures were implemented, Guo [34] analyzed the stress, displacement distribution characteristics, and acceleration variation of the caverns under the action of gravity and seismic loads using the FLAC 3D software. Guo found that the sections of densely excavated caves in the middle of the Mogao cliff are not only prone to tensile stress concentration, but also produce large displacements. Guo also found that the caves at the top of the cliff produced large displacements under seismic loads.
These studies have also provided a reliable theoretical foundation for the protection and management of the Mogao Grottoes. In addition, we have summarized the key considerations included cliff, caves, and ancillary facilities, which should be focused on preservation, management, and research in the future. The details are shown in Table 2.
Table 2 Table of the key prevention areas in the future
Protective monitoring of Mogao cliff
With the formulating and implementing of Principles for the Conservation of Heritage Sites in China in 2002, the concept of 'preventive measure' has appeared in Chinese industry standard document [60], and the revised China Principles in 2015 further integrated preventive conservation into the system of cultural heritage conservation [61]. Thus the reinforcement concept has been changed in China. Meanwhile, the protection and reinforcement of Mogao cliff has undergone experimental reinforcement, rescuing reinforcement, and scientific reinforcement since 1956. Especially in the stage of the scientific reinforcement, the reinforcement skills have been effectively improved and a large number of cultural relics have been rescued under the support of international cooperation and a lot of scientific research results. All the cliff of Mogao Grottoes has been effectively protected and reinforced, and the potential dangers of cliff stability have basically been eliminated. In this context, the protective measures for the Mogao Grottoes gradually transformed from 'rescuing protection' to 'preventive protection'. Only by actively preventing and eliminating all types of unfavorable factors that affect the conservation of the Mogao Grottoes, can we extend the life of the Mogao Grottoes to maximize extent and slow down the decline of wall paintings and painted statues. Thus, we can completely conserve the information and outstanding value of the Mogao Grottoes. Therefore, it is necessary to develop and construct a risk monitoring and warning system [21]. Based on the long-term practical experience of conservation, research, and promotion, Dunhuang Academy has constructed a three-in-one protective monitoring and warning system for the daily inspection, regular monitoring, and early warning monitoring of the Mogao cliff.
Daily inspection of Mogao cliff
In the study by Sun [62], the inspection methods of the Mogao relics were summarized into four types: annual inspection, daily inspection of caves open to tourists, inspection for emergencies, and inspection of key sections. The inspection of the cliff is part of the Mogao inspection.
In the annual inspection, a fixed workgroup consisting of approximately 10 staff members investigates every section of the Mogao cliff, compares the results with the data of the previous year to discover changes in the cliff over the last year, and the inspection results are eventually collated and archived. The daily inspections are mainly conducted according to the opening conditions of the Mogao Grottoes during the busy tourist season. General inspections are carried out once a week in the busy season and once every two weeks off season. Emergency inspections are mainly conducted after natural disasters such as earthquakes, heavy rainfall, floods, and sandstorms, and emergency plans are initiated as needed.
Regular monitoring of Mogao cliff
The regular monitoring of the Mogao cliff supplements the inspection of the key section. A 3D scanner and a total station are used for regularly inspect the development of various types of diseases on the Mogao cliff.
Presently, the cliff of the Mogao Grottoes is in a stable state and it is thus possible to effectively monitor the displacement of rock mass under earthquake loads, control the reinforcement process, and evaluate the reinforcement effect. The total station is regularly used in the regular and fixed-point monitoring of typical parts with high cliffs, poor rock mass quality, large number of caves, and high risk. The monitoring period is once every 6 months. Moreover, the weathering progress of the cliff's surface can be ascertained by comparing the results obtained from a regular 3D scan. For example, we scanned the typical weathering cliff between the 203rd Cave and 204th Cave in 2009 and 2016 (Fig. 15). The deformation map of the 3D models obtained by comparing the two laser scans is shown in Fig. 16. The green part indicates the deformation section within 3.5 mm, which means that there was no deformation on the cliff over the last eight years. Hence, the blue and orange sections may have originated from the consolidation and repairs that took place from 2009 to 2016.
Position of 3D scanning
Deformation map based on the comparison between 2009 and 2016 results
Comprehensive and multi-purpose cliff monitoring provides a strong guarantee for the stability and weathering protection of the Mogao cliff and ensures the safety of cultural relics, personnel, and equipment.
Early warning monitoring for Mogao cliff
The monitoring and warning system for the Mogao cliff is an important part of the risk monitoring and early warning system framework for the Mogao Grottoes, and focuses on the stability of the cliff body, water vapor content and its migration in the rock, and weathering of the cliff body. The grotto environment, which comprises the meteorological environment, water environment, earthquakes, vibration, and sandstorms, has been added to the monitoring scope. The monitoring system comprises a front-end data acquisition system and a monitoring and early warning system. In summary, the overall objective of monitoring changes, predicting and pre-controlling risks, and pre-consolidating protection has been achieved at the site of the Mogao Grottoes [21].
The protection of the grottoes' cliff is a systematic project based on practical experience, and must be fully considered the current preservation situation, systematically evaluate whether or not intervention is needed, deeply analyze if the stability of the cliff should be consolidated according to the in-situ state, calculate and analyze the effectiveness of protection and reinforcement measures in solving practical engineering problems, and finally evaluate the effects before and after reinforcement from a practical viewpoint.
This paper presents a full review of the relevant literature and materials pertaining to the protection of the Mogao cliff. The long-term practical experience with regard to conservation, research, and promotion is reviewed and summarized, and the methods for evaluating the preservation state, calculating and analyzing the stability of the Mogao cliff, constructing protective measures, and carrying out protective monitoring are systematically discussed.
To evaluate the preservation state, we comprehensively evaluated the rock mass quality of the Mogao cliff and assessed the risk of risk-prone sections in the assessment results using the grey mathematical theory and an expert system. This study is based on the deep understanding of the stratum property characteristics, cliff shape characteristics, migration state of moisture, physical and mechanical properties of the rock mass, physical properties of water, and various types of diseases of the cliff and their characteristics. This study provides the basis for the protection and reinforcement of the cliff and the conservation and management of the Mogao Grottoes.
To calculate and analyze the stability of the Mogao cliff based on the evaluation of rock mass quality and risk assessment for dangerous rock masses, we used the method of analyzing the safety factor compared with the in situ state, and calculated the reinforcement force using wedge balance theory. Subsequently, we simulated and checked the failure surface of the Mogao cliff profile, and investigated the effectiveness of the reinforcement measures under the static and dynamic state. Additionally, we verified the reliability of the reinforcement measures based on the 2D calculation results and simulated the key areas after reinforcement using a 3D numerical simulation software that closely resembles the natural conditions. The investigations reported herein can provide theoretical support for the reinforcement and stability evaluation of a rock mass.
Regarding the construction of protective measures, the history of protective and reinforcement measures for the Mogao cliff was reviewed, and the reinforcement techniques and concepts for the Mogao cliff were summarized and scrutinized. A protection and reinforcement system is proposed based on the principle of retaining historical conditions, the basic management plan of monitoring/early warning/maintenance, and the reinforcement methods for propping/anchoring/grouting/anti-weathering.
With regard to protective monitoring, we summarized and formed a three-in-one protective monitoring system comprising daily inspection, regular inspection, and early warning monitoring, which runs throughout the entire protection and reinforcement process.
Based on these investigations, we propose reliable and comprehensive protection and reinforcement techniques for the Mogao cliff, and a management system for the Mogao Grottoes. The proposed techniques and system can be used in conservation projects of heritage sites similar to the Mogao Grottoes.
However, based on the conclusions drawn from practical experience, there are still various shortcomings in the current system for evaluating the conservation state, calculating the stability, constructing protective measures, and carrying out protective monitoring. For example, in the physical and mechanical testing of rock mass properties, it is impossible to collect a perfect specimen to investigate the cliff parameters owing to the characteristics of the Mogao sandy conglomerate. The stability calculation results are often too conservative because the constitutive model of the cliff has not been established, and this can result in the inefficient use of manpower, material resources, and financial resources. In the evaluation of the protection and reinforcement effect, although Dunhuang Academy has developed a series of detection devices for use in combination with practical experience, the problem of insufficient testing apparatus still exists. With regard to protective monitoring, some protection and reinforcement projects have failed to realize monitoring throughout the entire protection and reinforcement process owing to the inadequate development of science and technology and limited financial resources at the earlier stage of protection and reinforcement for the Mogao cliff.
To more effectively protect and manage similar grottoes, we propose a protection and reinforcement system for the Mogao Grottoes based on research and the practical conservation experience obtained at this site (Fig. 17).
Protection and reinforcement system for Mogao Grottoes
A protection project for a grotto often begins with the assessment of the preservation state of its cliff. If the result is dangerous, the next process is stability calculation of the cliff, else it is monitoring and early warning the cliff. During the monitoring process. If there is no danger on the cliff, then the manager will regularly evaluate the preservation state of the cliff. When the early warning mechanism is triggered, a new round of risk and hazard assessment will be carried out. If the result is safe, the monitoring will be continued. Instead, the next process is stability calculation of the cliff. When the calculation result is stable, the dangerous area is continuously monitored and evaluated; else the dangerous area of the cliff will be protected and reinforced. However, the reinforcement project cannot be directly implemented before the reinforcement. Firstly, it is needed to analyze the reinforcement measures through the numerical simulation calculation because it is evaluated on computer instead of cliff. It can not only evaluate the reliability of the reinforcement measures, but also tell the managers the areas that need attention after the reinforcement. Only the calculation and evaluation of the reinforcement measures are reliable can the real reinforcement be started. After the reinforcement project, it is necessary to systematically evaluate the reinforcement effect of the reinforcement measures. If there is any problem with the reinforcement effect, the reinforcement measures must be improved until passing the evaluation. After all the reinforcement, the reinforcement measures and the reinforced cliff shall be included in the monitoring system.
In the whole process, the most important task is the research and investigation. Each work for a grotto must be based on the deeply study before the implementation of reinforcement measures. It is not possible to directly quote the research results and parameters of other sites. For example, in the monitoring of cliff, the setting of early warning values must be determined after a large number of studies on the cliff of the grotto, rather than directly quote the warning value of the Mogao Grottoes or other caves.
In these steps, the assessment of the preservation state of the cliff mainly includes the risk and hazard assessment for the geological conditions, the physical and mechanical properties of the rock body, and the disease in the cliff. The monitoring mainly includes daily inspections, regular inspections, and early warning monitoring. The stability calculation mainly includes calculation of wedge balance and numerical simulation calculation. The reinforcement measures mainly include propping, anchoring, grouting, and ant-weathering.
This paper summarizes and teases out a method for evaluating the preservation state of the Mogao cliff, a method for calculating and assessing the stability factor before and after consolidating the cliff, and the protection and reinforcement techniques, conservation history, and protective monitoring of the Mogao cliff. The protection and reinforcement system of the Mogao cliff is extensively discussed. The proposed conservation and management system can be used for similar grottoes. The main conclusions drawn from this study are as follows:
Based on the discussion in this paper and according to practical experience accumulated over the 65 years of preserving the Mogao Grottoes, it is concluded that the cliff protection and reinforcement measures are reliable and effective, and can be used for similar grottoes. The protection and reinforcement system is based on extensive research, on monitoring throughout the entire protection process, on the principles of evaluating the preservation state and stability of the cliff, and on the reinforcement measures of propping, anchoring, grouting, and anti-weathering.
The study work should be carried out in every phase of protection and reinforcement. It can provide quantitative parameters for the reinforecement of the Mogao cliff and indicate dangerous sections of the cliff by evaluating the cliff's conservation state. It can provide the foundation for calculating the reinforcement force, analyzing the reliability, and discovering the key point after reinforcing by calculating the stability of the cliff. The principle governing the protection and reinforcement of the cliff is to completely retain the historical conditions. The basic management plan consists of monitoring, warning, and maintenance, and the consolidation effect should be evaluated after reinforcement. Protective monitoring comprises daily inspection, regular inspection, and warning monitoring, and should be used throughout the entire reinforcement process. Thus, changes can be monitored, the risks can be predicted and pre-controlled, and the protection can be reinforced in advance.
Most of the data on which the conclusions of the manuscript can be found in the references or CNKI.
Wang XD, Zhang MQ, Zhang HY, Zeng ZZ, Yao Z, Zhou ZH. Engineering properties of surrounding rocks of Mogao grottoes at Dunhuang. Chin J Rock Mech Eng. 2000;19:756–61.
Zhao M, Yan SJ, He K, Dou Y, Fang Y, Zhai GL. Experimental study on novel grouting materials for fracture seepage of the Longmen grottoes. J Yangtze River Sci Res Inst. 2016;33:115–28.
Guo F, Jiang GH. Investigation into rock moisture and salinity regimes: implications of sandstone weathering in Yungang grottoes. China Carbonates Evaporites. 2015;30:1–11.
Germinario L, Oguchi CT, Tamura Y, Ahn S, Ogawa M. Taya Caves, a Buddhist marvel hidden in underground Japan: stone properties, deterioration, and environmental setting. Herit Sci. 2020;8:1–20.
Ansari MK, Ahmad M, Singh R, Singh TN. Rockfall hazard assessment at Ajanta Cave, Aurangabad, Maharashtra, India. Arab J Geosci. 2014;7:1773–80.
Ansari MK, Ahmad M, Singh TN. Rockfall hazard analysis of Ellora Cave, Aurangabad, Maharashtra, India. Int J Sci Res. 2014;3:427–31.
Pujari PR, Soni A, Padmakar C, Mahore P, Sanam R, Labhasetwar P. Ground penetrating radar (GPR) study to detect seepage pathways in the Ajanta Caves in India. Bull Eng Geol Environ. 2014;73:61–75.
Wang K, Fang Y, Huang ZY, Qiao L. Analysis of the mechanism of the water seepage damage formation at Lei Gutai, Longmen grotto, based on discontinuity network modeling. Sci Conserv Archaeol. 2015;27:87–93.
Bharti G. Ajanta caves: deterioration and conservation problems (a case study). Int J Sci Res Publ. 2013;3:392–5.
Guo QL, Wang XD, Zhang HY, Li ZX, Yang SL. Damage and conservation of the high cliff on the northern area of Dunhuang Mogao grottoes, China. Landslides. 2009;6:89–100.
Francioni F, Lenzerini F. The destruction of the Buddhas of Bamiyan and international law. Eur J Int Law. 2003;14:619–51.
Ansari MK, Ahmad M, Singh R, Singh TN. 2D and 3D rockfall hazard analysis and protection measures for Saptashrungi Gad Temple, Vani, Nashik, Maharashtra—a case study. J Geol Soc India. 2018;91:47–56.
Bagde MN. Assessment of rockfall hazard and stabilization of rock slope surface at the world heritage Ajanta Cave. Geotech Geol Eng. 2021;39(4):1–14.
Joungah L. The history of Korean architectural cultural property protection. Beijing: Science Press; 2019.
Li ZG. Fifty years of scientific and technological protection of Yungang grottoes. World Antiq. 2004;5:3–7.
Ma Q. Retrospect and reflection on the protection course of Maijishan grottoes. China Cult Herit. 2016;1:58–64.
Wang XD, Li ZX. The geotechnical engineering problems of Yulin grottoes and the preventive measures. Dunhuang Res. 2000;1:123–31.
Li ZX. Sixty years on the conservation of the Dunhuang grottoes. Dunhuang Res. 2004;3:10–26.
Sun RJ. The Dunhuang grottoes I experienced—the 40s to 60s of the last century. Dunhuang Res. 2006;6:203–18.
Wang XD, Zhang HY, Guo QL, Lu QF. Weathering characterization and conservation treatment of cliff at Mogao grottoes. Chin J Rock Mech Eng. 2009;28:1055–63.
Wang XD. Construction of a monitoring and precaution system and exploration of preventive conservation at the Mogao grottoes based on risk management theory. Dunhuang Res. 2015;1:104–10.
Xie CS. Protection work of cultural relicsin past 50 years in New China. Contemp China Hist Stud. 2002;9:61–70.
Zhang WM. Institution of cultural relics preservation in morden China and their effect. J Natl Mus China. 2011;6:138–49.
Bao XH. The formation of the consciousness of modern Chinese cultural relics protection. Wenbo. 2000;2:75–80.
Shan JX. From "relics protection" to "cultural heritage protection." Tianjin: Tianjin University Press; 2008.
Li ZX. Ancient site protection on Silk Road. China Cult Relics. 2004;3:118–22.
Huang KZ. Chinese cultural relics protection in the 21st century. Dunhuang Res. 2000;1:5–9.
Yuan DY, Shi YC, Wang XD. The characteristics and influence of new faults in the Mogao grottoes of Dunhuang. Dunhuang Res. 2000;18:56–64.
Gansu Geological Bureau. Regional geological survey report (Dunhuang Plaza 1: 200 000); 1974.
Cao XS. The quaternary of Gansu. Acta Geol Gansu. 1997;6:1–21.
Academy D, editor. Dunhuang research collection: engineering characteristics of Dunhuang cave formation. Lanzhou: Gansu Nationalities Publishing House; 1993.
Wang WF, Zhang WM, Li YH. Study on sandstorm harm and prevention in Dunhuang Mogao grottoes. Dunhuang Res. 2000;1:42–8.
Yang SL, Wang XD, Guo QL. Preliminary analysis of moisture distribution in cliff rocks of the Mogao grottoes in Dunhuang. Hydrogeol Eng Geol. 2009;35:94–7.
Guo ZQ. Stability analysis of dense caves and risk assessment of the dangerous rock body at the southern area of Mogao grottoes, Dunhuang [Doctor]. Lanzhou: Lanzhou University; 2018.
Li BX. The features and ages of the "Grottoes strata" in west part of Gansu Province. Gansu Geol. 1986;6:61–97.
Guo QL, Wang XD, Xue P, Zhang GB, Fan ZX, Hou WF, Zhang ZM. Research on spatial distribution and relations of salinity and moisture content inside rock mass of low-layer caves in Dunhuang Mogao grottoes. Chin J Rock Mech Eng. 2009;28:3769–76.
Zhou QY, Yao BY, Li HS, Wang YW, Chen GQ. Analysis of water and salt transport processes in the wall of Cave 108 of Mogao grottoes by electrical resistivity tomography. Sci Conserv Archaeol. 2020;32:34–44.
Guo QL, Wang XD, Li ZX, Koizumi K, Chikaosa T, Tadash M. Elementary application of the high density resistivity method in investigation of moisture at the Mogao grottoes. Dunhuang Res. 2008;6:24–30.
Zhang HY, Zeng Z, Zhang MQ, Wang XD, Li ZX. Surrounding rock stability and environmental protection of Mogao grottoes in Dunhuang. Chin J Geol Hazard Control. 1996;7:73–80.
Fu CH, Shi YC. Research on dynamic characteristics of country rock of Mogao grottoes under earthquake loading. Northwest Seismol J. 2004;26:266–73.
Shi YC, Cai HW, Xu HP, Liu HM. Methods for the seismic stability evaluation of surrounding rock and auxiliary structures of grottoes. Northwest Seismol J. 2000;22:83–9.
Pei QQ. Analysis of the stability of grottoes and research on rock mass at Mogao dynamic response under- seismic load based on geo-slope [Master]. Lanzhou: Lanzhou University; 2012.
Yang HR, Liu P, Sun B, Yi ZY, Wang JJ, Yue YQ. Study on damage mechanisms of the microstructure of sandy conglomerate at Maijishan grottoes under freeze-thaw cycles. Chin J Rock Mech Eng. 2021;40:546–55.
Ding DJ. Experimental study on the effect of the sulfate on the weathering of glutenite in Maijishan [Master]. Lanzhou: Lanzhou University; 2020.
Zhu JL. Experimental study of BFRP anchors on reinforcement in the rock mass of Maiji mountain grottoes [Master]. Lanzhou: Lanzhou University; 2019.
Wang XD, Yu ZH, Pei QQ, Zhang B, Guo QL. Research on preparation method of sandstone sample and its error influence. Rock Soil Mech. 2020;41:1–10.
Lu SM. Columns' stability analysis of cave no. 9 and no. 10 in Yungang grottoes [Master]. Shenyang: Shenyang Jianzhu University; 2018.
Sun B, Peng NB, Wang FR. Seismic dynamic responses of no. 19 Grotto's west side cave of Yungang grottoes. J Southwest Jiaotong Univ. 2012;47:573–9.
Shi YC, Zhang J. The Dunhuang Caves' main diseases and precautions against them. Northwest Seismol J. 1997;2:21–9.
Zhang GJ, Li ZX. The threats of the precipice body of the northern area of Mogao grottoes. Dunhuang Res. 2005;92:71–5.
Wang PT. Study on movement characteristics of rockfall at Mogao grottoes in Dunhuang [Master]. Lanzhou: Lanzhou University; 2017.
Shi YC, Xu HP, Wang XD. Seismic safety evaluation of Mogao grottoes in Dunhuang. Dunhuang Res. 2000;63:49–55.
Shi YC, Fu CH, Wang LM. Numerical simulation analysis of mechanism of seismic deformation damage of country rock of grottoes. Rock Soil Mech. 2006;27:543–8.
Tang XW. The flood risk grading and threshold determining of Mogao grottoes [Master]. Lanzhou: Lanzhou University; 2015.
Ma HH. The construction of flood risk early warning system in Mogao grottoes [Master]. Lanzhou: Lanzhou University; 2016.
Wang XD, Guo QL, Yang SL, Pei QQ, Fan YQ. The engineering geological problems and conservation of cliff face of Dunhuang Mogao grottoes, China. In: Engineering geology for society and territory-volume 8. Cham: Springer; 2015. p. 183–7.
Guo ZQ, Chen WW, Zhang JK, Ye F, Liang XZ, He FG, Guo QL. Hazard assessment of potentially dangerous bodies within a cliff based on the Fuzzy-AHP method: a case study of the Mogao grottoes, China. Bull Eng Geol Environ. 2017;76:1009–20.
Yang ZF, Chikaosa T, editors. Ancient underground opening and preservation: assessing the relative stability of the Mogao grottoes using a rock mass quality classification approach. Leiden: CRC Press; 2015.
Shi YC. Impact of future earthquake disasters on Dunhuang Mogao grottoes and its ancillary buildings. Northwest Seismol J. 1996;18:42–7.
China ICOMOS. Principles for the conservation of heritage sites in China. Beijing: Cultural Heritage Press; 2002.
Sun SL, Chen GQ, Wang XW, Chai BL. Preliminary construction of the ontology inspection system of cultural heritage sites: a case study of the cultural relics of Dunhuang Mogao grottoes. China Cult Herit Sci Res. 2018;2:32–8.
This study was supported by the National Key Research and Development Project of China (Research and Development of Rock Mass Reinforcement for Slate in Horizontal Roof Cave; Project No. 2019YFC1520604),the Major Special Projects of Science and Technology in Gansu Province (Research and Demonstration of Anti-weathering Technology in Sandstone Grottoes Temple; Project No. 18ZD2FA001), and Science and Technology Research Project of Cultural Relics Protection in Gansu Province (Simulation Experimental Study on Water Vapor Migration in Surrounding Rock of Mogao Grottoes; Project No. 201604).
College of Civil Engineering and Mechanics, Lanzhou University, Lanzhou, 730000, China
Xudong Wang & Yanwu Wang
Key Laboratory of Mechanics on Disaster and Environment in Western China, The Ministry of Education of China, Lanzhou, 730000, China
Dunhuang Academy, Dunhuang, 736200, China
Yanwu Wang, Qinglin Guo & Qiangqiang Pei
Research Center for Conservation of Cultural Relics of Dunhuang, Dunhuang, 736200, China
Yanwu Wang, Qinglin Guo, Qiangqiang Pei & Guojing Zhao
Palace Museum, Beijing, 100009, China
Xudong Wang
Cultural Heritage Conservation and Design Consulting Co., Ltd of Gansu Mogao Grottoes, Dunhuang, 736200, China
Guojing Zhao
Yanwu Wang
Qinglin Guo
Qiangqiang Pei
XW and YW conceived and wrote this article. QG collected the data on historical reinforcement. QP completed numerical simulation by GEOSLOPE software. GZ analyzed the result of 3D scanning. YW also contributed to data analysis and processing. All authors read and approved the final manuscript.
Correspondence to Yanwu Wang.
Wang, X., Wang, Y., Guo, Q. et al. The history of rescuing reinforcement and the preliminary study of preventive protection system for the cliff of Mogao Grottoes in Dunhuang, China. Herit Sci 9, 58 (2021). https://doi.org/10.1186/s40494-021-00537-w
Cliff of sandy conglomerate
Numerical simulation | CommonCrawl |
Equation of a Hyperbola with Center at the Origin
A hyperbola is a conic section formed by the intersection of a cone by a plane at an angle where both bases are intersected. The hyperbola is composed of two branches that are a reflection of each other. The hyperbola is also defined as the set of all points in the Cartesian plane so that the difference of the distances between any point and the foci is equal to a constant.
Hyperbolas have two lines of symmetry. The transverse axis is the segment that passes through the center and joins the vertices. The foci are located on the line that contains the transverse axis. The conjugate axis is perpendicular to the transverse axis and connects the covertices. The center is the point of intersection of the transverse axis and the conjugate axis. Hyperbolas also have two asymptotes, which also intersect at the center.
Finding the equation of the hyperbola with center at the origin.
See equations
Standard form of hyperbolas with center at the origin
Finding the vertices and foci of a hyperbola centered at the origin
Determine the equation of hyperbolas using vertices and foci
Hyperbolas with center at the origin – Examples with answers
Hyperbolas with center at the origin – Practice problems
The standard form of a hyperbola gives us information about the location of the vertices and the foci and from there we can define the hyperbola completely. There are two variations of the hyperbola equations that have the center at the origin depending on their orientation.
We can have hyperbolas oriented horizontally or vertically in the Cartesian plane.
Equation of the horizontal hyperbola
The standard form of a hyperbola that has its center at (0, 0) and whose transversal axis is on the x-axis is:
$latex \frac{{{x}^2}}{{{a}^2}}-\frac{{{y}^2}}{{{b}^2}}=1$
$latex 2a$ is the length of the transversal axis (segment that joins the vertices)
The coordinates of the vertex are $latex (\pm a, 0)$
$latex 2b$ is the length of the conjugate axis (segment that joins the covertices)
The coordinates of the covertices are $latex (0, \pm b)$
$latex 2c$ is the distance between the foci
We find c using $latex {{c}^2}={{a}^2}+{{b}^2}$
The coordinates of the foci are $latex (\pm c, 0)$
The equations of the asymptotes are $latex y=\pm \frac{b}{a}x$
Equation of the vertical hyperbola
When the hyperbola has the center at the origin, (0, 0), and its transversal axis is the y axis, its equation is:
$latex \frac{{{y}^2}}{{{a}^2}}-\frac{{{x}^2}}{{{b}^2}}=1$
$latex 2a$ is the length of the transverse axis
The coordinates of the vertex are $latex (0, \pm a)$
$latex 2b$ is the length of the conjugate axis
The coordinates of the covertices are $latex (\pm b, 0)$
$latex 2c$ is the distance between the foci, where, $latex {{c}^2}={{a}^2}+{{b}^2}$
The coordinates of the foci are $latex (0, \pm c)$
The equations of the asymptotes are $latex y=\pm \frac{a}{b}x$
We can find the vertices and the foci using the equation of a hyperbola and following these steps:
Determine the orientation of the hyperbola by finding whether the transverse axis is located on the x-axis or on the y axis:
Case 1. If the equation has the form $latex \frac {{{x}^2}}{{{a}^2}} – \frac{{{y}^2}}{{{b}^2}}=1$, the transverse axis is located on the x-axis. The coordinates of the vertices are $latex (\pm a, 0)$ and the coordinates of the foci are $latex (\pm c, 0)$.
Case 2. If the equation has the form $latex \frac{{{y}^2}}{{{a}^2}}-\frac{{{x}^2}}{{{b}^2}}=1$. The coordinates of the vertices are $latex (0, \pm a)$ and the coordinates of the foci are $latex (0, \pm c)$.
We can find the value of a using the equation $latex a=\sqrt{{{a}^2}}$.
We can find the value of c using the equation $latex {{c}^2}={{a}^2}+{{b}^2}$.
To find the equation of a hyperbola centered at the origin if we know the coordinates of the vertices and the foci, we can follow the following steps:
Step 1: Determine the orientation of the hyperbola. This requires us to find out whether the transverse axis is located on the x-axis or on the y axis.
1.1. When the coordinates of the vertices have the form $latex (\pm a, 0)$ and the coordinates of the foci have the form $latex (\pm c, 0)$, the transverse axis is on the x axis and we use the equation $latex \frac{{{x}^2}}{{{a}^2}}-\frac{{{y}^2}}{{{b}^2}}=1$.
1.2. When the coordinates of the vertices have the form $latex (0, \pm a)$ and the coordinates of the foci have the form $latex (0, \pm c)$, the transverse axis is on the y axis and we use the equation $latex \frac{{{y}^2}}{{{a}^2}}-\frac{{{x}^2}}{{{b}^2}}=1$.
Step 2: We use the equation $latex {{b}^2}={{c}^2}-{{a}^2}$ to find the value of $latex {{b}^2}$.
Step 3: We use the values of $latex {{a}^2}$ and $latex {{b}^2}$ in the equation obtained in step 1.
The methods and steps used to find the equations of hyperbolas and the coordinates of vertices and foci seen above are applied to solve the following examples. Look at the examples carefully and analyze the process used.
What are the vertices and foci of the hyperbola that has the equation $latex \frac{{{y}^2}}{16} – \frac{{{x}^2}}{9}=1$
We can see that the equation has the form $latex \frac{{{y}^2}}{{{a}^2}}-\frac{{{x}^2}}{{{b}^2}}=1$, so the transverse axis is located on the y axis. Since the hyperbola is centered at the origin, the vertices are the y intercepts of the graph. To find the vertices, we use $latex x = 0$ and solve for y :
$latex \frac{{{y}^2}}{16}-\frac{{{x}^2}}{9}=1$
$latex \frac{{{y}^2}}{16}-\frac{0}{9}=1$
$latex \frac{{{y}^2}}{16}=1$
$latex {{y}^2}=16$
$latex y=\pm 4$
The vertices are located at $latex (0,\pm 4)$.
Now, we use the equation $latex {{c}^2}={{a}^2}+{{b}^2}$ to get the value of c. Therefore, we have:
$latex {{c}^2}={{a}^2}+{{b}^2}$
$latex c=\sqrt{{{a}^2}+{{b}^2}}$
$latex c=\sqrt{16+9}$
$latex c=\sqrt{25}$
$latex c=\pm 5$
The foci are located at $latex (0, \pm 5)$.
What is the equation of the hyperbola that has vertices at (±4, 0) and foci at (±5, 0)?
The foci and vertices are located on the x-axis. This means that the transverse axis is on the x-axis. Therefore, the equation will have the following form:
The vertices are $latex (\pm 4, 0 )$, which means that $latex a=4$ and we have $latex {{a}^2}=16$.
The foci are $latex (\pm 5,0)$, which means that $latex c=5$ and we have $latex {{c}^2}=25$.
We determine the value of $latex {{b}^2}$ using the equation $latex {{b}^2}={{c}^2}-{{a}^2}$:
$latex {{b}^2}={{c}^2}-{{a}^2}$
$latex {{b}^2}=25-16$
$latex {{b}^2}=9$
Using these values, we have the following hyperbola equation:
$latex \frac{{{x}^2}}{16}-\frac{{{y}^2}}{9}=1$
Use what you have learned to solve the following hyperbola equation problems. See the solved examples above in case you need help with this.
What are the vertices and foci of the hyperbola $latex \frac{{{x}^2}}{9}-\frac{{{y}^2}}{25}=1$?
Vertices: $latex (0, \pm 3)$
Foci: $latex (0, \pm 9)$
Vertices: $latex (\pm 3, 0)$
Foci: $latex (\pm \sqrt{34}, 0)$
Foci: $latex (\pm 34, 0)$
Foci: $latex (0, \pm \sqrt{34})$
What is the equation of the hyperbola that has the vertices $latex (\pm 6, 0)$ and the foci $latex (\pm 2 \sqrt{10}, 0)$?
$latex \frac{{{x}^2}}{26}-\frac{{{y}^2}}{10}=1$
$latex \frac{{{y}^2}}{36}-\frac{{{x}^2}}{16}=1$
Interested in learning more about the equations of a hyperbola? Take a look at these pages:
Equation of a Hyperbola with Examples
Equation of a Hyperbola with Center Outside the Origin
Elements of the Hyperbola with Diagrams
[email protected]hispas.com | CommonCrawl |
Spectrum splitting for efficient utilization of solar radiation: a novel photovoltaic–thermoelectric power generation system
Esam Elsarrag ORCID: orcid.org/0000-0001-7163-94591,
Hans Pernau2,
Jana Heuer2,
Nibul Roshan1,
Yousef Alhorr1 &
Kilian Bartholomé2
Standard photovoltaic solar cells (PV cells) use only about half of the light spectrum provided by the sun. The infrared part is not utilized to produce electricity. Instead, the infrared light heats up the PV cells and thereby decreases the efficiency of the cell. Within this research project, a hybrid solar cell made of a standard PV cell and a thermally driven thermoelectric generator (TEG) is being developed. The light of the sun splits at about 800 nm. The visible and ultraviolet part is transferred to the PV cell; the infrared part illuminates the thermal TEG cell. With the hybrid solar cell, the full solar spectrum is exploited. In this paper, theoretical and experimental results for improving the performance of thermoelectric elements coupled with photovoltaic modules have been presented. The proposed concepts and the experimental results have provided a key input to develop a large scale of a hybrid PV-TE system.
The basic idea for a combined PV and thermoelectric solar cell has been published in 2008 (Tritt et al. 2008). The history of thermoelectricity began in 1823 when Seebeck made his experiments about the conversion of a temperature gradient into an electrical current (Seebeck 1895). Especially within the last decade research on thermoelectric materials and systems has been intensified due to the awareness of the need to increase the efficiency of energy consumption. Thermoelectric devices can convert waste heat directly into electrical energy. Many efforts in this field have been made to implement thermoelectric generators (TEG) in automotive applications. The conversion efficiency of TE generators depends on the available temperatures and the material properties, namely the dimensionless figure of merit ZT:
$${\rm ZT} = \frac{{\alpha^{2} \sigma }}{\lambda }T$$
T is the absolute temperature, α the Seebeck coefficient, σ the electrical conductivity and λ the thermal conductivity. Based on this value the conversion efficiency of the thermoelectric material using the temperature gradient between the hot side temperature T h and cold side temperature T c can be calculated as:
$$\eta_{\text{TE}} = \eta_{\text{c}} \left( {\frac{{\sqrt {1 + {\text{ZT}}} - 1}}{{\sqrt {1 + {\text{ZT}}} + \frac{{T_{c} }}{{T_{h} }}}}} \right)$$
where η c is the Carnot efficiency. The higher the ZT value of the material the closer is the efficiency to the Carnot limit. Modern thermoelectric materials reach ZT values larger than 1 and with efficiencies more than 4 % (Tritt et al. 2008). Using the technique of thermoelectric generators, to convert the infrared part of the sun spectrum into electrical energy, we could increase the overall performance of a combined PV and TE solar cell by approximately 10 % of the PV, thereby achieving around 20 % efficiency with the combined system rather than the 17–18 % efficiency from the PV-only setup.
The basic idea of PV-TE was introduced by Tritt (2008) and Kraemer et al. (2011) who studied the utilization of both ultraviolet (UV) and infrared (IR) parts. Various papers have been published dealing with the combined use of thermoelectric and PV or solar thermal systems. Baranowski et al. (2012) claimed efficiencies of 15.9 % for concentrated solar thermoelectric generators (STEG) by developing a balance model and analyzing the present day materials under ideal conditions. A number of works on the STEG hybrids are based on concentrating solar power on to TEGs. Chávez Urbiola and Vorobiev (2013) designed and tested such a system with co-generation of hot water which was used as the coolant for the TEG hotside and achieving 5 % electrical efficiency. The studies conducted by Eswaramoorthy and Sanmugam (2013) and Kalogirou (2013) on the use of such systems in specific geographic locations gave more insight into the feasibility and possibility of large scale deployment of the systems. Leon et al. (2012) and Lertsatitthanakorn et al. (2013a, 2013b) evaluated the possibilities of concentrated solar power on hybrid systems using different strategies for TEG design and the cooling technique. Lippong et al. (2012) successfully implemented a cooling mechanism for solar TEG hybrid using phase change material and implied the possibility of using it as a sustainable system for independent operation. McEnany et al. (2011) developed an analysis model and denotes that, with the presently available materials and technology, efficiencies of more than 10 % can be achieved using solar TEG hybrid systems by the cascading of TEGs and under high temperature and optical irradiance operation. Meir et al. (2013) suggested controlled shaping of electric potential distribution in the thermoelectric converters for more efficient generation of thermoelectric energy, in theory. Mizoshiri et al. (2012) tested a hybrid system by implementing spectrum splitting on a thin-film TEG and focusing the near infra-red (NIR) radiation onto the TEG while the PV received the rest of the spectrum. The use of thin-film selective absorber coating for TEGs in the performance of hybrid systems was investigated by Ogbonnaya et al. (2013). Van Sark (2011) developed a model to analyse the feasibility of a PV-thermoelectric module in outdoor conditions and provided very optimistic results by considering ideal conditions of operation. Advances in the related fields such as: (1) the development of high-performance spectrally-selective solar absorber based on a yttria-stabilized zirconia cermet with high- temperature stability by Wang et al. (2011), (2) thin-film TEG model by Weinstein et al. (2013) which can be used in place of conventional TEGs with minimal losses, and (3) the multi-hybrid cell by Yang et al. (2013), which can harvest mechanical, solar and thermal energy at the same time, provided strength to the optimistic feasibility predictions of van Sark and Zhang et al. (2013) to come true. One such promising field is the solar spectrum splitting for energy co-generation. Within all these works, the splitting of the solar spectrum was discussed theoretically but not investigated in an extensive practical manner, except for Mizoshiri et al. (2012) who generated an open voltage of 79 mV.
This study will investigate the performance of a thermoelectric generator by changing its material constitution and design features. The TEG is anticipated to be integrated with PV modules to form a hybrid photovoltaic–thermoelectric generator and increase the overall conversion efficiency from solar irradiance to electricity.
The first system setup
Figure 1 shows a simplified solar spectrum and the energy fractions which could be used by the PV cell and the TEG. Based on this concept, the first principal design was developed and implemented in a versatile test hybrid cell as shown in Fig. 2. This system consists of 15 cm × 15 cm monocrystalline PV cell, 1.5 cm × 1.5 cm TEG [Quickohm Model QC 31-1.0-3.9 M (Quick Cool Shop 2015)] and a beam splitter. For the first test setup, a so called "cold mirror" made by OpticBalzers (Datasheet: Cold Mirror 2015) was used to split the solar radiation. The beam splitter was placed at an angle of 45° to both the PV and TEG.
Simplified solar spectrum and energy ratios to be used within the PV cell and the TEG (Tritt et al. 2008)
a Principal design idea; b versatile test hybrid cell
The spectral characteristic of this mirror is shown in Fig. 3. As shown in the figure, the cut-off wavelength of this mirror is 700 nm, a little bit lower than the desired 800 nm, this leads to an approximate 50/50 splitting of the energy in the setup.
Transmission and reflectance data of the used mirror by OpticBalzers (Product Page: Hi-Z 2015)
The test rig was designed to allow independent movements of the system components. It offers enough space to test different types of PV cells, absorbers and beam splitters. In the test rig both the PV cell and the TEG can be cooled, the input and output temperature of both coolers can be monitored. Both coolers use a liquid cooling media provided by a radiator cooling tower with an estimated cooling power of 1000 W. The lowest possible temperature depends on the surrounding temperature during measurements.
It has to be evaluated within the project if a tailored mirror with 800 nm or another cut-off wavelength will achieve better performance or not. As the project aims to use commercially available parts to minimize system costs for the final hybrid module, the mirror from OpticBalzers was considered to initiate the tests.
To verify the experimental data presented by Seebeck (1895), FEM simulation with Comsol Multiphysics was performed. The thermal absorber was simulated using the solar radiation tool enclosed in the heat transfer module. Two commercially available absorber materials were chosen for the simulations and lab tests. These are the "Metal Velvet™" absorber by Acktar Advanced Coatings (Website 2015) and the "Tinox® energy Al" by Almecosolar (Datasheet: Solare Absorberbeschichtungen 2015). The absorption data of both absorbers are shown in Fig. 4. The main difference between these two absorbers is the absorbance of IR light above 2.5 µm. "Metal Velvet™" is black up to very long wavelength as "Tinox® energy Al" is a so-called selective absorber and becomes transparent above 2.5 µm that leads to reduced emission losses. That is because the emissivity and the absorbance of an optically dense body are equal. If the absorber is heated by the sun, it will emit black body radiation in the range of about 5–8 µm wavelength depending on its temperature. This radiation loss reduces the maximum temperature which can be achieved in the system. The simulation results have shown that the "Metal Velvet™" absorber can reach up to 110 °C but the "Tinox® energy Al" can reach up to 345 °C in vacuum (the coating was in both cases attached to a 250 µm aluminum plate).
Absorption data of (black dots) the "Metal Velvet™" absorber by Acktar Advanced Coatings (Datasheet: Cold Mirror 2015) and (red circles) the "Tinox® energy Al" by Almecosolar (Website 2015)
In the next simulation step, a TEG and absorber were included in the model with a parameterized footprint area and height. The cold side of the TEG was attached to a 45 °C surface with a thermal conductivity of 1000 W/mK. The thermal conductivity between TEG and absorber plate was set to infinite.
Changing the footprint area from 2.5 × 2.5 to 50 × 50 mm2 the achievable hot and cold side temperatures as well as the heat flux through the TEG were simulated. The temperatures and an estimated generator power are plotted in Fig. 5. The conversion efficiency of the TEG was calculated from measured ZT data; together with the simulated heat flux the power was obtained. It can be seen that starting from the side length of around 18 mm of a square cut TEG; the TEG power output was proportional the temperature difference between the hot side and the cold side surfaces.
TEG output power and temperatures of the TE-absorber depending on the side length (a)
For the next steps, the TEG model will be enhanced using the Comsol models developed by Jägle et al. (2008). Using the material data of the real TEG modules, the real performance of the system can be evaluated with a good accuracy.
The second system setup
A second set of tests was conducted to compare the performance of the Hybrid PV system with a standard system. The Hybrid system consisted of a small size (15 cm × 15 cm) monocrystalline, custom made, low power PV Panel and a comparable sized TEG, Model HiZ-2, (2.9 × 2.9 cm). The setup makes use of the Bismuth Telluride based 'HZ-2' TEG Model from Hi Z (Product Page: Hi-Z 2015) which accommodates 97 thermocouples in 2.9 cm × 2.9 cm × 0.508 cm and has a conversion efficiency of 4.5 %. The TEG typically produces 2.5 Watts at 3.3 volts at Matched Load with a 200 °C temperature gradient between the surfaces at 30 °C ambient temperature. The standard system had a similar PV only setup. The testing was conducted in a solar simulator chamber (Model: SEC 1100, Manufacturer: Atlas) [Product Page: Atlas SEC 1100 (2015)].
The test setup was such that, both the benches were tested simultaneously inside the chamber. The Hybrid bench had the light falling on the cold mirror which was at 45° to the fixed light source to facilitate splitting of the light by the cold mirror (angle of incidence 45°). The mirror would split the incident light to the PV and TEG surfaces. The absorber surface with the TEG would get the IR radiation passing via the mirror. The PV surface, which is at 45° to the mirror and hence perpendicular to the light source, gets illuminated with the rest of the wavelength which is reflected by the mirror. The cold side of the TEG was cooled by an Aquaduct 360 Eco Mark II External water cooling tower (2015) and this temperature was dependent on the ambient temperature. The ambient temperature inside the chamber was kept at a constant maximum of 50 °C. The normal test bench had the similar PV facing the light at 45° and was parallel to the mirror in the Hybrid bench, to make sure that both test setups had the same amount of light incident on them. The measurements were made using a purpose built microcontroller based embedded system using sensors to monitor the current, voltage and temperature levels of the different parts of the setup as shown in Fig. 6. The logging was done in real time high frequency samples and saved to a memory card in CSV format for easy analysis. The sensors used included LM35 precision IC for temperature sensing with a range of 0–100 °C and an accuracy of 0.25 °C at high ambient temperatures (Datasheet: LM35 sensor 2015). The PV and TEG currents were measured using INA219 based sensors, with a resolution of 0.8 mA and a maximum range of ±3.2 A measurement (Datasheet: INA219 sensor 2015). A 20 × 4 parallel interface graphical LCD provided real time data display for monitoring purposes. The data saving was done via a memory card shield and a DS1307 based Real Time Clock (RTC) module (Datasheet: RTC 2015). The loads used for both PVs were 8 Ohm independent resistive Loads and the TEG having a 1 Ohm resistive Load. The irradiance levels were gradually changed in 8 steps from 300 to 800 W/m2, which were the available steps in the SEC 1100 Model. The Output of the Normal system provides an output without the sunlight splitting, while the Hybrid System performance is after the sunlight splitting.
Schematic of the measurement sensor connections to the setups
The first setup results
To verify the simulation results first laboratory test have been done using the same 15 × 15 mm2 TEG [Quickohm Model QC 31-1.0-3.9 M ( 2015)] under the absorber plate. Each absorber material is attached to a 250 µm Aluminum sheet. These absorber plates are interfaced with the TEG using a very thin layer (less than 1 mm) of Arctic Silver 5 thermal conductive paste which a Thermal conductivity of 8.9 W/mK (Product Page: Arctic Silver 2015). The measurements are performed in the solar simulator setup. The spectrum shape is similar to the AM1.5 standard and can be adjusted in its flux level from 0.12 to 1.1 suns. The output power of the TEG's is measured via the voltage over a reference resistor of 1 Ω. The cold side is cooled using a radiator cool tower system. The temperatures of the cooler plate and the absorber plate are measured with PT100 thin film thermometers. The voltages, as well as the two resistances, are measured with a Keithley 2700/7700 multi-meter. The obtained data are plotted in Fig. 7 over the radiation flux of the solar simulator. Both Fig. 7a, b show that the temperature difference between two surfaces of the absorbers "Metal Velvet™" and "Tinox® energy Al". The power output of the TEG(s) is proportional to the level of solar irradiations falling on the absorber's surface.
a Temperature of the absorber made from the "Metal Velvet™" and the cooler plate and the output power of the TEG plotted over the radiation flux of the solar simulator. b Temperature of the absorber made from the "Tinox® energy Al" and the cooler plate and the output power of the TEG plotted over the radiation flux of the solar simulato
As revealed in Fig. 4, the absorbance of "Metal Velvet™" keeps at 100 % corresponding to any values of wavelength from 0 to 10,000 nm. The maximum temperature difference was around 18 K under the solar radiation level at 1.1 suns as shown in Fig. 7a. The corresponding power output at 1.1 suns was about 32 mW. However, due to the selective absorbance characteristic of "Tinox® energy Al", the absorber greatly reduced the radiation emission heat loss and sustained a higher temperature between the hot and cold surface. As shown in Fig. 7a, b, the power output and the temperature difference between the two sides of "Tinox® energy Al" absorber-TEG assembly were always higher than the "Metal Velvet™" absorber-TEG assembly under different solar irradiation conditions of 500, 700 and 1000 W/m2.
In order to achieve a higher temperature difference in the TEG, apart from changing the absorbance characteristic, another measure aimed to reduce the convective heat loss on the surface of the absorber by covering the absorber surface with the honeycomb was investigated. The experimental setup with honeycomb cover is shown in Fig. 8. At first, two different material absorbers had been cut into 7.5 × 7.5 cm2 pieces and had been placed on a thermally insulating Styropor block inside the solar simulator as shown in Fig. 9. The temperature of the absorber was measured with PT 100 sensor glued to the backside of the absorber plate covered inside the Styropor block. In the first experiment, the setups were covered with honey-comb. The purpose experiment only intends to compare the performance two different absorber materials.
Honeycomb structure on top of the absorber to reduce convection losses
Setup to measure the different absorber materials in the solar simulator
The two absorbers being compared in this part were "KG-1" and "Tinox". The thickness of the heat absorber glass KG-1 was 3 mm. The thermal mass of "KG-1" is 40 times higher than the non-transparent absorber "Tinox" which was deposited on a 0.2 mm aluminum foil. Owing to the high thermal mass, the response time to a radiation change of "KG-1" absorber was much higher.
Figure 10 shows the temperature profiles of the "KG-1" absorber and the "Tinox" absorber under the solar irradiation of 0.5, 0.8 and 1.1 suns against time. It is obvious to note that the temperature of the "Tinox" absorber was always much higher than the "KG-1" glass absorber. Because of the high thermal mass, the response time for a radiation change and the maximum temperature reached were low.
Temperature of the KG-1 glass (red) and the Tinox absorber (black) as a function of time
Finally, the achieved temperatures with and without the honey-comb structure of three types of absorbers are shown in Fig. 11. The use of the honey-comb-structure leads to an increased temperature as the convective heat loss is reduced. Again, it is clear that minimizing the convective heat loss from the absorber surface enables to maximize the absorber's temperature and potentially increases the electricity generation by the thermoelectric effect.
Maximum reached temperatures of the different absorber materials with and without the mounted honeycomb structure
The second setup results
The solar simulator was set to provide irradiance between 200 and 800 W/m2, as shown in Fig. 12. Initially it is aimed to compare only the outputs the Normal (conventional) PV and the PV with the split mirror using two benches on the same time. It was noted that the PV with the splitted spectrum performed better than the Normal PV at low irradiance levels up to 700 W/m2. Beyond this level the Normal PV produced more output than the PV with the split mirror see Fig. 13a. The power difference between the splitted spectrum PV and the full spectrum PV exceeded the 40 % at low irradiance as shown in Fig. 13b. The PVs temperatures during the test are shown in Fig. 14, which clearly shows that the split spectrum PV was cooler than the full spectrum PV at all times.
The different irradiance values on floor level used in the testing
Comparison of the power outputs of the PV panels only, in the normal (full spectrum) and hybrid (split spectrum) systems: a PV power outputs; b performance difference in percentage
The temperatures of both the PVs during the testing periods
The performance comparison of the hybrid system (PV +TEG) and the Normal PV only system provided clear information on the difference in power that can be produced if the full spectrum of the sunlight is harnessed as shown in Fig. 15. The advantage of the power output of the hybrid system varied with the irradiance levels due to the factors mentioned above as well as the temperature difference between the TEG surfaces. The TEG power output is slightly better at higher irradiance levels; an average of around 10 % of the PV power output throughout the test with a constant ambient temperature of 50 °C. The efficiency curve of the TEG as calculated from the equations stated above along with information from the tests (Product Page: Hi-Z 2015) is provided in Fig. 16. The low TEG module efficiency is due to the lower temperature and heat absorbed by the TEG.
Power output comparison of the normal and hybrid systems
Comparison between Carnot efficiency and calculated TEG efficiency
The hybrid system (PV + TEG) performed better than the Normal PV throughout the test period and the maximum difference achieved was nearly double that of the Normal (full spectrum) PV as shown in Fig. 17. The difference between the power outputs increased at lower irradiance levels (nearly 80 % difference at 300 W/m2) however; the difference reduced as the irradiance levels increased, (nearly around 5 % at 800 W/m2) Further studies based on change of the ambient temperature (instead of using the same ambient temperature at all irradiance levels) can lead to further information for controlling or tuning the system performance. This data can also be used to predict the performance of the hybrid system at different ambient climate conditions.
Advantage of hybrid system performance over the normal system
This study investigated the performance of a photovoltaic (PV) and thermoelectric generator (TEG) assembly by changing its material constitution and design features. The TEG is anticipated to be integrated with PV modules to form a hybrid photovoltaic along with a sunbeam splitter to increase the overall conversion efficiency from solar irradiance to electricity.
The thermoelectric conversion efficiency is proportional to the temperature difference between the absorber's hot and cold surfaces; however, the PV efficiency reduces with the increase of its temperature. The methods used to enhance the hybrid system performance were proposed. Their corresponding experiments were performed and the initial results were presented. Conclusively, proper selections of selective absorbance materials of the absorber are contributive to the thermoelectric generation. Alleviation of the convective heat loss from the surface of the absorber results in substantial positive impact to a TEG. The PV showed a better overall performance with the beam splitter. The proposed concepts and the positive experimental results provide useful information and reference for the further development of a hybrid PV-TE system for field testings.
Baranowski, L. L., Snyder, G. J., & Toberer, E. S. (2012). Concentrated solar thermoelectric generators. Energy and Environmental Science, 5(10), 9055–9067.
Chávez Urbiola, E., Vorobiev, Y. (2013). Investigation of solar hybrid electric/thermal system with radiation concentrator and thermoelectric generator. International Journal of Photoenergy.
Datasheet: Cold Mirror. (2015). http://www.opticsbalzers.com. Accessed June 2015.
Datasheet: INA219 sensor. (2015). http://www.adafruit.com/datasheets/ina219.pdf. Accessed Sep 2015.
Datasheet: LM35 sensor. (2015). http://www.ti.com/lit/ds/symlink/lm35.pdf. Accessed Sep 2015.
Datasheet: RTC. (2015). http://datasheets.maximintegrated.com/en/ds/DS1307.pdf.
Datasheet: Solare Absorberbeschichtungen. (2015). http://www.almecosolar.com. Accessed June 2015.
Eswaramoorthy, M., & Shanmugam, S. (2013). Energy sources, Part A: recovery, utilization, and environmental effects. Energy Sourc, 35, 487.
Jaegle, M., Bartel, M., Ebling, D., Jacquot, A., & Böttner, H. (2008). Anisotropy and inhomogeneity measurement of the transport properties of spark plasma sintered thermoelectric materials, in European Thermoelectric Conference Paris.
Kalogirou, S. A. (2013). Solar thermoelectric power generation in cyprus: selection of the best system. Renewable Energy, 49, 278–281.
Kraemer, D., et al. (2011). High-performance flat-panel solar thermoelectric generators with high thermal concentration. Nature Materials, 10, 532.
Leon, M. T. D., Chong, H., & Kraft, M. (2012). Procedia Engineering, 47, 76.
Lertsatitthanakorn, C., Jamradloedluk, J., & Rungsiyopas, M. (2013a). Thermal modeling of a hybrid thermoelectric solar collector with a compound parabolic concentrator. Journal of Electronic Materials, 42, 2119.
Lertsatitthanakorn, C., Jamradloedluk, J., Rungsiyopas, M., Therdyothin, A., & Soponronnarit, S. (2013b). Performance analysis of a thermoelectric solar collector integrated with a heat pump. Journal of Electronic Materials, 42, 2320.
Lippong, T., Singh, B., Date, A., Akbarzadeh, A. (2012). 2012 IEEE International Conference in Power and Energy (PECon), p. 105.
McEnaney, K., Kraemer, D., Ren, Z. F., & Chen, G. (2011). Modeling of concentrating solar thermoelectric generators. Journal of Applied Physics, 110, 6.
Meir, S., Stephanos, C., Geballe, T. H., & Mannhart, J. (2013). Highly-efficient thermoelectronic conversion of solar energy and heat into electric power. Journal of Renewable and Sustainable Energy, 5, 043127.
Mizoshiri, M., Mikami, M., & Ozaki, K. (2012). Thermal-photovoltaic hybrid solar generator using thin-film thermoelectric modules. Japanese Journal of Applied Physics, 51, 06fl07.
Ogbonnaya, E., Gunasekaran, A., & Weiss, L. (2013). Microsystem technologies-micro-and nanosystems-information storage and processing systems, 19, 995.
Product Page: Aquaduct. (2015). http://shop.aquacomputer.de/product_info.php?products_id=3029. Accessed Sep 2015.
Product Page: Arctic Silver. (2015). http://www.arcticsilver.com/tc.htm. Accessed Sep 2015.
Product Page: Atlas SEC 1100. (2015). http://atlas-mts.com/products/product-detail/pid/242/. Accessed September 2015.
Product Page: Hi-Z 2. (2015). http://www.hi-z.com/uploads/2/3/0/9/23090410/hz-2.pdf.
Quick cool shop. (2015). http://www.quick-cool-shop.de/. Accessed Sep 2015.
Seebeck, T.J. (1895). Magnetische Polarisation der Metalle und Erze durch Temperaturdifferenz. W. Engelmann, Leipzig, Ostwalds Klassiker der exakten Wissenschaften Nr 70.
Tritt, T. M., Böttner, H., & Chen, L. (2008). Thermoelectrics: direct solar thermal energy conversion. MRS Bulletin, 33, 366–368. doi:10.1557/mrs2008.73.
van Sark, W. (2011). Feasibility of photovoltaic—thermoelectric hybrid modules. Applied Energy, 88, 2785.
Wang, N., Han, L., He, H. C., Park, N. H., & Koumoto, K. (2011). A high-performance spectrally-selective solar absorber based on a yttria-stabilized zirconia cermet with high-temperature stability. Energy and Environmental Science, 4, 3676.
Website. (2015). www.acktar.com. Accessed June 2015.
Weinstein, L. A., McEnaney, K., & Chen, G. (2013). Modeling of thin-film solar thermoelectric generators. Journal of Applied Physics, 113, 164504.
Yang, Y., Zhang, H. L., Lin, Z. H., Liu, Y., Chen, J., Lin, Z. Y., et al. (2013). Energy and Environmental Science, 6, 2429.
Zhang, M., Miao, L., Kang, Y. P., Tanemura, S., Fisher, C. A. J., Xu, G., et al. (2013). Efficient, low-cost solar thermoelectric cogenerators comprising evacuated tubular solar collectors and thermoelectric modules. Applied Energy, 109, 51.
The authors would like to acknowledge the Qatar National Research Fund for funding the presented work in the NPRP: 5-363-069.
Gulf Organisation for Research and Development, QSTP, Doha, Qatar
Esam Elsarrag
, Nibul Roshan
& Yousef Alhorr
Department of Energy Systems, Fraunhofer IPM, Freiburg, Germany
Hans Pernau
, Jana Heuer
& Kilian Bartholomé
Search for Esam Elsarrag in:
Search for Hans Pernau in:
Search for Jana Heuer in:
Search for Nibul Roshan in:
Search for Yousef Alhorr in:
Search for Kilian Bartholomé in:
Correspondence to Esam Elsarrag.
Elsarrag, E., Pernau, H., Heuer, J. et al. Spectrum splitting for efficient utilization of solar radiation: a novel photovoltaic–thermoelectric power generation system. Renewables 2, 16 (2015) doi:10.1186/s40807-015-0016-y | CommonCrawl |
Home Journals EJEE Optimal Feeder Routing and DG Placement Using Kruskal's Algorithm
Optimal Feeder Routing and DG Placement Using Kruskal's Algorithm
Gholam-Reza Kamyab
Department of Electrical Engineering, Gonabad Branch, Islamic Azad University, Gonabad 9691664791, Iran
[email protected]
In this paper, the optimal feeder routing along with optimal distributed generator placement is formulated as an optimization problem. In this problem, the total cost of capital recovery, supply interruption and energy losses are minimized. Also, line loading capacity and bus voltage constraints are applied. By proposing a novel method to code the solutions of the optimization problem with the facilitation obtained from the utilization of the Kruskal's algorithm, it is guaranteed that graphs of all solutions would always be spanning trees. The main result of the implementation of this method within meta-heuristic algorithms is to limit the search space to radial networks leading to snap quicker answers with higher degree of optimality. A distribution network with a 24 load points and 42 candidate branches is used as a baseline to indicate the effectiveness of the proposed method which was tested using three meta-heuristic algorithms including genetic algorithm, particle swarm optimization and simulated annealing.
electrical distribution network planning, distribution feeder routing, distributed generators
The feeder routing is one of the main parts of distribution system planning, aiming at determining number of feeders and their routes to connect demand locations with substations undertaking the satisfaction of technical and physical constraints in a way that the demand is met at minimum cost. The growth of peak demand, low reliability, and high-power losses are major problems of distribution networks. To deal with these problems, the use of distributed generators (DGs) in distribution networks has considerably grown to satisfy the needs for providing load locally, reduce the peak demand of distribution network, reduce power losses, increase reliability, and improve voltage profile. At the presence of distributed generators, feeder routing would have more significant role because DGs have great effect on determining the optimal route. In this paper, feeder routing and DG placement issues are considered simultaneously.
Because distribution feeder routing is a large-scale non-convex problem which involves many variables and constraints, the preferred solution mostly contains meta-heuristic optimization approaches rather than mathematical techniques. Since distribution networks are generally used radially, one of the main constraints of the problem of distribution feeder routing is the radiality of the feeding paths of all load points. This is a hard limit discrete constraint. It is therefore recommended not to use the penalty concept at optimization process.
Generally, the problem of optimal routing of distribution feeders, independently or in combination with other issues of the distribution network planning problem, such as determining the location and optimal size of substations, has been considered in a variety of studies, some of which are referred to hereafter. The combination of the steepest descent approach and the simulated annealing technique is used for optimal planning of radial distribution networks, taking the uncertainties of the inputs and the various models of determining the interruption costs into account [1]. In this paper, only the connectivity of network configuration is checked using a network connectivity matrix. A stochastic model for the expansion planning of an active distribution network comprising shared electric vehicle charging stations, solar based distributed generations, and battery energy storage systems is presented [2]. A mixed integer linear programming model is presented for short-term expansion planning of power distribution systems [3, 4]. The model is able to solve the problems of optimal allocation of voltage regulators and capacitor banks, optimal reconductoring of distribution networks and determining optimal tap position of distribution transformers. A practical methodology based on georeferenced data for planning a resilient underground distribution network is presented [5]. In this paper, a modified-prim algorithm is used to determine optimal location of distribution transformers and to find the minimal path of the medium voltage network. A multi-objective joint planning model for active distribution network planning is presented [6]. In this paper, using the multi-objective natural aggregation algorithm, the location and size of the electric vehicle charging stations, renewable energy sources, battery energy storage system, and distribution network expansion schemes are determined. By using an improved harmony search algorithm, optimal location and size of distribution substations and feeders are investigated in the presence of distributed generators [7]. In this paper, in order to keep the radial structure of distribution network, the simultaneous satisfaction of the two constraints is evaluated: first, the determinant of the branch-node matrix must be zero, and second, the number of branches should be less than the number of nodes by one. An adaptive genetic algorithm is applied to determine the optimal site and size of sub-transmission substations and renewable and non-renewable distributed generations associated with optimal feeder routing [8]. In this paper, a method is introduced based on the rank of the Laplacian matrix of bus incidence matrix for checking the radiality of networks with a desired number of HV/MV substations. A stochastic optimization algorithm is provided to find the optimal feeder routing considering the stochastic variations of electric vehicle charging stations as well as photovoltaic and wind distributed generators [9]. In this paper, the radial structure of feeders is not considered. The simulated annealing algorithm is used for optimal planning of new urban distribution networks based on the selection of the best subset of paths providing back-feed from the entire path set generated for the available cable routes [10]. In this paper, the interconnecting and/or ring (not radial) feeders are searched. 
Using genetic algorithm, the planning of a hybrid AC/DC distribution system involves determining the optimal location and size of the AC/DC distribution substations, as well as the length and capacity and path of the AC/DC feeders on both low and medium voltage sides [11]. In this paper, the Minimum Spanning Tree method is used for routing feeders on both the MV and LV sides of the distribution system. A biogeography-based optimization is employed to find the optimal location and rating of distribution transformers and substations, as well as the type and route of medium and low voltage feeders based on uniform or non-uniform load density [12]. The Imperialist Competitive Algorithm (ICA) is used to find an optimal route of medium-voltage (MV) feeders at the presence of load forecasting uncertainties in multistage mode [13]. The presented solution uses two features of a tree in a graph to check the radial structure of the network in any iteration of ICA. A combined methodology [14] implemented by the particle swarm optimization (PSO) technique for the distribution network expansion planning, considering DGs in the presence of load and price uncertainties under electricity market environment has been presented; however they have ignored any expansion plan that does not have a radial structure. A multistage expansion planning framework [15] has been proposed to find optimal sizing, siting and timing of HV substation and medium voltage feeders' routes using an imperialist competitive algorithm (ICA) with an efficient coding. In their presented method, during the optimization process to maintain the radial structure of a network, firstly, the vectors (countries) of the ICA are manipulated so that the number of their "1" do not change, and secondly, by executing a subroutine only solutions that do not have a loop are accepted, and the other solutions are removed from the simulation process. A graph-theoretic [16] based feeder routing of the given power distribution system is observed and the impact of DG integration on the feeder routing have been proposed. In their approach firstly, using the proposed methodology, a set of near optimal solutions has been originated and then an optimal solution from the set of near optimal solutions is selected by running the modified load flow program for each of the near optimal solutions. The particle swarm optimization technique [17] is used to determine the optimal location and size of MV/LV distribution substations, and a modified-Prim algorithm is used to find the optimum feeder routing of LV and MV networks. A multi-objective planning algorithm [18] using dynamic programming is suggested to determine the optimal feeder routes and branch conductor sizes with simultaneous optimization of cost and reliability. Their proposed method guarantees that the radiality constraint is never violated since the network nodes are connected one by one using dynamic programming; however, because the dynamic programming suffers heavily from the "curse of dimensionality", it limits its application to small networks. A modified bacterial foraging technique [19] for optimal feeder routing in radial distribution system planning has been used to provide a solution rapidly with a better probability of achieving a global optimal solution. A direct search technique [20, 21] is applied for optimum feeder routing in radial distribution system. 
In these papers, the concept of principle of optimality theorem is effectively used to make the direct method more computationally efficient, reducing total numbers of radial paths. A new technique [22] employing discrete particle swarm optimization (DPSO) method is presented to find optimally distribution transformer and substation locations and ratings, as well as, the route and type of Medium Voltage (MV) and Low Voltage (LV) feeders. An improved genetic algorithm [23] is applied to determine optimal sizing and locating of the high and medium voltage substations, as well as medium voltage feeders routing. They use a subroutine to check the loop conditions in the network at any iteration according to crossover and mutation processes. Optimal planning of radial distribution network [24] is done by employing simulated annealing technique and the steepest descent approach is used to generate the initial solution for the optimization procedure. They check the graph connectivity of new solutions by evaluating a network connectivity matrix. The ant colony system algorithm (ACS) [25] is adapted to find the solution of the optimal planning problem of primary distribution circuits. They enforce the radial characteristic of the network by a proposed branch selection approach.
The main challenge of the routing of distribution feeders is that the solution (network configuration) should be radial. On the other hand, using the Kruskal's algorithm, we can find the minimum spanning tree for a connected weighted graph. Therefore, in this paper, first a coding method for each solution is proposed. Using this coding together with the Kruskal's algorithm, we can limit the optimal solution search space to no more than the radial (tree) distribution solutions. Then, the capability of the proposed coding is examined with its application in the implementation of the three meta-heuristic algorithms (GA, PSO and SA) to devise a comprehensive solution method for the feeder routing and DG placement problems altogether.
At the rest of the paper, first, in section 2, the problem is formulated. Then, in the third section, problem-solving algorithms are introduced. In Section 4, the proposed method for coding the solution of the problem and in Section 5, its fitness function is presented. Section 6, contains the results of implementing the proposed method on a distribution system. Finally, in the seventh section, the paper ends with expressing the conclusion of the study.
2. Problem Formulation
In this paper, optimal feeder routing along with DG placement is defined as an optimization problem in which the objective function is the minimized total cost which satisfies the specified constraints. In this section, the objective function and the constraints and the modeling of DGs are introduced.
2.1 Objective function
Eq. (1) introduces the objective function of the optimization problem [21, 24]:
$c_{t}=c_{c}+c_{i}+c_{i}$ (1)
where, $C_{t}, C_{C}, C_{i}$ and $C_{l}$ are the total annual cost, capital recovery fixed cost, the cost of supply interruption and the cost of energy losses, respectively.
The capital recovery cost can be calculated as [21, 24]:
$c_{c}=g \sum_{k \in M} c_{k}$ (2)
where, $g$ is the capital recovery rate of the fixed cost and $C_{k}$ is the cost of the branch $k$ of the main feeder. It should be noted that costs of branches originating from the source substation include both the lines and the corresponding substation costs. M stands for the set of branches of a network configuration that has been investigated.
Considering the facts that in radial networks there is no alternative supply route and the outage of a branch interrupts the delivery to all consumers supplied through it, the cost of supply interruption can be obtained by Eq. (3) [21, 24]:
$c_{i}=c_{i p} \alpha d \sum_{k \in M} \lambda_{k} R e\left\{I_{k}\right\} \sqrt{3} U_{r}$ (3)
where, $c_{i p}, \alpha, d, \lambda_{k}, I_{k} \operatorname{and} U_{r}$ indicate the cost per unit of energy not delivered, the load factor, the repair duration, the branch failure rate, the branch current at peak load and the network rated voltage, respectively.
The cost of energy losses is calculated by Eq. (4):
$C_{l}=\mathbf{8 7 6 0 c}_{l p} \boldsymbol{\beta} \sum_{k \in M} \boldsymbol{r}_{k}\left|\boldsymbol{I}_{k}\right|^{2}$ (4)
where, $c_{l p}, r_{k}$ and $I_{k}$ are the cost per unit of energy lost (the cost of one watt-hour of energy loss), the branch resistance and the branch current at peak load, respectively.
The coefficient β is the loss factor defined by (5) in terms of the load factor ($\alpha$):
$\beta=0.15 \alpha+0.85 \alpha^{2}$ (5)
2.2 Constraints
The constraints to be satisfied are:
i) Loading capacity constraints: The current which passes through a branch of the network should be within its thermal capacity limit:
$\left|I_{k}\right| \leq\left|I_{k \max }\right| \cdots \cdots \cdots \cdot \forall \cdot k \in M$ (6)
where, $\left|I_{k}\right|$ is the current magnitude of the branch k and $\left|I_{k \max }\right|$ is the upper bound of $\left|I_{k}\right|$. The current magnitudes of the network branches are calculated by performing load flow.
ii) Bus voltage constraints: The voltage of a bus must be within its allowable limits:
$V_{i m i n} \leq\left|V_{i}\right| \leq V_{i m a x}$ (7)
where, $\cdot\left|V_{i}\right|$ is the voltage magnitude of bus i and $V_{i m i n}$ and $V_{i m a x}$ are the upper and lower bounds of $\left[V_{i}\right]$ respectively. The voltage magnitudes of the network buses are calculated by performing load flow.
iii) Radiality constraint: Since distribution networks are operated radially, one of the main constraints of the problem is that the network structure should be radial. In other words, the graph of the network should be a spanning tree, which is a tree that connects all nodes (buses) and has no loops. This is a hard limit discrete constraint so the use of penalty concept within the optimization process is not recommended [15].
2.3 Modeling of distributed generators
The distributed generator unit in each bus is modeled as negative PQ load. Because the distributed generator injects active power into a network, the active power of the PQ load is considered to be a constant and a negative value. Since it is assumed here that the distributed generator operates with a unity power factor, the reactive power of the PQ load is considered as constant value of zero.
3. The Problem Solving Method
Given that the problem of the optimal feeder routing along with DG placement is a complex and combinatorial problem, in this paper a meta-heuristic optimization approach is used to solve it. Therefore, the use of three meta-heuristic algorithms including genetic algorithm (GA), the particle swarm optimization (PSO) algorithm, and the simulated annealing (SA) algorithm are tested. Before applying these algorithms, a possible solution of the problem must first be encoded as a string (vector) of numbers. Also a fitness function that expresses the degree of achieving to the main objective of the problem and the level of satisfying the constraints must be defined to evaluate each solution. After that, the implementation of the meta-heuristic algorithm begins by generating an initial solution or a population of initial solutions and continues by iterating an iterative process. At each iteration round, by applying the particular operators of the corresponding algorithm to the current population/solution, a new population/solution is generated which is an improvement regarding the previous one in terms of the fitness function. The iterative process of the algorithm is stopped when a predetermined stop condition is met. Then the best solution by far is introduced as the final optimal solution.
The process of generating new population/solution depends on the algorithm used. In genetic algorithm, the population of new solutions is generated by applying selection, crossover and mutation operators on the current population. In PSO algorithm, the new population of particles (solutions) is generated by updating the velocity and position values of the current population particles. The degree by which any particle will be updated depends on the best position obtained by the particle itself and the best global position obtained by all the particles at the previous iterations [14]. In SA algorithm, new solution at any iteration is generated in the neighborhood of the current solution by a certain neighborhood structure. Better solutions are always accepted, and the new solutions which are not better are also accepted with some probabilities to avoid falling into local optimal solution.
In the optimal distribution feeder routing, each solution is a configuration of the network, the traditional coding method for a configuration is to allocate a bit to each candidate branch (line) of the under-study network. Having "one" at any of corresponding bits indicates that the corresponding branch is a member of the corresponding network configuration while "Zero" means that the corresponding branch does not exist there.
To overcome the distribution feeder routing problem the main challenge is that the optimal solution (network configuration) should be radial, while if the traditional coding method is used to code the solutions (network configurations), most of the search space is formed by non-radial configurations (unacceptable solutions). In this paper, however, using the Kruskal's algorithm, a method is proposed for coding the distribution network configuration, which limits the search space to radial configurations only.
4. Coding Solution
The solution to the problem should specify the route of distribution feeders and the location of the DGs. So, each solution is coded by a vector X as follows:
$X^{T}=\left[\begin{array}{ll}w^{T} & D^{T}\end{array}\right]=[\underbrace{w_{1} \quad w_{2} \quad \cdots \quad w_{n b r}}_{\text {feeder routes }} / \underbrace{d_{1} \quad d_{2} \quad \ldots \quad d_{n D G}}_{D G \text { loactions }}]$ (8)
The vector X consists of two parts. The first part which contains the vector $w^{T}$, encodes the route of the distribution network, and the second part, which includes the vector $D^{T}$, encodes the location of the DGs. The element $w_{i}$, which is a number within the interval of [0,1], represents the weight of the candidate branch i within the execution of Kruskal's algorithm. The purpose of this research is to code routes so that using Kruskal's algorithm in graph theory, the search operation is performed only in the space of radial paths. In this case, high-quality optimal solutions are easily available. The element $d_{i}$ represents the bus number which is used to be a candidate to install i-th DG, and therefore it is an integer number between one and the number of candidate buses.
Using Kruskal's algorithm, a radial solution corresponding to the values of the elements of the vector $w^{T}$ is generated. Kruskal's algorithm is a greedy algorithm in graph theory, which is used to find a minimum spanning tree for a connected edge-weighted graph. The minimum spanning tree (MST) of a connected, edge-weighted graph is a subset of the edges of the graph that connects all the vertices together, without any cycles and with the minimized total edge weight. The process of using the Kruskal's algorithm to generate a radial solution corresponding to the weight vector w is as follows:
1- All candidate branches are arranged in an ascending order by their weights to form an ordered set, called set A.
2- The set of branches of the radial network corresponding to the weight vector w, called set B, is initially set to empty.
3- Moving forward along set A, for each member, if the union of branches in set B and the current member of A do not create any cycles, then the selected branch is added to the set B.
4- The branches of the resultant set B provide the radial solution corresponding to the vector w.
To clarify the issue, the implementation of Kruskal's algorithm is illustrated with a simple example.
Example: Consider the simple network shown in stage 1 of Figure 1, which has four nodes and five candidate branches with numbers 1-5. Suppose the weights of the vector encoding the routes of this network are $W^{T}=$[0.7 0.9 0.4 0.1 0.6]. The steps needed to find the corresponding network with the vector W are as follows:
1- Because the order of ascending weights of candidate branches are {$w_{4}=0.1, w_{3}=0.4, w_{5}=0.6, w_{1}=0.7, w_{2}=0.9${, the result of sorting the candidate branches will be the set A = {4, 3, 5, 1, 2}.
2- We initially set the set B to empty. So B = {}. B is the set of branches of the radial network corresponding to the weight vector W.
3- We select the first branch of set A, which is branch 4. If this branch is added to set B, the stage 2 of Figure 1 would be the outcome that does not produce any cycles. So we add this branch to set B, and so B = {4}.
4- We select the second branch of set A, which is branch 3. If branch 3 is added to the branches of set B, the stage 3 of Figure 1 would be the result that does not produce any cycles. So we add branch 3 to set B, and so B = {4, 3}.
5- We select the third member of set A, which is branch 5. If branch 5 is added to set B, the stage 4 of Figure 1 would be the upshot which generates a cycle. Therefore, we do not add branch 5 to set B.
6- We select the fourth member of set A, which is branch 1. If it is added to set B, the stage 5 Figure 1 would be the outcome that does not create any cycles. So we add branch 1 to set B, which means B = {4, 3, 1}.
7- We select the fifth member of set A, which is branch 2. Adding this branch to set B will lead us to the stage 6 of Figure 1 which generates a cycle. Thus, we do not add branch 5 to set B.
8- Now, the final radial solution corresponding to route W is set B = {4, 3, 1}, which is shown in the stage 7 of Figure 1.
Figure 1. Graphs of the stages to reach a radial feeding path for a hypothetical example
5. Fitness Function and Generation of Possible Solution
In the meta-heuristic algorithms, the quality of each solution is evaluated by a fitness function. The fitness function must be defined in a way that it can show how much each solution can satisfy the main goal and the constraints of the problem. In the current subject, the goal is to minimize the cost function defined in Eq. (1). The problem constraints are the load capacity constraints and the bus voltage constraints defined in relations 6 and 7, respectively. Therefore, the fitness function for each solution is defined in relation 9, which must be minimized by a meta-heuristic algorithm.
$f(X)=\left\{\begin{array}{ccc}C_{t}(X) & \text { if } & \text { no limit violation } \\ C_{t}(X)+\gamma & \text { if } & \text { at least one limit violation }\end{array}\right.$ (9)
where, X denotes a solution, that is, the path of the distribution network feeders and the location of the DGs. f(X) is the fitness function of solution X. $C_{t}(X)$ is sum of the total costs introduced in Eq. (1); γ is a penalty factor (a large number). The "Limits" are the load capacity constraints defined in Eq. (6) or the bus voltage constraints defined in Eq. (7).
The fitness function in Eq. (9) is defined so that if none of the load capacity constraints or the bus voltage constraints are violated, then the value of the fitness function is equal to $C_{t}(X)$. However, if one of these constraints is violated, the value of the fitness function is equal to $C_{t}(X)+\gamma$, which is a large value, and therefore the meta-heuristic algorithm will not choose these types of solutions as the optimal solution.
To calculate the fitness function of each solution X, the following steps must be taken:
1- Determine the radial network corresponding to the part W of the vector X using the Kruskal's algorithm.
2- Apply DG powers in load buses corresponding to the part D of the vector X .
3- Perform the load flow. In this research, since the network is always radial type, the forward-backward load flow, which is special for radial distribution networks and has a high computational speed, is used.
4- Calculate the total cost ($C_{t}(X)$) from Eq. (1) using the load flow results.
5- Check the load capacity and bus voltage constraints according to the load flow results.
6- Determine the value of the fitness function using Eq. (9).
It should be noted that since all solutions are always produced in a way that the corresponding network would be radial, the radiality constraint is automatically provided.
6. Simulation Results
In this section, the effectiveness of the proposed method is studied on a rural 10kV network reported in the study [21, 24]. Figure 2 shows the graph of available network routes. The network has 24 load points (transformers 10 kV/0.4 kV) and 42 available branches for their supply from the substation 35 kV/10.5 kV at node 1. To check the effects of DGs, four DG units of equal capacity are considered with a total DG capacity of 1.33 MVA at unity power factor. The predetermined locations of the DGs according to reference [16] are in nodes 7, 8, 9 and 10. But we obtain the optimal locations of these DGs and compare them with the predetermined locations. The details of consumption at load points, length of graph branches, line data and load data can be obtained in tables I, II and III of reference [24]. Cost and complementary load data have been given in Table 1.
The substation equipment and building capital cost per outgoing line is 75 k$. This amount is added to the costs of all branches directly connected to the source substation. The upper and lower bounds of voltages in all the buses are assumed to be 0.95 to 1.05 per unit respectively. The penalty factor in fitness function (γ) was set to 1×106.
To verify the effectiveness of the proposed method, multiple simulations were performed by three GA, PSO and SA algorithms in Matlab environment, and the comparative results are presented in the remainder of this section. The values of some parameters used for the simulation of the algorithms are given in Table 2. It is mentionable that the population size and number of generations in GA and PSO algorithms and the number of iterations in SA algorithm are chosen large enough making sure that in all three algorithms the optimal solution is obtained after complete convergence.
Figure 2. Graph of available supply routes for the rural 10kV network
Table 1. Cost coefficients and complementary load data
Power factor at all load points
Load factor (α) at all load points
Investment cost per kilometer of each branch (ck)
15000 US$/Km
cost per unit of energy not delivered (cip)
4 US$/KWh
cost per unit of energy lost (clp)
0.1 US$/KWh
the capital recovery rate of the fixed cost (g)
Table 2. The values of some parameters for the implementation of GA, PSO and SA
Population size for GA and PSO
Maximum generations for GA
Maximum generations for PSO
Number of iterations for SA
6.1 Comparison between Kruskal's coding and traditional coding
In the previous sections, the Kruskal's algorithm was proposed to code the distribution network configurations to solve the distribution feeder routing problem. In this section, this proposed coding method is compared with the traditional coding method. Here since we just want to compare the effects of these coding methods on the feeder routing, in this case, we do not consider any distributed generator for the system. The results of solving the distribution feeder routing problem using three algorithms GA, PSO and SA with both the coding methods are given in Table 3; where the best values of the fitness function obtained from the implementation of 10 times simulations for random number generator seeds 0-9 for all three algorithms and both the coding methods are presented.
It is observed carefully in Table 3 that in all the three algorithms, when using traditional coding, the value of the fitness function is very large, due to the fact that one or more of the constraints including bus voltage constraints, loading capacity constraints and/or radial constraint are not satisfied. Therefore, the traditional coding method does not work well for any of the three algorithms. However, when the proposed Kruskal's method is used, all three values of the fitness function are small, which means that the solutions obtained by all the three algorithms satisfy all constraints. Meanwhile, GA and SA methods have reached the same solution which is better than the solution of the PSO algorithm because the amount of their fitness function is lower. So, it can be concluded that while the use of traditional coding method with all the three algorithms has not even been able to achieve a feasible solution, the use of Kruskal's coding method looks very effective to solve the distribution feeder routing problem. Therefore, the Kruskal's coding method is used in the rest of the study.
Table 3. Comparison of routing results to two coding methods with three algorithms
Fitness (×104)
Traditional coding
Kruskal's coding
PSO
6.2 Comparison of the algorithms
Table 4 shows the results obtained from solving the optimal feeder routing and DG placement problem for the following three scenarios and for the above-mentioned 25-bus distribution network with all the three algorithms. In this table, the best value of the fitness function obtained from the implementation of 10 times simulations for the random number generator seed 0 to 9 for each of the three PSO, GA and SA algorithms and with the Kruskal's coding method is given.
Scenario 1) It is assumed that there is no DG and only the feeder routing problem is solved.
Scenario 2) It is assumed that DGs are installed at proposed locations in reference [16], i.e., at nodes 7, 8, 9, and 10, and only the feeder routing problem is solved.
Scenario 3) The distribution feeder routing problem along with the optimal placement of the four DGs is simultaneously solved.
Considering Table 4, the following points can be deduced:
It is observed that all values obtained for the fitness function in Table 4 are not very large, which means that according to Eq. (9), the amount of the penalty factor (γ) does not have any effect on the value of the fitness function. In other words, the value of the cost function is equal to the total cost. Consequently, the solutions obtained by Kruskal's coding at all three scenarios and for all three PSO, GA, and SA algorithms have satisfied all the constraints.
By comparing the results of the second scenario with the third scenario, using all three algorithms, the amount of fitness function obtained at the third scenario is less (better) than the fitness value obtained at the second scenario. Therefore, the DG placement has improved the value of the objective function.
It can be seen that at all the three scenarios, the fitness function values of the solutions obtained by the GA are better (less) than the ones acquired by the PSO, and also the fitness function values of the solutions attained by the SA are better (less) than the ones achieved by the GA. Therefore, at all the three scenarios, the SA algorithm has found the best solutions.
Table 4. Fitness function values of the best solutions obtained for different scenarios and algorithms with Kruskal's coding
Fitness function value
Scenario 1 (Feeder Routing with no DG)
Scenario 2 (Only Feeder Routing with DGs in predefined buses)
Scenario 3 (Feeder Routing with DGs placement)
6.3 Effect of DGs
For further evaluations, the details of the best solutions obtained by the SA algorithm with Kruskal's coding for all the three scenarios are given in Table 5.
It can be seen that at all the three scenarios, the total cost (Ct) is equal to the value of its corresponding fitness function, which means that according to Eq. (9), the amount of the penalty factor has no effect on the value of the fitness function. In other words, the solutions obtained at all the three scenarios have satisfied all the constraints.
By comparing the results of the second and third scenarios with the first scenario, the presence of DGs at the second and third scenarios has led to a significant reduction in the cost of energy losses (Cl) and supply interruption (Ci) at the second and third scenarios compared to the first scenario, such that, the per unit total cost has dropped from 1 per unit at the first scenario to 0.42 per unit at the second scenario and to 0.36 per unit at the third scenario. Therefore, the existence of DGs has been very effective in reducing the cost of energy losses and supply interruption.
By comparing the results of the third scenario with the second, the total cost (Ct) has dropped from 0.42 per unit at the second scenario, where the DGs are at predefined locations (nodes 7, 8, 9, and 10), to 0.36 per unit at the third scenario, where the DGs are in optimal locations, and this cost reduction is mostly due to a reduction in the cost of supply interruption (Ci) at the third scenario, compared to the second one. Therefore, the DG placement has improved the value of the objective function.
As an example, variations in the fitness values of the evaluated solutions as well as optimal solutions during the steps of the implementing the SA at the third scenario are shown in Figure 3 which indicates that although the values of the fitness functions of the evaluated solutions at the initial steps have large fluctuations due to the higher probability of accepting weaker solutions to avoid being trapped by local optimal solutions, but at the next steps, the solutions gradually converge towards a final optimal solution. It is considerable that the scale of the vertical axis of the graph is logarithmic. The optimal solution found at the third scenario is displayed in Figure 4.
Table 5. Details of the best solutions obtained using the SA algorithm and Kruskal's coding at different scenarios
Fitness value
Per unit of total cost (Ct)
Total cost (Ct)
The capital recovery cost (Cc)
the cost of supply interruption (Ci)
The cost of Energy Losses (Cl)
Branches of the optimal network
[1 2 3 5 7 9 12 14 19 21 22 24 26 27 29 30 32 33 34 35 36 37 39 40]
[1 2 3 6 9 13 15 17 19 21 22 24 26 27 29 30 32 33 35 36 37 38 41 42]
DG buses
[7 8 9 10]
[9 13 14 15]
Figure 3. Variations in the amount of fitness function during the steps of the SA algorithm
Figure 4. The obtained optimal solution at the third scenario
In this paper, the problem of the optimal feeder routing along with DG placement was formulated as an optimization problem whose presented solutions by three meta-heuristic algorithms including PSO, GA and SA were examined. Using the Kruskal's algorithm, a method for coding the distribution network configurations was presented by which the search space is restricted to radial configurations of the network. Numerical studies on a 10 kV distribution network with 24 load points and 42 available branches showed that, first, while using a traditional coding method with any of the three algorithms is not even able to achieve a feasible solution, the use of Kruskal's coding method proved to be highly effective in solving the distribution feeder routing problem. Second, the SA algorithm obtains the best solutions in comparison with the PSO and GA algorithms. Third, the presence of DGs massively reduces the cost of supply interruption and the cost of energy losses. Fourth, the DG placement reduces the total cost significantly.
[1] Nahman, J.M., Perić, D.M. (2020). Radial distribution network planning under uncertainty by applying different reliability cost models. International Journal of Electrical Power & Energy Systems, 117: 105655. http://doi.org/10.1016/j.ijepes.2019.105655
[2] Wang, S., Dong, Z.Y., Chen, C., Fan, H., Luo, F. (2019). Expansion planning of active distribution networks with multiple distributed energy resources and EV sharing system. IEEE Transactions on Smart Grid, 11(1): 602-611. http://doi.org/10.1109/TSG.2019.2926572
[3] Resener, M., Haffner, S., Pereira, L.A., Pardalos, P.M., Ramos, M.J. (2019). A comprehensive MILP model for the expansion planning of power distribution systems–Part I: Problem formulation. Electric Power Systems Research, 170: 378-384. http://doi.org/10.1016/j.epsr.2019.01.040
[4] Resener, M., Haffner, S., Pereira, L.A., Pardalos, P.M., Ramos, M.J. (2019). A comprehensive MILP model for the expansion planning of power distribution systems–Part II: Numerical results. Electric Power Systems Research, 170: 317-325. http://doi.org/10.1016/j.epsr.2019.01.036
[5] Valenzuela, A., Inga, E., Simani, S. (2019). Planning of a resilient underground distribution network using georeferenced data. Energies, 12(4): 644-662. http://doi.org/10.3390/en12040644
[6] Wang, S., Luo, F., Dong, Z.Y., Ranzi, G. (2019). Joint planning of active distribution networks considering renewable power uncertainty. International Journal of Electrical Power & Energy Systems, 110: 696-704. http://doi.org/10.1016/j.ijepes.2019.03.034
[7] Rastgou, A., Moshtagh, J., Bahramara, S. (2018). Improved harmony search algorithm for electrical distribution network expansion planning in the presence of distributed generators. Energy, 151: 178-202. http://doi.org/10.1016/j.energy.2018.03.030
[8] Salyani, P., Salehi, J., Gazijahani, F.S. (2018). Chance constrained simultaneous optimization of substations, feeders, renewable and non-renewable distributed generations in distribution network. Electric Power Systems Research, 158: 56-69. http://doi.org/10.1016/j.epsr.2017.12.032
[9] Ahmed, H.M., Eltantawy, A.B., Salama, M.M. (2017). A stochastic-based algorithm for optimal feeder routing of smart distribution systems. In 2017 IEEE 30th Canadian Conference on Electrical and Computer Engineering (CCECE), pp. 1-4. http://doi.org/10.1109/CCECE.2017.7946677
[10] Nahman, J., Perić, D. (2017). Path-set based optimal planning of new urban distribution networks. International Journal of Electrical Power & Energy Systems, 85: 42-49. http://doi.org/10.1016/j.ijepes.2016.08.001
[11] Ghadiri, A., Haghifam, M.R., Larimi, S.M.M. (2017). Comprehensive approach for hybrid AC/DC distribution network planning using genetic algorithm. IET Generation, Transmission & Distribution, 11(16): 3892-3902. http://doi.org/10.1049/iet-gtd.2016.1293
[12] Yosef, M., Sayed, M.M., Youssef, H.K. (2015). Allocation and sizing of distribution transformers and feeders for optimal planning of MV/LV distribution networks using optimal integrated biogeography based optimization method. Electric Power Systems Research, 128: 100-112. http://doi.org/10.1016/j.epsr.2015.06.022
[13] Khatami, H., Ravadanegh, S.N. (2015). Probabilistic optimal robust multistage feeder routing under load forecasting uncertainty. IET Generation, Transmission & Distribution, 9(14): 1977-1987. http://doi.org/10.1049/iet-gtd.2014.1097
[14] Hemmati, R., Hooshmand, R.A., Taheri, N. (2015). Distribution network expansion planning and DG placement in the presence of uncertainties. International Journal of Electrical Power & Energy Systems, 73: 665-673. http://doi.org/10.1016/j.ijepes.2015.05.024
[15] Ravadanegh, S.N., Roshanagh, R.G. (2014). On optimal multistage electric power distribution networks expansion planning. International Journal of Electrical Power & Energy Systems, 54: 487-497. http://doi.org/10.1016/j.ijepes.2013.07.008
[16] Kumar, D., Samantaray, S.R., Joos, G. (2014). A reliability assessment based graph theoretical approach for feeder routing in power distribution networks including distributed generations. International Journal of Electrical Power & Energy Systems, 57: 11-30. http://doi.org/10.1016/j.ijepes.2013.11.039
[17] Hasan, I.J., Gan, C.K., Shamshiri, M., Ab Ghani, M.R., Omar, R. (2014). Optimum feeder routing and distribution substation placement and sizing using PSO and MST. Indian Journal of Science and Technology, 1682-1689.
[18] Ganguly, S., Sahoo, N.C., Das, D. (2013). Multi-objective planning of electrical distribution systems using dynamic programming. International Journal of Electrical Power & Energy Systems, 46: 65-78. http://doi.org/10.1016/j.ijepes.2012.10.030
[19] Singh, S., Ghose, T., Goswami, S.K. (2011). Optimal feeder routing based on the bacterial foraging technique. IEEE Transactions on Power Delivery, 27(1): 70-78. http://doi.org/10.1109/TPWRD.2011.2166567
[20] Samui, A., Samantaray, S.R., Panda, G. (2012). Distribution system planning considering reliable feeder routing. IET Generation, Transmission & Distribution, 6(6): 503-514. http://doi.org/10.1049/iet-gtd.2011.0682
[21] Samui, A., Singh, S., Ghose, T., Samantaray, S.R. (2011). A direct approach to optimal feeder routing for radial distribution system. IEEE Transactions on Power Delivery, 27(1): 253-260. http://doi.org/10.1109/TPWRD.2011.2167522
[22] Ziari, I., Ledwich, G., Ghosh, A. (2011). Optimal integrated planning of MV–LV distribution systems using DPSO. Electric Power Systems Research, 81(10): 1905-1914. http://doi.org/10.1016/j.epsr.2011.05.015
[23] Najafi, S., Hosseinian, S.H., Abedi, M., Vahidnia, A., Abachezadeh, S. (2009). A framework for optimal planning in large distribution networks. IEEE Transactions on Power Systems, 24(2): 1019-1028. http://doi.org/10.1109/TPWRS.2009.2016052
[24] Nahman, J.M., Peric, D.M. (2008). Optimal planning of radial distribution networks by simulated annealing technique. IEEE Transactions on Power Systems, 23(2): 790-795. http://doi.org/10.1109/TPWRS.2008.920047
[25] Gomez, J.F., Khodr, H.M., De Oliveira, P.M., Ocque, L., Yusta, J.M., Villasana, R., Urdaneta, A.J. (2004). Ant colony system algorithm for the planning of primary distribution circuits. IEEE Transactions on Power Systems, 19(2): 996-1004. http://doi.org/10.1109/TPWRS.2004.825867 | CommonCrawl |
MiRNA-disease interaction prediction based on kernel neighborhood similarity and multi-network bidirectional propagation
Volume 12 Supplement 10
Selected articles from the IEEE BIBM International Conference on Bioinformatics & Biomedicine (BIBM) 2018: medical genomics
Yingjun Ma1,
Tingting He2,3,
Leixin Ge4,
Chenhao Zhang2 &
Xingpeng Jiang2,3
BMC Medical Genomics volume 12, Article number: 185 (2019) Cite this article
Studies have shown that miRNAs are functionally associated with the development of many human diseases, but the roles of miRNAs in diseases and their underlying molecular mechanisms have not been fully understood. The research on miRNA-disease interaction has received more and more attention. Compared with the complexity and high cost of biological experiments, computational methods can rapidly and efficiently predict the potential miRNA-disease interaction and can be used as a beneficial supplement to experimental methods.
In this paper, we proposed a novel computational model of kernel neighborhood similarity and multi-network bidirectional propagation (KNMBP) for miRNA-disease interaction prediction, especially for new miRNAs and new diseases. First, we integrated multiple data sources of diseases and miRNAs, respectively, to construct a novel disease semantic similarity network and miRNA functional similarity network. Secondly, based on the modified miRNA-disease interactions, we use the kernel neighborhood similarity algorithm to calculate the disease kernel neighborhood similarity and the miRNA kernel neighborhood similarity. Finally, we utilize bidirectional propagation algorithm to predict the miRNA-disease interaction scores based on the integrated disease similarity network and miRNA similarity network. As a result, the AUC value of 5-fold cross validation for all interactions by KNMBP is 0.93126 based on the commonly used dataset, and the AUC values for all interactions, for all miRNAs, for all disease is 0.93795、0.86363、0.86937 based on another dataset extracted by ourselves, which are higher than other state-of-the-art methods. In addition, our model has good parameter robustness. The case study further demonstrated the predictive performance of the model for novel miRNA-disease interactions.
Our KNMBP algorithm efficiently integrates multiple omics data from miRNAs and diseases to stably and efficiently predict potential miRNA-disease interactions. It is anticipated that KNMBP would be a useful tool in biomedical research.
MicroRNAs (miRNAs) are a category of single-stranded small-non-coding RNAs(~ 22 nt) which play important roles in gene regression via interference in post-transcriptional regulation [1, 2]. In the past decades, microRNAs were found in eukaryotes and viruses besides prokaryotes [3]. Previous research has shown that miRNAs was related to several human diseases like cancer, Alzheimer's disease and Diabetes Mellitus etc. [4,5,6]. miR-375 was found to be significant in the growth and response to metabolic stress of pancreatic islets [7].miR-21 negatively regulated Pdcd4 which can suppress TPA-induced neoplastic transformation [8]. miRNA-200 was detected in the metastasis of gastric adenocarcinoma cells [9]. miR-146a is a tumor suppressor inhibit NF-κB activity related to promotion and suppression of tumor growth [10].
Wang et al. [11] constructed a Directed Acyclic Graph (DAG) to describe a disease based on the MeSH descriptors. Then they calculated the disease semantic similarity by the DAG, and combined with the known miRNA-diseases interaction to construct the miRNA functional similarity, which was also used to preliminarily infer new potential functions or related diseases of miRNAs. Xu et al. [12] proposed a support vector machine (SVM) to predict the interaction between miRNA and tumor, but since the current database rarely provides a list of non–cancer miRNAs, therefore, the lack of negative samples leads to a supervised learning model that is not well suited for large-scale disease-miRNA interaction prediction.
The miRNA-disease interaction prediction problem can be regarded as a classification problem that lacks negative samples. According to this feature, a large number of network-based semi-supervised methods have been proposed, most of which are based on similar miRNAs (diseases) are more likely to interact with the same disease (miRNA). Chen et al. [13] adopted restart random walk (RWRMDA) to predict the potential miRNA-disease interaction, which restarted the known miRNA-disease interaction network, using random walks on miRNA functional similarity network to predict potential miRNA-disease interaction. Since the restart operator of RWRMDA is based on the known miRNA-disease interaction network, this method does not apply to predictions of new diseases that are not associated with any miRNA. The regularized least squares algorithm (RLSMDA) was also proposed by Chen et al. [14] in 2015 to predict miRNA-disease interactions, which uses both the disease semantic similarity and the miRNA functional similarity to calculate miRNA-disease interaction scores, and the weighted linear combination of the two scores was used as the final result. The method combined disease similarity network and miRNA similarity network to predict simultaneously, which improves the prediction accuracy and enhanced the predictive power of the model to some extent. However, the model is highly dependent on parameters, and how to set appropriate parameters is the defect of the model. Subsequently, in 2018, Chen et al. [15] released a Graph Regression model to predict miRNA–disease interactions by using singular value decomposition (SVD) to decompose the interaction matrix, the disease similarity matrix and the miRNA similarity matrix, then using partial least squares (PLS) to perform graph regression in interaction space, miRNA similarity space, and disease similarity space. SVD decomposition and PLS regression can eliminate noise to a certain extent, but it also causes information loss, which leads to the reduction of model accuracy. Recently, Chen et al. proposed two novel models: the hierarchical clustering recommendation algorithm [16] (BNPMDA) and the low rank matrix decomposition [17] (IMCMDA) algorithm to predict potential miRNA–disease interactions. Both models have the advantage of fewer parameters, but the former uses only known miRNA-disease interaction networks for inference, so it cannot predict new miRNAs and new diseases, and the latter leads to a reduction in prediction accuracy due to matrix decomposition. The miRNA functional similarity used in the above algorithms is based on the method of Wang et al. [11], which depends on the known miRNA-disease interactions, so these models cannot predict new miRNAs.
Luo et al. [18] proposed a Kronecker regularized least squares, which calculated miRNA functional similarity based on miRNA-gene interaction network and gene weight network, combined with disease semantic similarity to predict potential miRNA-disease interactions. The model enhances the predictive power of new miRNAs by integrating heterogeneous omics data of miRNAs, but the model is highly dependent on the weight coefficients of different similarity measurements, which greatly affects its promotion and practical application ability. Xiao et al. [19] constructed a graph regularized non-negative matrix factorization method, which decomposes the modified known miRNA-disease interaction network, and uses miRNA functional similarity and disease semantic similarity to construct regularization operators for prediction. The model can predict new miRNAs and new diseases, but more model parameters and stronger parameter dependencies also reduce the performance of the model. Both of these models use information outside the miRNA-disease interaction dataset to construct miRNA functional similarity, which enhances their ability to predict new miRNAs. However, they only use MeSH descriptors to describe disease similarity, resulting in a sparsely diseased network, which limits the predictive performance of the model.
Here, we propose a new framework, kernel neighborhood similarity and multi-network bidirectional propagation (KNMBP), which uses multiple omics data to infer unknown miRNA-disease interactions. KNMBP uses disease-gene interactions, disease-biological process interactions, and disease semantic information to construct a novel disease semantic similarity network, using miRNA-target interactions and gene weight networks to construct a novel miRNA functional similarity network. Different from previous methods, the miRNA functional similarity and disease semantic similarity calculated in this paper does not utilize the known miRNA-disease interaction, but excavates more feature information of miRNA and disease from other latest datasets, which greatly expands our ability to predict new miRNA and disease. The accumulated research [15, 20] shows that the known miRNA-disease interaction network also contains important feature information of miRNA and disease, and the reasonable use of this information can well enhance the prediction ability of the model. In these considerations, based on the modified miRNA-disease interaction, we use the kernel-based neighborhood similarity algorithm to calculate the disease kernel neighborhood similarity and miRNA kernel neighborhood similarity. Finally, based on the integrated miRNA (disease) similarity network, we constructed a bidirectional propagation model to predict potential miRNA-disease interaction scores. The experimental results show that KNMBP not only has a good ability to predict new interactions, new miRNAs and new diseases, but also has the advantage of parameter robustness.
Methods overview
To predict unknown miRNA-disease interactions, we propose a new KNMBP model with five parts, as shown in Fig. 1. First, we calculate miRNA functional similarity and disease semantic similarity by using multiple histological data other than miRNA-disease interaction information (as shown in step 1 of Fig. 1). Second, based on the modified known miRNA-disease interaction network, we use the kernel-based neighborhood similarity model (KSNS) to calculate the disease kernel neighborhood similarity and miRNA kernel neighborhood similarity (as shown in step 2 and step 3 of Fig. 1). Finally, based on the integrated miRNA (disease) similar network calculated by Diffusion Component Analysis (clusDCA), we released a bidirectional propagation algorithm to predict unknown miRNA-disease interaction scores (as shown in step 4 and step 5 in Fig. 1).
The flow diagram of KNMBP model. In Step 1 and Step 2, the red box indicates disease, the grass green triangle indicates the gene, the circle indicates the miRNA, the pentagon indicates the biological process corresponding to the disease, SFm and SSd represent improved miRNA functional similarity and disease semantic similarity, respectively, WKNNP represents a weighted k-neighborhood profile algorithm used to preprocess the interaction matrix. In Step 3, SIm and SId represent disease kernel neighborhood similarity and miRNA kernel neighborhood similarity, respectively. In Step 4, clusDCA represents the network fusion algorithm based on diffusion component analysis
Dataset collection
In order to fairly compare the performance of the model, we used two benchmark datasets to conduct experiments.
For benchmark dataset I, we utilized the dataset of miRNA-disease interaction prediction established by Chen et al. [16, 17]. The dataset I consists of three parts: First, 5430 interactions between 383 diseases and 495 miRNAs were extracted from HMDD v2.0 [21]. Second, based on the Medical Subject Headings (MeSH) descriptors in the U.S. National Library of Medicine, two semantic similarity matrices of diseases were established by wang et al. [11] and Xuan et al. [22], respectively. Third, the functional similarity matrix of miRNA was established by Lu et al. [23]. All these data can be downloaded from https://github.com/IMCMDAsourcecode/IMCMDA. However, Dataset I is based on the old version (HMDD v2.0), and it also has the disadvantage that the disease semantic similarity is very sparse and the miRNA functional similarity depends on the known miRNA-disease interaction. Therefore, we extracted information about miRNAs and diseases from several latest databases and built benchmark dataset II. We describe the establishment of dataset II from three aspects.
First, extract information about the disease. The Comparative Toxicogenomics Database (CTD) is an important database of disease research that provides a wealth of interactive information between disease and chemistry, genetic products, phenotypes and the environment [24]. Disease items in CTD are described by MeSH ID, which is a hierarchical vocabulary that provides a strict classification system for studying the relationships among various diseases, and the relationships between any diseases can be illustrated by a directed acyclic graph (DAG). For example, the MeSH ID of the disease "Deletion Syndrome (Partial)" was "MesH:C538288" in CTD, whose parent diseases are "Chromosome Deletion" and "Chromosome Disorders", and the corresponding MesH ID were "MesH:D002872" and "MesH: D025063", respectively. In order to get a detailed description of the disease, we download 12,988 diseases, including the names of diseases, multiple ID representations of the diseases, and information about their parent nodes. Furthermore, we downloaded gene-disease interactions, including 25,114,553 interactions between 46,045 genes and 7163 diseases. At the same time, disease-GO biological process interactions, including 1,727,119 interactions between 13,126 GOs and 7116 diseases were also downloaded.
Second, extract information about the miRNA. In order to accurately describe the relationship between miRNAs, we extracted as complete as possible miRNA interaction information from multiple latest databases. We obtained the miRNA-gene interaction information from experimentally verified databases, including TarBase (version 8.0) [25], miRTarBase (version 7.0) [26], miRNAMAP (version 2.0) [27], miRecord (version 4) [28]. DIANA-TarBase v8 is a reference database for indexing experimentally supported microRNA targets, has more than a decade of support in the field of non-coding RNA [25]. We downloaded 927,119 miRNA-gene interactions from the database, after the removal of non-human gene and converted the gene ID into Entrez Gene identifiers, a total of 423,392 interactions between 18,345 genes and 1084 miRNAs are retained. Meanwhile, we performed ID transformation of the genes in the miRTarBase database, deleted the null miRNAs and target genes, and finally obtained 381,088 interactions between 2599 miRNAs and 15,064 genes. Similarly, we extracted 83,071 interactions between 1135 target genes and 471 miRNAs from miRNAMAP, and obtained 1269 interactions between 767 target genes and 203 miRNAs from the miRecord. Based on miRBase [29], all of the above miRNAs were transformed into the v22 version using the R package 'miRBaseConverter', and the null and duplicate miRNAs were deleted. After integration, a total of 588,134 interactions between 2814 miRNAs and 18,468 genes were obtained. In addition, Lee et al. [30] integrated 21 omics data from multiple organisms by modifying bayes and used logarithmic likelihood scores to measure the probability of interaction between two genes with true functional links. To build similarity networks of genes, we downloaded the human weighted gene network data from the HumanNet database, which contained the log likelihood score of 476,399 interactions among 16,243 genes.
Third, extract interactive information of miRNA and disease. The human microRNA Disease Database (HMDD) collects large amounts of human miRNA-disease interactions from genetics, epigenetics, circulating miRNA and miRNA target interactions, and provides detailed annotation of miRNA-disease interactions [21]. In June 28, 2018, HMDD (version 3.0) [31] was also released, which provides 200.2% of human miRNA-disease interactions and has more evidence to classify. We extracted the disease information with MeSH ID or OMIM ID from HMDD v3.0, removed duplicate miRNA-disease interactions, and obtained 14,457 interactions between 1045 miRNAs and 627 diseases. To ensure all the miRNA similarity and all the disease similarity can be calculated, we delete the diseases and miRNAs not in the above two datasets, and finally got 10,561 interactions between 574 miRNAs and 579 diseases. The details of the two benchmark datasets are shown in Additional file 1.
Construction of disease semantic similarity network
In fact, most methods use MeSH descriptors to construct a directed acyclic graph of the disease, which contains common information between different diseases is used to describe the disease similarity, which leads to a sparsely similar network [16, 17]. In order to construct a more reasonable disease semantic similarity, we make full use of the various omics data to calculate the similarity of the disease. Protein-encoding genes can affect the pathogenesis of the disease to some extent [32], so disease-gene interactions also imply some features of the disease. Similarly, the gene ontology biological process of the disease is also the reflection of some characteristics of the disease. In this paper, we combine the disease-gene interactions (D-G) and disease-GO biological process interactions datasets (D-GO), and the MeSH descriptors of the disease, using the MultiSourcDSim model proposed by Lei et al. [33] to calculate the disease semantic similarity.
Based on the MeSH descriptor, a directed acyclic graph (DAG) can be used to describe the semantic relationship between diseases. Any disease d in the DAG can be expressed as DAG(d) = (d, S(d), F(d), A(d)), where S(d) and F(d), representing the set of direct child nodes and direct parent nodes of disease d, respectively, and A(d) represents the set constituted by all ancestor nodes of disease d.
First, combining the disease interaction dataset (D-G or D-GO) and DAG, the frequency FTc(d) of any disease d in the DAG can be calculated:
$$ {FT}_c(d)={f}_c(d)+\sum \limits_{d\in S(d)}{FT}_c(d) $$
where fc(d) represents the frequency of d in the interaction dataset c, it can be seen that the occurrence frequency of d in DAG is equal to the sum of the occurrence frequency of all its direct child nodes and the frequency of itself in the interaction dataset. Then, normalize the frequency of disease occurrence as follow:
$$ {PT}_c(d)=\frac{PT_c(d)}{PT_c(root)} $$
Where, PTc(root) represents the occurrence frequency of the root node in DAG. According to Eqs. 1 and 2, it can be known that 0 ≤ PTc(t) ≤ 1. Based on the more information shared, the higher the similarity. The disease similarity can be obtained:
$$ {S}_c\left({d}_1,{d}_2\right)={\displaystyle \begin{array}{c}\mathit{\operatorname{MAX}}\\ {}d\in COM\left({d}_1,{d}_2\right)\end{array}}\left(\frac{2\times \mathit{\log}\left({PT}_c(d)\right)}{\mathit{\log}\left({PT}_c\left({d}_1\right)\right)+\mathit{\log}\left({PT}_c\left({d}_2\right)\right)}\right) $$
Where, COM(d1, d2) is the set of the minimum common ancestor of the disease d1 and d2, and it is easy to see that 0 ≤ Sc(d1, d2) ≤ 1. According to D-G and D-GO, we can obtain two disease similarity networks {Sc, c = 1, 2}. After that, the clusDCA [34] was used to integrate the disease similar networks, and the integrated semantic similar network SSd was finally obtained.
Construction of miRNA functional similarity network
In order to overcome the dependence of miRNA functional similarity on known miRNA-disease interaction network, the algorithm can predict miRNAs not associated with any disease. We calculate the miRNA functional similarity by means of Luo [18] and Xiao's [19] methods. Specifically, we used miRNA target gene interaction network and gene similarity network to calculate miRNA similarity.
First, we normalized and symmetrized the log-likelihood score data between genes downloaded from HumanNet:
$$ {S}^g\left({g}_i,{g}_j\right)=\Big\{{\displaystyle \begin{array}{c}\frac{LLS\left(i,j\right)}{{\operatorname{MAX}}_{LLS}},\kern2em LLS\left(i,j\right)\ne 0\\ {}\frac{LLS\left(j,i\right)}{{\operatorname{MAX}}_{LLS}},\kern2em LLS\left(i,j\right)=0 andLLS\left(j,i\right)\ne 0\\ {}0,\kern5.00em Otherwise\end{array}}\operatorname{} $$
Where Sg(gi, gj) represents the similarity between gene gi and gene gj, LLS(i, j) represents the log-likelihood score between gene gi and gene gj, MAXLLS represents the maximum log-likelihood score. At this point, we can define the similarity between any gene gi and any gene set G:
$$ {S}^g\left({g}_i,\mathrm{G}\right)=\underset{g_j\in \mathrm{G}}{\max}\left\{{S}^g\left({g}_i,{g}_j\right)\right\} $$
Where, Sg(gi, G) represents the similarity between gi and G. Then, we can get the functional similarity between miRNA mi and miRNA mj:
$$ {SF}_m\left({m}_i,{m}_j\right)=\frac{\sum_{g\in {G}_i}{S}^g\left(g,{G}_i\right)+{\sum}_{g\in {G}_j}{S}^g\left(g,{G}_j\right)}{\left|{G}_i\right|+\left|{G}_j\right|} $$
Where, SFm(mi, mj) represents the functional similarity between mi and mj, Gi represent the gene set associated with mi, and |Gi| represent the number of genes in the set Gi.
Kernel-based neighborhood similarity
Reasonable use of known miRNA-disease interaction information can greatly improve the performance of the model [17, 18]. In this paper, based on the known miRNA-disease interactions, we used the kernel-based neighborhood similarity (KSNS) [35] to calculate miRNA (disease) kernel neighborhood similarity. KSNS not only comprehensively utilizes the distance similarity and structural similarity of samples, but also fully excavates the nonlinear structural similarity information between samples, achieving a good prediction effect in lncRNA-protein interaction prediction. In addition, to overcome the sparse problem of the interaction matrix, a weighted k-neighborhood profile (WKNNP) algorithm was proposed by Xiao et al. [19] to preprocess the interaction matrix, achieved good results. Based on the above two points, we first use WKNNP to preprocess the known interaction matrix, and then uses KSNS to calculate the kernel neighborhood similarity of miRNA (disease).
Let the matrix X of the NM rows and ND columns represent the miRNA-disease interaction matrix, then X can be expressed as: \( \mathrm{X}=\left[{M}_1^T,{M}_2^T,\cdots, {M}_{NM}^T\right]=\left[{D}_1,{D}_2,\cdots, {D}_{ND}\right] \), where Mi is the ith row vector of X, could be regarded as the interaction profile feature of miRNA mi; Dj is the jth column vector of X, could be regarded as the interaction profile feature of disease dj.
According to the WKNNP algorithm, we make use of K-nearest neighbor feature of mi to enrich the interaction profile Mi, then the modified interaction profile \( {\hat{M}}_i \) of mi is as follows:
$$ {\hat{M}}_i=\frac{1}{Q_{m_i}}{\sum}_{k=1}^K{w}^k{M}_k $$
Where \( {Q}_{m_i}={\sum}_{m_{j\in N\left({m}_i\right)}}{SF}_m\left({m}_i,{m}_j\right) \) denotes regularization weight, and N(mi) represents the K nearest set of mi (For sake of simplicity, let K = 15 in the paper). wk is the weight coefficient of the kth neighbor, and decay factor α ∈ [0, 1] (For sake of simplicity, let α = 0.8 in the paper), It is easy to see that the more closer miRNAs have higher weight coefficients. At this point, the modified interaction profile matrix can \( {X}_M=\left[{\hat{M}}_1^T,{\hat{M}}_2^T,\cdots, {\hat{M}}_{NM}^T\right] \) be obtained through Eq. 7. Similarly, we can get the disease modified interaction profile matrix \( {X}_d=\left[{\hat{D}}_1,{\hat{D}}_2,\cdots, {\hat{D}}_{ND}\right] \). Finally, the modified interaction profile matrix X is shown as follows:
$$ \hat{X}=\max \left\{X,\frac{1}{2}\left({X}_m+{X}_d\right)\right\} $$
Now, based on the \( \hat{\mathrm{X}} \), we make use of KSNS to calculate miRNA (disease) kernel neighborhood similarity. First, we construct the K-neighboring discriminant matrix of miRNA based on the miRNA functional similarity:
$$ {C}_{i,j}=\Big\{{\displaystyle \begin{array}{c}1,\kern2em j\in N\left({m}_i\right)\\ {}0,\kern2em j\notin N\left({m}_i\right) ori=j\end{array}}\operatorname{} $$
Where N(mi) represents the set of NK nearest miRNAs of mi, NK = ⌊PN × N⌋, PN denotes neighbors proportion parameter, N is the total number of samples, ⌊∙⌋ means round down. Then weight matrix W of miRNA is as follow:
$$ {\displaystyle \begin{array}{c}\mathit{\min}\frac{1}{2}{\left\Vert \Phi (X)W-\Phi (X)\right\Vert}_F^2+\frac{\mu_1}{2}{\left\Vert W\bigodot \left(1-C\right)\right\Vert}_F^2+\frac{\mu_2}{2}{\left\Vert W\right\Vert}_F^2\\ {}s.t.{W}^Te=e\ W\ge 0\ \mathit{\operatorname{diag}}(W)=0\end{array}} $$
Where, Φ(∙) denotes kernel function, ‖∙‖F representsFrobenius norm, ⨀ is an element-by-element multiplication, μ1 is non-neighborhood control parameters, μ2 is similarity regularization parameters, e = (1, 1, ……, 1)T. The first item of constraint requires the sum of reconstruction weights of each sample to be 1, the second requires that all elements in W are non-negative, and the third term indicates that the self-similarity of miRNA is 0. Using the Lagrange multiplier method and the Karush-Kuhn-Tucker (KKT) condition, the iterative formula of W is as follows:
$$ {W}_{ij}=\frac{{\left[k\left(X,\mathrm{X}\right)+{\mu}_1W\bigodot C\right]}_{ij}}{{\left[k\left(X,\mathrm{X}\right)W+{\mu}_1W+{\mu}_2W\right]}_{ij}}{W}_{ij} $$
Where k(X, X) represents the kernel matrix of X. In this paper, we select Gaussian kernel function, which is represented as:
$$ k\left({x}_i,{x}_j\right)=\left\langle \Phi \left({x}_i\right),\Phi \left({x}_j\right)\right\rangle =\exp \left(-{\left\Vert {x}_i-{x}_j\right\Vert}^2/\upgamma \right) $$
Where k(xi, xj) is the kernel of any two samples of xi, xj. \( \upgamma =\frac{\sum {\left\Vert {x}_i\right\Vert}^2}{NM} \) represents the regularized bandwidth parameter. After that, we conducted multiple normalization operations on the weight matrix W to obtain the miRNA kernel neighborhood similarity matrix SIm, and the normalization formula is as follows:
$$ {SI}_m={D}^{-\frac{1}{2}}{\mathrm{W}}^T{D}^{-\frac{1}{2}} $$
Where, the diagonal matrix D = diag (d1, d2, …, dNM), \( {d}_j=\sum \limits_{i=1}^{NM}{W}_{i,j} \). Similarly, we can get the disease kernel neighborhood similarity SId. Then the clusDCA [34] was used to integrate the miRNA functional similarity SFm (disease semantic similarity matrix SSd) and kernel neighborhood similarity SIm (kernel neighborhood similarity SId) to obtain the final miRNA similarity matrix Sm= (disease similarity matrix Sd).
Bidirectional propagation algorithm
Based on miRNA similarity, disease similarity and known miRNA-disease interaction information, we proposed a bidirectional propagation algorithm to predict the miRNA-disease interaction score.
Let (F)NM × ND be the miRNA-disease interaction score matrix, then F can be decomposed as \( F=\left[{FM}_1^T,{FM}_2^T,\cdots, {FM}_{NM}^T\right]=\left[{FD}_1,{FD}_2,\cdots, {FD}_{ND}\right] \), Where, \( {FM}_i^T \) represents the predicted interaction score of miRNA mi with all diseases, and FDj denotes the predicted interaction score of disease dj. Based on the hypothesis that higher similarity miRNAs are more likely to be interacted with the same disease, we can get:
$$ \sum \limits_{i,j}^M{s}_{i,j}^m{\left\Vert \frac{1}{\sqrt{d_i^m}},{FM}_i,-,\frac{1}{\sqrt{d_j^m}},{FM}_j\right\Vert}^2= tr\left({F}^T\left(I-{D_m}^{-\frac{1}{2}}\bullet {S}_m\bullet {D_m}^{-\frac{1}{2}}\right)F\right) $$
Where \( {s}_{i,j}^m={\left({S}_m\right)}_{i,j} \) denotes the similarity of mi and mj. \( {d}_i^m=\sum \limits_{j=1}^{NM}{s}_{i,j}^m \), and the diagonal matrix \( {D}_m=\mathit{\operatorname{diag}}\left({d}_1^{\mathrm{m}},{d}_2^{\mathrm{m}},\cdots, {d}_{NM}^{\mathrm{m}}\right) \). Similarly for diseases, we can get:
$$ \sum \limits_{u,v}^{ND}{s}_{u,v}^d{\left\Vert \frac{1}{\sqrt{d_u^d}}{FD}_s-\frac{1}{\sqrt{d_v^d}}{FD}_t\right\Vert}^2= tr\left({F}^T\left(I-{D_d}^{-\frac{1}{2}}\bullet {S}_D\bullet {D_d}^{-\frac{1}{2}}\right)F\right) $$
Where \( {s}_{u,v}^d={\left({S}_d\right)}_{u,v} \) denotes the similarity of du and dv. \( {d}_u^d=\sum \limits_{k=1}^{ND}{s}_{u,k}^d \), and the diagonal matrix \( {D}_d=\mathit{\operatorname{diag}}\left({d}_1^d,{d}_2^d,\cdots, {d}_{ND}^d\right) \). By this stage, the bidirectional propagation algorithm can be obtained as follows:
$$ \Big\{{\displaystyle \begin{array}{c}\begin{array}{c} argmin\\ {}F\end{array}\left\{{\left\Vert F-Y\right\Vert}_F^2+\frac{\lambda_{\mathrm{m}}}{2} tr\left({F}^T{L}_mF\right)+\frac{\lambda_{\mathrm{d}}}{2} tr\left({FL}_d{F}^T\right)\right\}\\ {}\kern0.50em {L}_m=I-{D_m}^{-\frac{1}{2}}\bullet {S}_m\bullet {D_m}^{-\frac{1}{2}}\\ {}{L}_d=I-{D_d}^{-\frac{1}{2}}\bullet {S}_D\bullet {D_d}^{-\frac{1}{2}}\end{array}}\operatorname{} $$
Where \( {\left\Vert F-Y\right\Vert}_F^2 \) represents the overall prediction error, which is required to be as small as possible, λm and λd are the Laplacian regularization parameters of miRNA and disease, respectively. The derivative of Eq. 16 for F is as follows:
$$ \frac{\partial Q(F)}{F}=2\left(\mathrm{F}-\mathrm{Y}\right)+{\lambda}_m{L}_mF+{\lambda}_d{FL}_d $$
In order to speed up the optimization of the gradient algorithm, we use AdaGrad algorithm [34] to adaptively choose the gradient step size. The details of the optimization algorithm to the proposed bidirectional propagation model are described in Algorithm 1.
Comparison with other methods
Experimental settings
To evaluate the performance of the KNMBP algorithm fairly, we performed the 5-fold cross-validation (CV) on Dataset I and Dataset II, and compared with the following methods: IMCMDA [17], BNPMDA [16] and RLSMDA [14], KRLSM [18], RWRMDA [13]. Specifically, for each method, we performed CV four times, each time using a different seed, and the mean value of the AUC values under different seeds was taken as the final AUC value of the method. The miRNA-disease interaction matrix Y ∈ RNM × ND had NM rows for miRNAs and ND columns for diseases. We carried out three types of CV as follows [36]:
CVa : CV on all miRNA-disease pairs. In order to ensure that the known interactions could be evenly distributed, we randomly divided the known and unknown interactions into five equal parts, one of which was selected as the test set in turn, and the association contained in it was deleted as the training set.
CVm : CV on miRNAs (row vectors in Y), all miRNAs were randomly divided into five equal parts, one of which was selected as the test set in turn, and its association was deleted as the training set.
CVd : CV on diseases (column vectors in Y), all diseases were randomly divided into five equal parts, one of which was selected as the test set in turn, and its association was deleted as the training set.
In each crossover experiment, Under CVa, 80% of Y elements are used as the training set, and the remaining 20% are test set; Under CVm, 80% of rows in Y are used as the training set, and the remaining 20% are test set; Under CVd, 80% of columns in Y are used as the training set, and the remaining 20% are test set. In Dataset I, since the disease semantic similarity matrix is sparse, and the miRNA functional similarity relies on known miRNA-disease interactions, most of the methods only perform CVa experiment. Therefore, we only perform CVa on Dataset I, and perform the above three CV on Dataset II.
In this paper, we use the grid method to find the optimal combination of parameters. For KNMBP, the parameters are as follows: neighbors proportion parameter PN was selected from {10%, 30%, 50%, 70%, 90%}; non-neighborhood control parameters μ1 and similarity regularization parameters μ2 were selected from { 20, 21, 22, 23, 24 }; For Laplace regularization parameters λm and λd, we set λm = λd and choose the two parameters from { 2−2, 2−1, 20, 2−1, 2−2 }. For RWRMDA, {0, 0.1, ⋯, 0.9} for restart probability r and {1, 2, 3, ⋯, 6} for walk times; For KRLSM, with the authors' recommendations, we set σ = 1, the weight parameters were selected from {0, 0.1, ⋯, 1};For RLSMDA, weight parameters w = 0.5 , the regularization parameters ηm = ηd and were selected from {0, 0.1, ⋯, 1}; For IMCMDA, the subspace dimension r was selected from {50, 100, ⋯, 500}.
Cross validation
For each CV, we calculated the prediction interaction scores of the test set by the above six methods, and normalized all the prediction interaction scores as follows:
$$ \hat{PS}\left(i,j\right)=\frac{PS\left(i,j\right)-\mathit{\min} PS}{maxPS- minPS} $$
Where PS(i, j) represents the predicted interaction score of miRNA mi and disease dj, minPS represents the minimum value of PS, and maxPS represents the maximum value of PS. Then, the [0,1] interval is equally divided into 1000, and each of the points is sequentially selected as a threshold, and calculate the True Positive Rate (TPR, sensitivity) and False Positive Rate (FPR, 1-specificity) under each specific threshold. After that, we calculate the mean value of the TPR and the FPR for each threshold under CV, draw the corresponding TPR and FPR curve. Figure 2 shows the optimal AUC and corresponding ROC curves for each model under CV. The optimal parameters of KNMBP and the corresponding AUC values are shown in Additional file 2.
Performance comparisons between KNMBP and other state-of-the-art methods (RWRMDA, RLSMDA, BNPMDA, KRLSM, IMCMDA) in terms of AUC based on 5-fold cross validation. a perform CVa on Dataset I; b perform CVa on Dataset II; c perform CVd on Dataset II; d perform CVm on Dataset II
In the above experiment, CVa tested the predictive performance of the model for new interactions, and CVm and CVd tested the predictive performance for new miRNAs and new diseases, respectively. It can be seen that our method (KNMBP) achieves the best prediction results in Fig. 2. Specifically, based on Dataset I, the AUC value of KNMBP for CVa can reach 0.93126, which is 9.67, 5.69, 11.57, 3.41, and 10.31% higher than RWRMDA, RLSMDA, BNPMDA, KRLSM, and IMCMDA, respectively. Based on Dataset II, the AUC value of KNMBP for CVa can reach 0.93795, which is 7.97, 3.58, 13.68, 5.31 and 16.49% higher than the other five methods respectively. Since BNPMDA based on binary recommendation algorithm needs to utilize known miRNA-disease interactions to achieve resource allocation, it cannot predict new miRNA and new diseases [16]. RWRMDA, which restarts the random walk on MiRNA similarity network, is also not suitable for prediction of new diseases [13]. Therefore, RLSMDA, KRLSM and IMCMDA were selected as comparison algorithms under CVd, and the AUC value of KNMBP could reach 0.86363, which was 7.66, 25.577 and 12.93% higher than the other three methods (RLSMDA, KRLSM, IMCMDA). For CVm, the AUC of KNMBP can reach 0.86937, which is 0.62, 0.67, 11.09, 5.31 and 12.68% higher than the other four methods (RWRMDA, RLSMDA, KRLSM, IMCMDA), respectively.
Parametric sensitivity analysis
In machine learning, with the change of experimental scenarios, the optimal parameter combination may be very different, and the parameter selection may have a huge impact on the performance of the model, so the sensitivity analysis of parameters is often very important. In this section, we focus on the influence of four parameters, namely, neighbor proportion parameter PN, Laplace regularization parameter λ = λm = λd, non-neighborhood control parameter μ1 and similarity regularization parameter μ2, on the prediction performance of the model. Let Fcv = c(PN = i, λ = j, μ1 = s, μ2 = t) represent the AUC value of the KNMBP algorithm when cv = c, c ∈ {1, 2, 3, 4} is performed and the parameters are set to PN = i, λ = j, μ1 = s, μ2 = t. In order to facilitate the visualization of the results, for each type of CV we combined the above four parameters in pairs to analyze the influence of the paired parameters on the predicted results of the model.
First, we consider the influence of neighbor proportion parameter PN and Laplace regularization parameter λ on the predictive performance of the model. When PN = i, λ = j, and the other two parameters change arbitrarily, we calculate the maximum AUC value of KNMBP (\( {\mathrm{maxAUC}}_{i,j}^c \)), the average AUC value (\( {\mathrm{meanAUC}}_{i,j}^c \)) and the minimum AUC value (\( {\mathrm{minAUC}}_{i,j}^c \)), as shown below:
$$ {\displaystyle \begin{array}{c}{\mathrm{maxAUC}}_{i,j}^c=\max \left\{{F}_{cv=c}\left( PN=i,\lambda =j,{\mu}_1,{\mu}_2\right)\right|{\mu}_1\in \forall, {\mu}_2\in \forall \Big\}\\ {}{\mathrm{meanAUC}}_{i,j}^c=\mathrm{mean}\left\{{F}_{cv=c}\left( PN=i,\lambda =j,{\mu}_1,{\mu}_2\right)\right|{\mu}_1\in \forall, {\mu}_2\in \forall \Big\}\\ {}{\mathrm{minAUC}}_{i,j}^c=\min \left\{{F}_{cv=c}\left( PN=i,\lambda =j,{\mu}_1,{\mu}_2\right)\right|{\mu}_1\in \forall, {\mu}_2\in \forall \Big\}\end{array}} $$
Where μ1 ∈ ∀ and μ2 ∈ ∀ represent arbitrary values of the parameters μ1 and μ2 within their range (μ1 , μ2∈ { 20, 21, 22, 23, 24 }). When cv = 1, it means we perform CVa on Dataset I; cv = 2 means we perform CVa on Dataset II; cv = 3 means we perform CVd on Dataset II; cv = 4 means we perform CVm on Dataset II. In particular, under a certain CV, for every set of values of PN and λ, we first calculate the AUC values when μ1 and μ2 are arbitrarily changed within their range, then calculate the maximum, average and minimum values of this group of AUC values according to (20), and the results are shown in Fig. 3.
The influence of neighbor proportion parameter PN and Laplace regularization parameter λ on the predictive performance of the model. a CVa on dataset1; b CVa on dataset2; c CVd on dataset2; d CVm on dataset2
It can be seen from Fig. 3 that with the change of neighbor proportional parameter PN and Laplace regularization parameter λ, the AUC value of the model has a trend fluctuation, but the overall fluctuation range is small. Specifically, as shown in (a) of Fig. 3, the minAUC is 0.92322 when PN = 0.1 and λ = 4, and the maxAUC is 0.93126 when PN = 0.1 and λ = 1/4, with an overall relative change of 0.87%. Similarly, in (b), (c), and (d) of Fig. 3, the relative ranges of overall AUC changes with respect to the model caused by PN or λ are 0.56, 0.61, and 0.29%, respectively. The result shows that KNMBP has strong stability related to neighbor proportional parameter PN and Laplace regularization parameter λ.
Now we consider the non-neighborhood control parameter μ1 and similarity regularization parameter μ2. Similarly, When μ1 = s, μ2 = t, the other two parameters change arbitrarily, we calculate the maximum AUC value of KNMBP (\( {\mathrm{maxAUC}}_{s,t}^c \)), the average AUC value (\( {\mathrm{meanAUC}}_{s,t}^c \)) and the minimum AUC value (\( {\mathrm{minAUC}}_{s,t}^c \)), as shown below:
$$ {\displaystyle \begin{array}{c}{\mathrm{maxAUC}}_{s,t}^c=\max \left\{{F}_{cv=c}\left( PN,\lambda, {\mu}_1=s,{\mu}_2=t\right)\right| PN\in \forall, \lambda \in \forall \Big\}\\ {}{\mathrm{meanAUC}}_{s,t}^c=\mathrm{mean}\left\{{F}_{cv=c}\left( PN,\lambda, {\mu}_1=s,{\mu}_2=t\right)\right| PN\in \forall, \lambda \in \forall \Big\}\\ {}{\mathrm{minAUC}}_{s,t}^c=\min \left\{{F}_{cv=c}\left( PN,\lambda, {\mu}_1=s,{\mu}_2=t\right)\right| PN\in \forall, \lambda \in \forall \Big\}\end{array}} $$
Where PN ∈ ∀ and λ ∈ ∀ represent arbitrary values of the parameters PN and λ within their range (PN ∈ {10%, 30%, 50%, 70%, 90%} , λ∈ { 2−2, 2−1, 20, 2−1, 2−2 }). Then the effect of these two parameters on the prediction performance of the model is shown in Additional file 3. As can be seen from (a), (b), (c) and (d) in Additional file 3, when the parameters μ1 and μ2 change in a certain range, the maxAUC value, meanAUC value and minAUC value of the model are almost flat, indicating that these two parameters have little influence on the prediction performance of the model. According to Fig. 3 and Additional file 3, when the parameters of the model change within a certain range, KNMBP can always achieve better prediction performance, indicating that our algorithm has strong parameters robustness.
To further demonstrate the predictive performance of KNMBP algorithm for novel miRNA-disease interactions, experiments were performed on the older version of HMDD (v2.0, June 20, 2013), and the prediction results were validated with the newer version of HMDD (v3.0, June 28, 2018). We downloaded the miRNA-disease interactions from HMDD v2.0 and extracted the disease data with MeSH ID or OMIM ID according to the details of the disease provided by HMDD v3.0. After processing, we obtained 2157 interactions of 166 diseases and 299 miRNAs, and constructed semantic similarity scores of these diseases and functional similarity scores of these miRNAs according to (2.2.1) and (2.2.2). The KNMBP was used for prediction, and the candidate miRNAs of 166 diseases ranked according to their predicted scores were provided in Additional file 4. Figure 4 shows the confirmed ratio of candidate miRNAs for 11 diseases under different thresholds. For example, the top 10 predicted scores of candidate miRNAs for Bladder Neoplasms are all confirmed in HMDD v3.0. Twenty-seven of the top 30 predicted scores were confirmed in HMDD v3.0. As can be seen from Fig. 4, most of the top candidate miRNAs for these diseases can be confirmed in the latest version.
For different thresholds, the proportion of candidate mirnas that have been confirmed to be associated with the disease
In addition, in order to further test the validity of the predicted results, we divided the candidate miRNAs for each disease into two groups according to the predicted scores, called Top group and Bottom group respectively [19], with 20 candidate miRNAs in each group, and then used fisher's exact test to evaluate the statistical differences between the two groups. Figure 5 shows the proportion of confirmed candidate miRNAs in the Top group and Bottom group of four diseases and the significance level p by fisher's exact test. For example, 18 of the candidate miRNAs in Colon Neoplasms's Top group were confirmed (proportion of 0.9), and 2 of the Bottom group were confirmed (proportion of 0.1), with a p value of 5.2959 × 10−7. This suggests that the candidate miRNAs of Colon Neoplasms in the Top group are more likely to be confirmed than that in the Bottom group. Meanwhile, the p values were 1.4509 × 10−11 , 3.5997 × 10−4 , 2.4436 × 10−4 for Bladder Neoplasms, Glioma, Ovarian Neoplasms, respectively. The test results verified that the number of confirmed miRNAs in the Top group were significantly higher than that in the Bottom group, which further demonstrated the high efficiency of KNMBP algorithm in predicting new miRNA-disease interactions.
The percentage of confirmed candidate miRNAs in the Top group and Bottom group of the four diseases and the corresponding significance level of Fisher's exact test
As shown in Additional file 5, the top 10 candidate miRNAs for these four diseases and their confirmation in HMDD v3.0 [31], miRCancer [37] and dbDEMC 2.0 [38]. Specifically, for Gladden Neoplasms and Colon Neoplasms, their top 10 candidate miRNAs were all confirmed in HMDD v3.0; For Glioma, 8 were confirmed in HMDD v3.0 and one was confirmed in miRCancer; For Ovarian Neoplasm, 9 were confirmed in HMDD v3.0 and one was confirmed in dbDEMC 2.0. Finally, all the interactions in Dataset II extracted from the current latest database were used as the training set, and the candidate miRNAs of 579 diseases predicted by KNMBP algorithm were sorted according to scores, as shown in Additional file 6.
The KNMBP proposed in this paper not only has high performance in predicting unknown miRNA-disease interactions, but also can efficiently predict the new miRNA (disease), which not associated with any disease (miRNA). In order to fairly evaluate the performance of the model, we compare the performance of it and several state-of-the-art models to the common Dataset (Dataset I) and the Dataset (Dataset II) extracted by ourselves for 5-fold cross validation (CV). In Dataset I, the AUC value of KNPMBP could reach 0.93126 when we perform CV on interactions. In Dataset II, the AUC value of KNMBP could reach 0.93795, 0.86937 and 0.86363 when we perform CV on interactions, on miRNAs and on diseases, respectively. The predicted results of our method were all better than other methods. In order to evaluate the predictive performance of KNMBP for new miRNA-disease interactions, we extracted the data from the old version database and tested the predicted results with the new version. Statistical results of 11 diseases confirmed that most of the top candidate miRNAs could be confirmed in the new version dataset. We divided the candidate miRNAs of the four common tumors into the Top group and the Bottom group according to the predicted scores. The fisher's exact test results further confirmed that the number of confirmed miRNAs in the Top group were significantly higher than that in the Bottom group. In addition, the results of parameter sensitivity analysis show that KNMBP algorithm has the advantage of parameter robustness when the parameters are taken in a wide range.
The reason why the KNMBP algorithm has higher performance is mainly due to the following aspects. First, we constructed more reasonable disease semantic similarity network and miRNA functional similarity network. Specifically, instead of using Directed Acyclic Graph (DAG) alone to describe the disease similarity, we comprehensively used the gene-disease interactions, disease-GO biological process interactions and the MeSH descriptor to calculate the disease similarity, and more fully mined the similarity information between diseases to obtain more dense and accurate disease similarity network. In addition, previous methods for constructing miRNA functional similarity network mostly rely on the known miRNA-disease interaction, therefore they cannot predict new miRNAs. In this paper, the miRNA functional similarity is calculated by integrating miRNA-target gene interaction network and gene weight network, avoiding dependence on known miRNA-disease interactions and ensuring the prediction of new miRNAs. Secondly, in order to overcome the sparseness of the miRNA-disease interaction network and fully exploit the miRNA (disease) feature information, we utilized the weighted K neighborhood profiles to make a weighted correction on the sparse interaction network, taking advantage of neighborhood information to reduce the interaction network sparsity. Meanwhile, we used KSNS to calculate the miRNA (disease) kernel neighborhood similarity. Different from Gaussian function similarity and linear neighborhood similarity [20], KSNS not only makes full use of non-neighborhood information, but also fully excavates the nonlinear structural similarity between samples, consider both the distance similarity and the structural similarity of samples. Thirdly, we used diffusion component analysis to integrate the heterogeneous omics data of disease similarity and miRNA similarity respectively. The fused miRNA (disease) similarity network can not only effectively utilize the feature information among the known interactions, but also reflect the new similarity information obtained from other omics data. Fourthly, the bidirectional propagation algorithm simultaneously spreads the known miRNA-disease interactions from the similarity network of both disease and miRNA respectively, making full use of the global network information of miRNA and disease.
Although KNMBP efficiently predicted the unknown miRNA-disease interactions, there are some limitations. First, we tried to build the disease semantic similarity networks and miRNA functional similarity networks by making use of other latest data resources, however, there may be noises and errors in these similarity networks. Secondly, our evaluation is based on the known miRNA-disease interaction which may be not complete. Although the known miRNA-disease interactions have been greatly improved over the previous years, the proportion of these interaction in the total miRNA disease pair is still very low, which leads to some errors in the evaluation of our prediction results.
Studies on the potential miRNA-disease interactions can help people understand the pathogenesis of diseases and design reasonable treatment schemes. In this paper, we proposed a new computational model (KNMBP) to predict the potential miRNA-disease interactions. Compared with other state-of-the-art methods, KNMBP not only has higher prediction accuracy on unknown miRNA-disease interaction, but also can effectively find potential interaction of new disease (or miRNA) without any known related miRNA (or disease). Furthermore, the proposed model is not sensitive to parameter. These indicate that our algorithm can integrate multiple omics data of miRNAs and diseases, and have a wide application prospect in miRNA and disease research.
The code and datasets are available at https://github.com/Mayingjun20179/KNMBP. The software is coded in Matlab in Windows system.
clusDCA:
Improved Diffusion Component Analysis
DAG:
Directed Acyclic Graph
KNMBP:
Kernel neighborhood similarity and multi-network bidirectional propagation
KSNS:
Kernel-based neighborhood similarity model
PLS:
Partial least squares
SVM:
WKNNP:
Weighted k-neighborhood profile
Filipowicz W, Bhattacharyya SN, Sonenberg N. Mechanisms of post-transcriptional regulation by microRNAs: are the answers in sight? Nat Rev Genet. 2008;9(2):102–14.
Bartel DP. MicroRNAs: target recognition and regulatory functions. Cell. 2009;136(2):215–33.
Shabalina S, Koonin E. Origins and evolution of eukaryotic RNA interference. Trends Ecol Evol. 2008;23(10):578–87.
Guay C, Roggli E, Nesca V, Jacovetti C, Regazzi R. Diabetes mellitus, a microRNA-related disease? Transl Res. 2011;157(4):253–64.
Nunez-Iglesias J, Liu CC, Morgan TE, Finch CE, Zhou XJ. Joint genome-wide profiling of miRNA and mRNA expression in Alzheimer's disease cortex reveals altered miRNA regulation. PLoS One. 2010;5(2):e8898.
Catto JWF, Alcaraz A, Bjartell AS, De Vere WR, Evans CP, Fussel S, Hamdy FC, Kallioniemi O, Mengual L, Schlomm T, et al. MicroRNA in prostate, bladder, and kidney Cancer: a systematic review. Eur Urol. 2011;59(5):671–81.
Poy MN, Hausser J, Trajkovski M, Braun M, Collins S, Rorsman P, Zavolan M. Stoffel M: miR-375 maintains normal pancreatic alpha- and beta-cell mass. Proc Natl Acad Sci U S A. 2009;106(14):5813–8.
Asangani IA, Rasheed SAK, Nikolova DA, Leupold JH, Colburn NH, Post S, Allgayer H. MicroRNA-21 (miR-21) post-transcriptionally downregulates tumor suppressor Pdcd4 and stimulates invasion, intravasation and metastasis in colorectal cancer. Oncogene. 2008;27(15):2128–36.
Minn YK, Lee DH, Hyung WJ, Kim JE, Choi J, Yang SH, Song H, Lim BJ, Kim SH. MicroRNA-200 family members and ZEB2 are associated with brain metastasis in gastric adenocarcinoma. Int J Oncol. 2014;45(6):2403–10.
Li Y, Zhang Z, Mao Y, Jin M, Jing F, Ye Z, Chen K. A genetic variant in MiR-146a modifies digestive system Cancer risk: a meta-analysis. Asian Pac J Cancer Prev. 2014;15(1):145–50.
Wang D, Wang J, Lu M, Song F, Cui Q. Inferring the human microRNA functional similarity and functional network based on microRNA-associated diseases. Bioinformatics. 2010;26(13):1644–50.
Xu J, Li CX, Lv JY, Li YS, Xiao Y, Shao TT, Huo X, Li X, Zou Y, Han QL, et al. Prioritizing candidate disease miRNAs by topological features in the miRNA target-Dysregulated network: case study of prostate Cancer. Mol Cancer Ther. 2011;10(10):1857–66.
Chen X, Liu M, Yan G. RWRMDA: predicting novel human microRNA–disease associations. Mol BioSyst. 2012;8(10):2792–8.
Chen X, Yan G. Semi-supervised learning for potential human microRNA-disease associations inference. Sci Rep-UK. 2015;4(5501):1–10.
Chen X, Yang J, Guan N, Li J. GRMDA: graph regression for MiRNA-disease association prediction. Front Physiol. 2018;9(92):1–10.
Chen X, Xie D, Wang L, Zhao Q, You Z, Liu H. BNPMDA: bipartite network projection for MiRNA–disease association prediction. Bioinformatics. 2018;34(18):3178–86.
Chen X. WLQJ: predicting miRNA-disease association based on inductive matrix completion. Bioinformatics. 2018;34(24):4256–65.
Luo J, Xiao Q, Liang C, Ding P. Predicting MicroRNA-disease associations using Kronecker regularized least squares based on heterogeneous Omics data. IEEE Access. 2017;5:2503–13.
Xiao Q, Luo J, Liang C, Cai J, Ding P. A graph regularized non-negative matrix factorization method for identifying microRNA-disease associations. Bioinformatics. 2018;34(2):239–48.
Zhang W, Qu Q, Zhang Y, Wang W. The linear neighborhood propagation method for predicting long non-coding RNA–protein interactions. Neurocomputing. 2018;273:526–34.
Li Y, Qiu C, Tu J, Geng B, Yang J, Jiang T, Cui Q. HMDD v2.0: a database for experimentally supported human microRNA and disease associations. Nucleic Acids Res. 2013;42(D1):D1070–4.
Xuan P, Han K, Guo M, Guo Y, Li J, Ding J, Liu Y, Dai Q, Li J, Teng Z, et al. Prediction of microRNAs associated with human diseases based on weighted kMost similar neighbors. PLoS One. 2013;8(8):e70204.
Lu M, Zhang Q, Deng M, Miao J, Guo Y, Gao W, Cui Q. An analysis of human MicroRNA and disease associations. PLoS One. 2008;3(10):e3420.
Davis AP, Grondin CJ, Johnson RJ, Sciaky D, McMorran R, Wiegers J, Wiegers TC, Mattingly CJ. The comparative Toxicogenomics database: update 2019. Nucleic Acids Res. 2019;47(D1):D948–54.
Karagkouni D, Paraskevopoulou MD, Chatzopoulos S, Vlachos IS, Tastsoglou S, Kanellos I, Papadimitriou D, Kavakiotis I, Maniou S, Skoufos G, et al. DIANA-TarBase v8: a decade-long collection of experimentally supported miRNA–gene interactions. Nucleic Acids Res. 2018;46(D1):D239–45.
Chou C, Shrestha S, Yang C, Chang N, Lin Y, Liao K, Huang W, Sun T, Tu S, Lee W, et al. miRTarBase update 2018: a resource for experimentally validated microRNA-target interactions. Nucleic Acids Res. 2018;46(D1):D296–302.
Hsu SD, Chu CH, Tsou AP, Chen SJ, Chen HC, PWC H, Wong YH, Chen YH, Chen GH, Huang HD. miRNAMap 2.0: genomic maps of microRNAs in metazoan genomes. Nucleic Acids Res. 2007;36(Database):D165–9.
Xiao F, Zuo Z, Cai G, Kang S, Gao X, Li T. miRecords: an integrated resource for microRNA-target interactions. Nucleic Acids Res. 2009;37(Database):D105–10.
Kozomara A, Birgaoanu M, Griffiths-Jones S. miRBase: from microRNA sequences to function. Nucleic Acids Res. 2019;47(D1):D155–62.
Lee I, Blom UM, Wang PI, Shim JE, Marcotte EM. Prioritizing candidate disease genes by network-based boosting of genome-wide association data. Genome Res. 2011;21(7):1109–21.
Huang Z, Shi J, Gao Y, Cui C, Zhang S, Li J, Zhou Y, Cui Q. HMDD v3.0: a database for experimentally supported human microRNA–disease associations. Nucleic Acids Res. 2019;47(D1):D1013–7.
Hu Y, Zhao T, Zhang N, Zang T, Zhang J, Cheng L. Identifying diseases-related metabolites using random walk. BMC Bioinformatics. 2018;19(S5):37–46.
Deng L, Ye D, Zhao J, Zhang J. Exploring Disease Similarity by Integrating Multiple Data Sources. In: In 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). Madrid: IEEE; 2018. p. 853-58.
Wang S, Cho H, Zhai C, Berger B, Peng J. Exploiting ontology graph for predicting sparsely annotated gene function. Bioinformatics. 2015;31(12):i357–64.
Ma Y, Yu L, He T, Hu X, Jiang X. Prediction of long non-coding RNA-protein interaction through kernel soft-neighborhood similarity. In: In 2018 IEEE international conference on Bioinformatics and biomedicine (BIBM). Madrid: IEEE; 2018. p. 193–6.
Liu Y, Wu M, Miao C, Zhao P, Li X. Neighborhood regularized logistic matrix factorization for drug-target interaction prediction. PLoS Comput Biol. 2016;12(2):e1004760.
Xie B, Ding Q, Han H, Wu D. miRCancer: a microRNA-cancer association database constructed by text mining on literature. Bioinformatics. 2013;29(5):638–44.
Yang Z, Wu L, Wang A, Tang W, Zhao Y, Zhao H, Teschendorff AE. dbDEMC 2.0: updated database of differentially expressed miRNAs in human cancers. Nucleic Acids Res. 2017;45(D1):D812–8.
The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve the quality of this paper.
About this supplement
This article has been published as part of BMC Medical Genomics Volume 12 Supplement 10, 2019: Selected articles from the IEEE BIBM International Conference on Bioinformatics & Biomedicine (BIBM) 2018: medical genomics. The full contents of the supplement are available online at https://bmcmedgenomics.biomedcentral.com/articles/supplements/volume-12-supplement-10.
The research was supported by the National Key Research and Development Program of China (2017YFC0909502), the National Natural Science Foundation of China (61532008, 61872157). Specifically, the publication costs are funded by the National Key Research and Development Program of China (2017YFC0909502).
School of Mathematics & Statistics, Central China Normal University, Wuhan, 430079, Hubei, China
Yingjun Ma
School of Computer, Central China Normal University, Wuhan, 430079, Hubei, China
Tingting He, Chenhao Zhang & Xingpeng Jiang
Hubei Provincial Key Laboratory of Artificial Intelligence and Smart Learning, Central China Normal University, Wuhan, 430079, Hubei, China
Tingting He & Xingpeng Jiang
School of Life Sciences, Central China Normal University, Wuhan, 430079, Hubei, China
Leixin Ge
Tingting He
Chenhao Zhang
Xingpeng Jiang
YM and XJ designed the MiRNA-disease interaction prediction based on kernel neighborhood similarity and multi-network bidirectional propagation. YM and XJ designed experiments and wrote the manuscript. LG provided biological background guidance. CZ and TH participated in the discussion of the model and gives some suggestions. TH supervised and helped conceive the study. All authors read and approved the final manuscript.
Correspondence to Xingpeng Jiang.
Details of the two benchmark data sets in the paper.
The optimal parameters and the optimal AUC values of different experimental settings were performed on two benchmark data sets.
The influence of non-neighborhood control parameter μ1 and similarity regularization parameter μ2 on the predictive performance of the model.
The prediction scores of 199 new diseases and candidate mirnas sorted by score were obtained using the data set extracted from the old version HMDB.
The top 10 candidate miRNAs of the four diseases predicted by KNMBP based on the old version.
The candidate miRNAs of 579 diseases were sequenced according to the predicted score using the data set extracted from the new version of HMDB.
Ma, Y., He, T., Ge, L. et al. MiRNA-disease interaction prediction based on kernel neighborhood similarity and multi-network bidirectional propagation. BMC Med Genomics 12 (Suppl 10), 185 (2019). https://doi.org/10.1186/s12920-019-0622-4
MicroRNA-disease interaction
Heterogeneous omics data
Kernel neighborhood similarity
Bidirectional propagation
Diffusion component analysis | CommonCrawl |
Why are survival times assumed to be exponentially distributed?
I am learning survival analysis from this post on UCLA IDRE and got tripped up at section 1.2.1. The tutorial says:
... if the survival times were known to be exponentially distributed, then the probability of observing a survival time ...
Why are survival times assumed to be exponentially distributed? It seems very unnatural to me.
Why not normally distributed? Say suppose we are investigating some creature's life span under certain condition (say number of days), should it be more centered around some number with some variance (say 100 days with variance 3 days)?
If we want time to be strictly positive, why not make normal distribution with higher mean and very small variance (will have almost no chance to get negative number.)?
distributions survival assumptions exponential
amoeba says Reinstate Monica
Haitao DuHaitao Du
$\begingroup$ Heuristically, I cannot think of the normal distribution as an intuitive way to model failure time. It's never cropped up in any of my applied work. They are always skewed very far right. I think normal distributions heuristically come about as a matter of averages, whereas survival times heuristically come about as a matter of extrema such as the effect of a constant hazard being applied to a sequence of parallel or series components. $\endgroup$ – AdamO Mar 17 '17 at 15:43
$\begingroup$ I agree with @AdamO about the extreme distributions inherent to survival and time to failure. As others have noted, exponential assumptions have the advantage of being tractable. The biggest problem with them is the implicit assumption of a constant rate of decay. Other functional forms are possible and come as standard options depending on the software, e.g., generalized gamma. Goodness of fit tests can be employed to test differing functional forms and assumptions. The best text on survival modeling is Paul Allison's Survival Analysis Using SAS, 2nd ed. Forget SAS-it's an excellent review $\endgroup$ – Mike Hunter Mar 17 '17 at 18:23
$\begingroup$ I would note that the very first word in your quote is "if" $\endgroup$ – Fomite Mar 17 '17 at 19:34
Exponential distributions are often used to model survival times because they are the simplest distributions that can be used to characterize survival / reliability data. This is because they are memoryless, and thus the hazard function is constant w/r/t time, which makes analysis very simple. This kind of assumption may be valid, for example, for some kinds of electronic components like high-quality integrated circuits. I'm sure you can think of more examples where the effect of time on hazard can safely be assumed to be negligible.
However, you are correct to observe that this would not be an appropriate assumption to make in many cases. Normal distributions can be alright in some situations, though obviously negative survival times are meaningless. For this reason, lognormal distributions are often considered. Other common choices include Weibull, Smallest Extreme Value, Largest Extreme Value, Logistic, etc. A sensible choice for model would be informed by subject-area experience and probability plotting. You can also, of course, consider non-parametric modeling.
A good reference for classical parametric modeling in survival analysis is: William Q. Meeker and Luis A. Escobar (1998). Statistical Methods for Reliability Data, Wiley
klumbardklumbard
$\begingroup$ could you elaborate more on " hazard function is constant w/r/t time"? $\endgroup$ – Haitao Du Mar 18 '17 at 6:14
$\begingroup$ @hxd1011: Presumably by "hazard function" the author is referring to the function $r_X$ given by $r_X(t) = f_X(t) / \bar F_X(t)$, where $f_X$ is the pdf of $X$ and $\bar F_X$ is the tail of $X$ ($\bar F_X(t) = 1 - F_X(t) = \int_t^\infty f_X(x) \, dx$). This is also called the failure rate. The observation is that for $\operatorname{Exp}(\lambda)$, the failure rate is $r(t) =(\lambda e^{-\lambda t}) / (e^{-\lambda t}) = \lambda$, which is constant. Furthermore, it is not hard to show that only the exponential distribution has this property. $\endgroup$ – wchargin Mar 19 '17 at 16:42
To add a bit of mathematical intuition behind how exponents pop up in survival distributions:
The probability density of a survival variable is $f(t) = h(t)S(t)$, where $h(t)$ is the current hazard (risk for a person to "die" this day) and $S(t)$ is the probability that a person survived until $t$. $S(t)$ can be expanded as the probability that a person survived day 1, and survived day 2, ... up to day $t$. Then: $$ P(survived\ day\ t)=1-h(t)$$ $$ P(survived\ days\ 1, 2, ..., t) = (1-h(t))^t$$ With constant and small hazard $\lambda$, we can use: $$ e^{-\lambda} \approx 1-\lambda$$ to approximate $S(t)$ as simply $$ (1-\lambda)^t \approx e^{-\lambda t} $$ , and the probability density is then $$ f(t) = h(t)S(t) = \lambda e^{-\lambda t}$$
Disclaimer: this is in no way an attempt at a proper derivation of the pdf - I just figured this is a neat coincidence, and welcome any comments on why this is correct/incorrect.
EDIT: changed the approximation per advice by @SamT, see comments for discussion.
juodjuod
$\begingroup$ +1 this helped me to understand more on properties of exponential distribution. $\endgroup$ – Haitao Du Mar 17 '17 at 16:51
$\begingroup$ Could you explain your penultimate line? It says $S(t) = ...$, so the left hand side is function of $t$; moreover, so is the right. However, the two middle terms are functions of $\lambda$ (as is the right hand side), but not functions of $t$. Moreover, the approximation $(1+x/n)^n ~ e^{x}$ only holds for $x = o(\sqrt{n})$. It's certainly not true that $\lim_{t \to \infty} (1-\lambda t/t)^t = e^{-\lambda t}$ -- it's not even approximately true for large $t$. I guess this is just a notational mistake you've made though...? $\endgroup$ – Sam T Mar 17 '17 at 17:50
$\begingroup$ @SamT - thanks for the comment, edited. Coming from an applied background, I very much welcome any corrections, esp. on notation. Passing to the limit wrt $t$ was certainly not needed there, but I still believe the approximation holds for small $\lambda$, as are typically encountered in survival models. Or would you say there's something else that coincidentally makes this approximation hold? $\endgroup$ – juod Mar 17 '17 at 20:08
$\begingroup$ Looks better now :) -- the issue is that while $\lambda$ may be small it's not true that $\lambda t$ is necessarily small; as such, you can't use the approximation $$(1+x/n)^n \approx e^x$$ (directly): it's not even "you can in applied maths but can't in pure"; it just doesn't hold at all. However, we can get around this: we do have that $\lambda$ is small, so we can get there directly, writing $$e^{-\lambda t} = \big(e^{-\lambda}\big)^t \approx \big(1-\lambda)^t.$$ Of course, $\lambda = \lambda t / t$, so we can then deduce that $$e^{-\lambda t} \approx \big(1 - \lambda t / t\big)^t.$$ $\endgroup$ – Sam T Mar 17 '17 at 20:14
$\begingroup$ Being applied, you may feel this is being slightly picky, but the point is that the reasoning wasn't valid; similar invalid steps may not happen to be true. Of course, as someone applied, you may be happy to make this step, find it holds in the majority of cases and not worry about the specifics! As someone who does pure maths, this is out of the question for me, but I understand that we need both pure and applied! (And particularly in stats it's good not to get bogged down in pure technicalities.) $\endgroup$ – Sam T Mar 17 '17 at 20:16
You'll almost certainly want to look at reliability engineering and predictions for thorough analyses of survival times. Within that, there are a few distributions which get used often:
The Weibull (or "bathtub") distribution is the most complex. It accounts for three types of failure modes, which dominate at different ages: infant mortality (where defective parts break early on), induced failures (where parts break randomly throughout the life of the system), and wear out (where parts break down from use). As used, it has a PDF which looks like "\__/". For some electronics especially, you might hear about "burn in" times, which means those parts have already been operated through the "\" part of the curve, and early failures have been screened out (ideally). Unfortunately, Weibull analysis breaks down fast if your parts aren't homogeneous (including use environment!) or if you are using them at different time scales (e.g. if some parts go directly into use, and other parts go into storage first, the "random failure" rate is going to be significantly different, due to blending two measurements of time (operating hours vs. use hours).
Normal distributions are almost always wrong. Every normal distribution has negative values, no reliability distribution does. They can sometimes be a useful approximation, but the times when that's true, you're almost always looking at a log-normal anyway, so you may as well just use the right distribution. Log-normal distributions are correctly used when you have some sort of wear-out and negligible random failures, and in no other circumstances! Like the Normal distribution, they're flexible enough that you can force them to fit most data; you need to resist that urge and check that the circumstances make sense.
Finally, the exponential distribution is the real workhorse. You often don't know how old parts are (for example, when parts aren't serialized and have different times when they entered into service), so any memory-based distribution is out. Additionally, many parts have a wearout time that is so arbitrarily long that it's either completely dominated by induced failures or outside the useful time-frame of the analysis. So while it may not be as perfect a model as other distributions, it just doesn't care about things which trip them up. If you have an MTTF (population time/failure count), you have an exponential distribution. On top of that, you don't need any physical understanding of your system. You can do exponential estimates just based on observed part MTTFs (assuming a large enough sample), and they come out pretty dang close. It's also resilient to causes: if every other month, someone gets bored and plays croquet with some part until it breaks, exponential accounts for that (it rolls into the MTTF). Exponential is also simple enough that you can do back-of-the-envelope calculations for availability of redundant systems and such, which significantly increases its usefulness.
fectin - free Monicafectin - free Monica
$\begingroup$ This is a good answer, but note that the Weibull distribution is not "the most complex" parametric distribution for survival models. I'm not sure if there could be such a thing, but certainly relative to the Weibull there is the generalized Gamma distribution, & the generalized F distribution, both of which can take the Weibull as a special case by setting parameters to 0. $\endgroup$ – gung - Reinstate Monica♦ Mar 17 '17 at 19:49
$\begingroup$ It's the most complex one commonly used in reliability engineering (first paragraph :) I don't disagree with your point, but I also have never seen either actually used (write-ups of how they could be used, yes. Actual implementation, no) $\endgroup$ – fectin - free Monica Mar 18 '17 at 3:55
To answer your explicit question, you cannot use the normal distribution for survival because the normal distribution goes to negative infinity, and survival is strictly non-negative. Moreover, I don't think it's true that "survival times are assumed to be exponentially distributed" by anyone in reality.
When survival times are modeled parametrically (i.e., when any named distribution is invoked), the Weibull distribution is the typical starting place. Note that the Weibull has two parameters, shape and scale, and that when shape = 1, the Weibull simplifies to the exponential distribution. A way of thinking about this is that the exponential distribution the simplest possible parametric distribution for survival times, which is why it is often discussed first when survival analysis is being taught. (By analogy, consider that we often begin teaching hypothesis testing by going over the one-sample $z$-test, where we pretend to know the population SD a-priori, and then work up to the $t$-test.)
The exponential distribution assumes that the hazard is always exactly the same, no matter how long a unit has survived (consider the figure in @CaffeineConnoisseur's answer). In contrast, when the shape is $>1$ in the Weibull distribution, it implies that hazards increase the longer you survive (like the 'human curve'); and when it is $<1$, it implies hazards decrease (the 'tree').
Most commonly, survival distributions are complex and not well fit by any named distribution. People typically don't even bother trying to figure out what distribution it might be. That's what makes the Cox proportional hazards model so popular: it is semi-parametric in that the baseline hazard can be left completely unspecified but the rest of the model can be parametric in terms of its relationship to the unspecified baseline.
gung - Reinstate Monica♦gung - Reinstate Monica
$\begingroup$ "Moreover, I don't think it's true that "survival times are assumed to be exponentially distributed" by anyone in reality." I've actually found it to be quite common in epidemiology, usually implicitly. $\endgroup$ – Fomite Mar 17 '17 at 23:16
$\begingroup$ @gung, could you kindly explain - it is semi-parametric in that the baseline hazard can be left completely unspecified but the rest of the model can be parametric in terms of its relationship to the unspecified baseline $\endgroup$ – Gaurav Singhal Jun 28 '18 at 10:16
Some ecology might help answer the "Why" behind this question.
The reason why exponential distribution is used for modeling survival is due to the life strategies involved in organisms living in nature. There's essentially two extremes with regard to survival strategy with some room for the middle ground.
Here's an image that illustrates what I mean (courtesy of Khan Academy):
This graph plots surviving individuals on the Y axis, and "percentage of maximum life expectancy" (a.k.a. approximation of the individual's age) on the X axis.
Type I is humans, which model organisms which have an extreme level of care of their offspring ensuring very low infant mortality. Often these species have very few offspring because each one takes a large amount of the parents time and effort. The majority of what kills Type I organisms is the type of complications that arise in old age. The strategy here is high investment for high payoff in long, productive lives, if at the cost of sheer numbers.
Conversely, Type III is modeled by trees (but could also be plankton, corals, spawning fish, many types of insects, etc) where the parent invests relatively little in each offspring, but produces a ton of them in the hopes that a few will survive. The strategy here is "spray and pray" hoping that while most offspring will be destroyed relatively quickly by predators taking advantage of easy pickings, the few that survive long enough to grow will become increasingly difficult to kill, eventually becoming (practically) impossible to be eaten. All the while these individuals produce huge numbers of offspring hoping that a few will likewise survive to their own age.
Type II is a middling strategy with moderate parental investment for moderate survivability at all ages.
I had an ecology professor who put it this way:
"Type III (trees) is the 'Curve of Hope', because the longer an individual survives, the more likely it becomes that it will continue to survive. Meanwhile Type I (humans) is the 'Curve of Despair', because the longer you live, the more likely it becomes that you will die."
CaffeineConnoisseurCaffeineConnoisseur
$\begingroup$ This is interesting, but note that for humans, before modern medicine (& still in some places in the world today), infant mortality is very high. Baseline human survival is often modeled with "bathtub hazard". $\endgroup$ – gung - Reinstate Monica♦ Mar 17 '17 at 19:16
$\begingroup$ @gung Absolutely, this is a broad generalization and there are variations within humans of different regions and time periods. The main difference is clearer when you're comparing extremes, i.e. Western human families (~2.5 children per pair, most of which don't die in infancy) vs corals or spawning fish (millions of eggs released per mating cycle, most of which die due to being eaten, starvation, hazardous water chemistry, or simply failing to drift into a habitable destination) $\endgroup$ – CaffeineConnoisseur Mar 17 '17 at 19:18
$\begingroup$ While I'm all for explanations from ecology, I'll note assumptions like this are also made for things like hard drives and aircraft engines. $\endgroup$ – Fomite Mar 17 '17 at 19:35
This doesn't directly answer the question, but I think it's very important to note, and does not fit nicely into a single comment.
While the exponential distribution has a very nice theoretical derivation, and thus assuming the data produced follows the mechanisms assumed in the exponential distribution, it should theoretically give optimal estimates, in practice I've yet to run into a dataset where the exponential distribution produces even close to acceptable results (of course, this is dependent on the data types I've analyzed, almost all biological data). For example, I just looked at fitting a model with a variety of distributions using the first data set I could find in my R-package. For model checking of the baseline distribution, we typically compare against the semi-parametric model. Take a look at the results.
Of the Weibull, log-logistic and log-normal distribution, there's not an absolute clear victor in terms of appropriate fit. But there's a clear loser: the exponential distribution! It's been my experience that this magnitude of mis-fitting is not exceptional, but rather the norm for the exponential distribution.
Why? Because the exponential distribution is a single parameter family. Thus, if I specify the mean of this distribution, I've specified all other moments of the distribution. These other families are all two parameter families. Thus, there's a lot more flexibility in those families to adapt to the data itself.
Now keep in mind that the Weibull distribution has the exponential distribution as a special case (i.e. when the shape parameter = 1). So even if the data truly is exponential, we only add a little more noise to our estimates by using a Weibull distribution over an exponential distribution. As such, I would just about never recommend using the exponential distribution to model real data (and I'm curious to hear if any readers have an example of when it's actually a good idea).
Cliff ABCliff AB
$\begingroup$ I am not convinced of this answer: 1) "using the first data set I could find in my R-package"... Really? ... on stats.stackexchange? One random sample and we draw general conclusions? 1b) For models where the failure time tends to be distributed around a given value (like people's life), clearly the distributions like Gamma, Weibull, etc are more suited; when events are equally probable an exponential distribution is more suited. I bet your "first data set" above is of the first kind. 2) All other models have 2 parameters, one should use e.g. the Bayes factor to compare the models. $\endgroup$ – Luca Citi Mar 19 '17 at 22:57
$\begingroup$ @LucaCiti: "the first data set in my R-package" means the first dataset in the R-package that I published (icenReg). And I did note that my experience with the exponential distribution always having a poor fit was dependent on the type of data I've analyzed; almost exclusively biological data. Finally, as I stated in the end, I'm very curious to hear real applied examples where there's a convincing reason to use the exponential distribution, so if you have one, please share. $\endgroup$ – Cliff AB Mar 20 '17 at 1:47
$\begingroup$ A scenario when you might want to use the exponential distribution would be when (a) you had a lot of historic data that showed that the data really was well approximated with an exponential distribution and (b) you needed to make inference with small samples (i.e. n < 10). But I don't know of any real applications like this. Maybe in some sort of manufacturing quality control problem? $\endgroup$ – Cliff AB Mar 20 '17 at 1:53
$\begingroup$ Hi Cliff, thanks for taking the time to reply to my comment. I think roughly speaking a distribution like the Weibull fits better situations corresponding to questions like "what is the life time of individual x in my sample" or "when is neuron x going to fire again" or "when is firefly x going to flash again". Conversely, an exponential distribution models questions like "when is the next death expected to happen in my population", "when is the next neuron going to fire" or "when is a firefly in the swarm going to flash" $\endgroup$ – Luca Citi Mar 20 '17 at 8:54
$\begingroup$ @LucaCiti; ha, just got that your earlier poke was a joke about making an inference with n = 1. Don't know how I missed it the first time. In my defense, if we have theory that says the estimator should be asymptotically normal yet it's 4+ standard deviations away from the other asymptotically normal estimates, then we can! But in all seriousness, it's not that one plot that convinced me, but seeing that same level of deviation consistently. I may get blocked if I spam 20+ plots of bad exponential fits though. $\endgroup$ – Cliff AB Mar 21 '17 at 14:04
Another reason why the exponential distribution crops up often to model interval between events is the following.
It is well known that, under some assumptions, the sum of a large number of independent random variables will be close to a Gaussian distribution. A similar theorem holds for renewal processes, i.e. stochastic models for events that occur randomly in time with I.I.D. inter-event intervals. In fact, the Palm–Khintchine theorem states that the superposition of a large number of (not necessarily Poissonian) renewal processes behaves asymptotically like a Poisson process. The inter-event intervals of a Poisson process are exponentially distributed.
Luca CitiLuca Citi
tl;dr- An expontential distribution is equivalent to assuming that individuals are as likely to die at any given moment as any other.
Assume that a living individual is as likely to die at any given moment as at any other.
So, the death rate $-\frac{\text{d}P}{\text{d}t}$ is proportional to the population, $P$.
$$-\frac{\text{d}P}{\text{d}t}{\space}{\propto}{\space}P$$
Solving on WolframAlpha shows:
$$P\left(t\right)={c_1}{e^{-t}}$$
So, the population follows an exponential distribution.
Math note
The above math is a reduction of a first-order ordinary differential equation (ODE). Normally, we would also solve for $c_0$ by noting the boundary condition that population starts at some given value, $P\left(t_0\right)$, at start-time $t_0$.
Then the equation becomes: $$P\left(t\right)={e^{-t}}P\left({t_0}\right).$$
The exponential distribution assumes that people in the population tend to die at the same rate over time. In reality, death rates will tend to vary for finite populations.
Coming up with better distributions involves stochastic differential equations. Then, we can't say that there's a constant death likelihood; rather, we have to come up with a distribution for each individual's odds of dying at any given moment, then combine those various possibility trees together for the entire population, then solve that differential equation over time.
I can't recall having seen this done in anything online before, so you probably won't run into it; but, that's the next modeling step if you want to improve upon the exponential distribution.
NatNat
(Note that in the part you quoted, the statement was conditional; the sentence itself didn't assume exponential survival, it explained a consequence of doing so. Nevertheless assumption of exponential survival are common, so it's worth dealing with the question of "why exponential" and "why not normal" -- since the first is pretty well covered already I'll focus more on the second thing)
Normally distributed survival times don't make sense because they have a non-zero probability of the survival time being negative.
If you then restrict your consideration to normal distributions that have almost no chance of being near zero, you can't model survival data that has a reasonable probability of a short survival time:
Maybe once in a while survival times which have almost no chance of short survival times would be reasonable, but you need distributions that make sense in practice -- usually you observe short and long survival times (and anything in between), with typically a skewed distribution of survival times). An unmodified normal distribution will rarely be useful in practice.
[A truncated normal might more often be a reasonable rough approximation than a normal, but other distributions will often do better.]
The constant-hazard of the exponential is sometimes a reasonable approximation for survival times.. For example, if "random events" like accident are a major contributor to death-rate, exponential survival will work fairly well. (Among animal populations for example, sometimes both predation and disease can act at least roughly like a chance process, leaving something like an exponential as a reasonable first approximation to survival times.)
One additional question related truncated normal: if normal is not appropriate why not normal squared (chi sq with df 1)?
Indeed that might be a little better ... but note that that would correspond to an infinite hazard at 0, so it would only occasionally be useful. While it can model cases with a very high proportion of very short times, it has the converse problem of only being able to model cases with typically much shorter than average survival (25% of survival times are below 10.15% of the mean survival time and half of the survival times are less than 45.5% of the mean; that is median survival is less than half the mean.)
Let's look at a scaled $χ^2_1$ (i.e. a gamma with shape parameter $\frac12$):
[Maybe if you sum two of those $χ^2_1$ variates... or maybe if you considered noncentral $χ^2$ you would get some suitable possibilities. Outside of the exponential, common choices of parametric distributions for survival times include Weibull, lognormal, gamma, log-logistic among many others ... note that the Weibull and the gamma include the exponential as a special case]
Glen_b -Reinstate MonicaGlen_b -Reinstate Monica
$\begingroup$ thanks, i have been waiting to your answer since yesterday :). One additional question related truncated normal: if normal is not appropriate why not normal squared (chi sq with df 1)? $\endgroup$ – Haitao Du Mar 19 '17 at 1:31
$\begingroup$ Indeed that might be a little better ... but note that that would correspond to an infinite hazard at 0 -- so it would only occasionally be useful. It has the converse problem of only modelling cases with typically much shorter than average survival (25% of survival times are below 10.15% of the mean survival time and half of the survival times are less than 45.5% of the mean) Maybe if you sum two of those $\chi^2_1$ variates you could get a less surprising hazard function. . .;P $\endgroup$ – Glen_b -Reinstate Monica Mar 19 '17 at 2:01
$\begingroup$ again thank you for education my the intuition behind things. I have seen too much recipe level tutorials and people doing things without knowing why. CV is a great place to learn. $\endgroup$ – Haitao Du Mar 19 '17 at 2:08
that still has a nonzero probability of being negative, so it's not strictly positive;
the mean and variance are something that you can measure from the population you're trying to model. If your population has mean 2 and variance 1, and you model it with a normal distribution, that normal distribution will have substantial mass below zero; if you model it with a normal distribution with mean 5 and variance 0.1, your model obviously has very differnt properties to the thing it's supposed to model.
The normal distribution has a particular shape, and that shape is symmetrical about the mean. The only way to adjust the shape are to move it right and left (increase or decrease the mean) or to make it more or less spread out (increase or decrease the variance). This means that the only way to get a normal distribution where most of the mass is between two and ten and only a tiny amount of the mass is below zero, you need to put your mean at, say, six (the middle of the range) and set the variance small enough that only a tiny fraction of samples are negative. But then you'll probably find that most of your samples are 5, 6 or 7, whereas you were supposed to have quite a lot of 2s, 3s, 4s, 8s, 9s and 10s.
David RicherbyDavid Richerby
Not the answer you're looking for? Browse other questions tagged distributions survival assumptions exponential or ask your own question.
Is logistic regression a "semi-parametric" model?
If I have a lot of right censored case, would it be easier to use logistic regression instead of survival analysis?
Appropriate Application of Survival Analysis
the Poisson result and Exponential interpretation for spare part requirement analysis
Why is coxph() so fast for survival analysis on big data?
Average duration of a double outage in a system with exponentially-distributed failure and repair times
Work order completion time is exponentially distributed
Survival time problem exponential with gamma prior | CommonCrawl |
Journal of Animal Science and Technology
Korean Society of Animal Sciences and Technology (한국축산학회)
Agriculture, Fishery and Food > Science of Animal Resources
Journal of Animal Science and Technology (JAST) is a peer-reviewed, open access, online journal publishing original research, review articles and notes in all fields of animal science. Topics covered by the journal include: genetics and breeding, physiology, nutrition of monogastric animals, nutrition of ruminants, animal products (milk, meat, eggs and their by-products) and their processing, grasslands and roughages, livestock environment, animal biotechnology, animal behavior and welfare.
http://www.janimscitechnol.com/manuscript KSCI KCI SCIE
Genetic characterisation of PPARG, CEBPA and RXRA, and their influence on meat quality traits in cattle
Goszczynski, Daniel Estanislao;Mazzucco, Juliana Papaleo;Ripoli, Maria Veronica;Villarreal, Edgardo Leopoldo;Rogberg-Munoz, Andres;Mezzadra, Carlos Alberto;Melucci, Lilia Magdalena;Giovambattista, Guillermo 14.1
https://doi.org/10.1186/s40781-016-0095-3 PDF KSCI
Background: Peroxisome proliferator-activated receptor gamma (PPARG), CCAAT/enhancer binding protein alpha (CEBPA) and retinoid X receptor alpha (RXRA) are nuclear transcription factors that play important roles in regulation of adipogenesis and fat deposition. The objectives of this study were to characterise the variability of these three candidate genes in a mixed sample panel composed of several cattle breeds with different meat quality, validate single nucleotide polymorphisms (SNPs) in a local crossbred population (Angus - Hereford - Limousin) and evaluate their effects on meat quality traits (backfat thickness, intramuscular fat content and fatty acid composition), supporting the association tests with bioinformatic predictive studies. Results: Globally, nine SNPs were detected in the PPARG and CEBPA genes within our mixed panel, including a novel SNP in the latter. Three of these nine, along with seven other SNPs selected from the Single Nucleotide Polymorphism database (SNPdb), including SNPs in the RXRA gene, were validated in the crossbred population (N = 260). After validation, five of these SNPs were evaluated for genotype effects on fatty acid content and composition. Significant effects were observed on backfat thickness and different fatty acid contents (P < 0.05). Some of these SNPs caused slight differences in mRNA structure stability and/or putative binding sites for proteins. Conclusions: PPARG and CEBPA showed low to moderate variability in our sample panel. Variations in these genes, along with RXRA, may explain part of the genetic variation in fat content and composition. Our results may contribute to knowledge about genetic variation in meat quality traits in cattle and should be evaluated in larger independent populations.
Confirmation of genotypic effects for the bovine APM1 gene on marbling in Hanwoo cattle
Kwon, Anam;Srikanth, Krishnamoorthy;Lee, Eunjin;Kim, Seonkwan;Chung, Hoyoung 15.1
Background: Our previous study had identified the SNP (g.81966377T > C) and indel (g.81966364D > I) located in the promoter of APM1 to have a significant effect on marbling in Hanwoo. APM1 encodes an adipocytokine called adiponectin, which plays a significant role in lipogenesis. The aim of this study was to verify and validate the effect of the SNP and indel on marbling and other carcass traits in a large, representative, countrywide population of Hanwoo cattle. The carcass traits measured were marbling (MAR), backfat thickness (BFT), loin eye area (LEA), and carcass weight (CAW). Results: Primers were designed to amplify 346 bp of the genomic segment that contained the targeted SNP (g.81966377) and the indel (g.81966364). After data curation, the genotypes of 8,378 individuals identified using direct sequencing analysis estimated frequencies for C (0.686) and T (0.314) respectively showing genotype frequencies for CC (0.470), CT (0.430) and TT (0.098). The genotypes were significantly associated with MAR, BFT and LEA. The indel had significant effect on marbling (P < .0001) with strong additive genetic effects. The allele frequencies was estimated at (DEL, 0.864) and insertion (INS, 0.136) presenting genotypes of D/D (75.63 %), D/I (21.44 %), and I/I (2.92 %). Significant departure from Hardy-Weinberg equilibrium was not detected for both the SNP and the indel. Conclusion: The SNP genotypes showed significant association with MAR, BFT and LEA with strong additive genetic effects, while the indel was significantly associated with MAR. The results confirmed that the variants can be used as a genetic marker for improving marbling in Hanwoo.
Quality and storage characteristics of yogurt containing Lacobacillus sakei ALI033 and cinnamon ethanol extract
Choi, Yu Jin;Jin, Hee Yeon;Yang, Hee Sun;Lee, Sang Cheon;Huh, Chang Ki 16.1
Background: This study was conducted to examine the quality and storage characteristics of yogurt containing antifungal-active lactic acid bacteria (ALH, Lacobacillus sakei ALI033) isolated from kimchi and cinnamon ethanol extract. The starter was used for culture inoculation (1.0 % commercial starter culture YF-L812 and ALH). Results: The antifungal activity of cinnamon extracts was observed in treatments with either cinnamon ethanol extracts or cinnamon methanol extracts. Changes in fermented milk made with ALH and cinnamon extract during fermentation at $40^{\circ}C$ were as follows. The pH was 4.6 after only 6 h of fermentation. Titratable acidity values were maintained at 0.8 % in all treatment groups. Viable cell counts were maintained at $4{\times}10^9CFU/mL$ in all groups except for 1.00 % cinnamon treatment. Sensory evaluations of fermented milk sample made with ALH and 0.05 % cinnamon ethanol extract were the highest. Changes in fermented milk made with ALH and cinnamon ethanol extract during storage at $4^{\circ}C$ for 28 days were as follows. In fermented milk containing ALH and cinnamon ethanol extracts, the changes in pH and titratable acidity were moderate and smaller compared with those of the control. Viable cell counts were maintained within a proper range of $10^8CFU/mL$. Conclusions: The results of this study suggest that the overgrowth of fermentation strains or post acidification during storage can be effectively delayed, thereby maintaining the storage quality of yogurt products in a stable way, using cinnamon ethanol extract, which exhibits excellent antifungal and antibacterial activity, in combination with lactic acid bacteria isolated from kimchi.
Whole-transcriptome analyses of the Sapsaree, a Korean natural monument, before and after exercise-induced stress
Kim, Ji-Eun;Choe, Junkyung;Lee, Jeong Hee;Kim, Woong Bom;Cho, Whan;Ha, Ji Hong;Kwon, Ki Jin;Han, Kook Il;Jo, Sung-Hwan 17.1
Background: The Sapsaree (Canis familiaris) is a Korean native dog that is very friendly, protective, and loyal to its owner, and is registered as a natural monument in Korea (number: 368). To investigate large-scale gene expression profiles and identify the genes related to exercise-induced stress in the Sapsaree, we performed whole-transcriptome RNA sequencing and analyzed gene expression patterns before and after exercise performance. Results: We identified 525 differentially expressed genes in ten dogs before and after exercise. Gene Ontology classification and KEGG pathway analysis revealed that the genes were mainly involved in metabolic processes, such as programmed cell death, protein metabolic process, phosphatidylinositol signaling system, and cation binding in cytoplasm. The ten Sapsarees could be divided into two groups based on the gene expression patterns before and after exercise. The two groups were significantly different in terms of their basic body type ($p{\leq}0.05$). Seven representative genes with significantly different expression patterns before and after exercise between the two groups were chosen and characterized. Conclusions: Body type had a significant effect on the patterns of differential gene expression induced by exercise. Whole-transcriptome sequencing is a useful method for investigating the biological characteristics of the Sapsaree and the large-scale genomic differences of canines in general. | CommonCrawl |
Synthesis and in vitro antitumor activity of (1E,4E)-1-aryl-5-(2-((quinazolin-4-yl)oxy)phenyl)-1,4-pentadien-3-one derivatives
Hui Luo1,3,4,
Shengjie Yang2,3,4,
Da Hong2,
Wei Xue3,4 &
Pu Xie1
Chemistry Central Journal volume 11, Article number: 23 (2017) Cite this article
Cancer is one of the leading causes of death and only second to heart diseases. Recently, preclinical studies have demonstrated that curcumin had a number of anticancer properties. Thus, we planned to synthesize a series of curcumin analogs to assess their antiproliferation efficacy.
A series of (1E,4E)-1-aryl-5-(2-((quinazolin-4-yl)oxy)phenyl)-1,4-pentadien-3-one derivatives (curcumin analogs) were synthesized and characterized by IR, NMR, and elemental analysis techniques. All of the prepared compounds were screened for antitumor activities against MGC-803, PC3, and Bcap-37 cancer cell lines. A significant inhibition for cancer cells were observed with compound 5f and also less toxic on NIH3T3 normal cells. The mechanism of cell death induced by compound 5f was further investigated by acridine orange/ethidium bromide staining, Hoechst 33,258 staining, TUNEL assay, and flow cytometry cytometry, which revealed that the compound can induce cell apoptosis in MGC-803 cells.
This study suggests that most of the derivatives could inhibit the growth of human cancer cell lines. In addition, compound 5f could induce apoptosis of cancer cells, and it should be subjected to further investigation as a potential anticancer drug candidate.
Cancer is one of the leading causes of death and only second to heart diseases [1, 2]. The efficacy of current chemotherapeutics is low and undesirable side effects are still unacceptably high [3–5]. Hence, the development of novel, and less toxic and anti-cancer agents remains an important and challenging goal of medicinal chemist worldwide, and much attention has recently been paid to the discovery and development of new, more selective anticancer agents [3, 6–8].
Natural products have become a leading category of compounds in improving the rational drug design for novel anti-cancer therapeutics [9, 10]. Curcumin is a natural phenolic compound originally isolated from turmeric, a rhizome used in India for centuries as a spice and medicinal agent [11]. A literature survey reveals that curcumin, and its derivatives (analogs) have various pharmacological activities and medicinal applications such as antioxidant [12, 13], anti-inflammatory [12, 14], anti-HIV [15, 16], anti-angiogenesis and so on [12]. Recently, preclinical studies have demonstrated that curcumin had a number of anticancer properties, such as growth inhibition and induction of apoptosis in a variety of cancer cell lines [17–19]. Its mechanisms of action include inhibition of transcriptional factor NF-jB, HSP90 and epigenetic modulation related to direct inhibition of the catalytic site of DNMT-1 [20]. Moreover, the latest research shows that curcumin can effectively suppress NF-kB activity and COX-2 expression, as well as cell proliferation/survival in the setting of NSCLC [21]. Consequently, analogues of curcumin with similar safety profiles but increased anticancer activity have been developed in recent years [22]. Chandru et al. synthesized four novel dienone cyclopropoxy curcumin analogs by nucleophilic substitution reaction with cyclopropyl bromide, and found that the tumor growth inhibitory effects of synthetic dienone cyclopropoxy curcumin analogs could be mediated by promoting apoptosis and inhibiting tumor angiogenesis [23]. New 1,5-diaryl-1,4-pentadien-3-one derivatives (curcumin analogs), which can effectively inhibit proliferation of cancer cells at very low concentrations, were synthesized [24, 25], and we also found that curcumin analogs exhibited promising ex vivo antiviral bioactivities against tobacco mosaic virus and cucumber mosaic virus [26].
In order to discover more potent and selective anticancer agents based on curcumin scafforld, we have synthesized a series of (1E,4E)-1-aryl-5-(2-((quinazolin-4-yl)oxy)phenyl)-1,4-pentadien-3-one derivatives (eleven novel compounds 5a, 5b, 5d, 5f–5h, and 5j–5n) (Fig. 1). In our present study, all the target compounds were evaluated for their activity against MGC-803, PC3, and Bcap-37 cancer cell lines. Furthermore, the possible mechanism of MGC-803 cell growth inhibition by compound 5f was also investigated in this paper.
Design of the target compounds
Target compounds 5a–5n were synthesized as shown in Scheme 1. The starting material 2-aminobenzoic acid was conveniently cyclized to intermediate 1 by heating it with formamide at 140–145 °C as described in the literature. Upon refluxing with freshly distilled phosphorus oxychloride and pentachlorophosphorane, intermediate 1 yielded the corresponding 4-chloro derivative 2. Treatment of salicylaldehyde with acetone in the presence of sodium hydride at room temperature got intermediate 3. The key intermediates 4 were synthesized by reacting intermediate 3 with substituted 4-chloroquiazoline 2 in the present of K2CO3 in CH3CN at 30–50 °C for 6 h. And then, the target compounds 5a–5n were synthesized by reacting the substituted aldehydes with 4 in the present of anhydrous alcohol in acetone at room temperature. The structures of the final products were confirmed by their IR, 1H NMR, 13C NMR, and elemental analysis techniques.
Synthetic pathway to target compounds 5a–5n
Evaluation of anti-tumor bioactivity of synthetic compounds
The in vitro antitumor activity of the newly synthesized compounds 5a–5n were evaluated against a panel of three human cancer cell lines, including human gastric cancer cell line MGC-803, human prostate cancer cell line PC3, and human breast cancer cell line Bcap-37, and one normal cell line NIH3T3 (mouse embryo fibroblast cell line) by MTT method. Adriamycin (ADM) was chosen as a reference drug due to its availability and widespread use. Each experiment was repeated at least three times. The results are presented in Table 1.
Table 1 Effect of title compounds against cell viability of different cell lines
As depicted in Table 1, the title compounds suppressed proliferation of the above three cancer cell lines in different extents (IC50 values of 0.85–15.64 μM), and exhibited broad spectrum antitumor activity. Among these studied compounds, the inhibitory ratios of 5d–5g, and 5m against MGC-803 cells at 10 μM were 87.5, 87.0, 90.7, 85.9, and 81.1%, respectively, and their IC50 values were 1.72, 1.89, 0.85, 2.02, and 2.05 μM, respectively, similar to that of ADM (0.74 μM). Compounds 5d, 5f, 5g, and 5m displayed higher inhibitory activities against PC3 cells at 10 μM than that of the rest compounds, with inhibitory ratios of 86.3, 93.0, 83.1, and 81.2%, respectively, which were similar to or higher than that of ADM (91.2%). The inhibitory ratios of 5f and 5g against Bcap-37 cells at 10 μM, were 76.5 and 74.9% (IC50 values of 4.98 and 5.61 μM), respectively, which were higher than that of the rest compounds. Also noteworthy is that the potency of the compounds was generally more pronounced against the MGC-803 cells than against PC3 and Bcap-37 cells. Moreover, the antiproliferation activities of the title compounds against NIH3T3 normal cell line were also evaluated. Most of the title compounds showed stronger antiproliferative activities against the cancer cell lines than NIH3T3 lines. Compound 5f, which showed excellent levels of inhibition against MGC-803, PC3, and Bcap-37 cancer cells, have no significant activity against NIH3T3 cells, with inhibitory ratio of 21.5% at 10 μM. That is to say that the compound was less toxic on normal fibroblasts than on the investigated cancer cell lines and more selective to cancer cells.
Subsequently structure–activity relationships (SAR) studies were performed to determine how the substituents affected the anticancer activity. To examine SAR, different substituent groups were introduced into R1 and R2 in the quiazoline ring. Based on the activity values indicated in Table 1, the relationships of the activities with different R1 and R2 (type, position, and number of substituents) were deduced. Two main conclusions were drawn. On the one hand, compared with the same substituents on quiazoline, the corresponding molecules containing a 6-methyl group always had higher inhibitory rates than the compound containing a 8-methyl group. For example, the IC50 values of 5f (R1: 6-methyl, R2: 2,6-dichlorophenyl) and 5m (R1: 8-methyl, R2: 2,6-dichlorophenyl) on MGC-803 cells were 0.85 and 2.05 μM, respectively. By contrast, the inhibition rates of 5c (R1: 6-methyl, R2: p-chlorophenyl) and 5j (R1: 8-methyl, R2: p-chlorophenyl) at 10 μM were 79.2 and 76.3% on MGC-803 cells, 76.5 and 71.9% on PC3 cells, and 54.2 and 45.4% on Bcap-37 cells, respectively. On the other hand, when R2 was o-flurophenyl-fixed, the compounds always showed weak activity. For example, the inhibition rates of 5a (R1: 6-methyl, R2: o-flurophenyl) at 10 μM were 71.9, 68.5, and 44.1% on the three cancer cells, respectively, which suggested the weaker activity than that of the rest compounds.
Apoptosis is one of the major pathways that lead to the process of cell death [27]. Most cancer cells retain their sensitivity to some apoptotic stimuli from chemotherapeutic agent [28]. In the present study, compound 5f was selected and its mechanism of growth inhibition of MGC-803 cells was evaluated. To determine whether antiproliferation and cell death are associated with apoptosis, MGC-803 cells were stained with acridine orange (AO)/ethidium bromide (EB) staining and Hoechst 33,258 staining after exposure to compound 5f and observed under fluorescence microscopy.
It is well known that AO can pass through cell membranes, but EB cannot. Under the fluorescence microscope, living cells appear green. Necrotic cells stain red but have a nuclear morphology resembling that of viable cells. Apoptotic cells appear green, and morphological changes such as cell blebbing and formation of apoptotic bodies will be observed [29].
Representative images of the cells treated with 10 μM of HCPT (used as positive control) and 1, 5, 10 μM of compound 5f for 12 h are shown in Fig. 2a. While treatment of cells with HCPT and compound 5f, the apoptotic cells with typical apoptotic features, such as staining brightly, condense chromatin and fragment nuclei were observed. These results suggested that the proliferative inhibition and the death of target cells upon treatment with compound 5f were consequent to the induction of apoptosis.
Apoptosis induction studies of compound 5f. a AO\EB staining. b Hoechst 33,258 staining
Membrane-permeable Hoechst 33,258 was a blue fluorescent dye and stained the cell nucleus. When cells were treated with Hoechst 33,258, live cells with uniformly light blue nuclei were observed under fluorescence microscope, while apoptotic cells exhibited bright blue because of karyopyknosis and chromatin condensation, and the nuclei of dead cells could not be stained [30]. MGC-803 cells treated with compound 5f at concentrations of 1, 5, and 10 μM for 12 h were stained with Hoechst 33,258, with HCPT as positive control at 10 μM for 12 h. The results are illustrated in Fig. 2b.
Figure 2b shows that MGC-803 cells treated with the negative control DMSO were normally blue. Compared with the negative control, a part of cells with smaller nuclei and condensed staining appeared in the positive control group. After treated with compound 5f, the cells exhibited strong blue fluorescence and revealed typical apoptotic morphology. These findings demonstrate that compound 5f induced apoptosis against MGC-803 cell lines, consistent with the results for AO/EB double staining.
To further verify AO/EB and Hoechst 33,258 staining results, TUNEL assay was also carried out. TUNEL (Terminal deoxynucleotidyl Transferase Biotin-dUTP Nick End Labeling) is a very popular assay for identifying apoptotic cells. The assay identifies apoptotic cells in situ by using terminal deoxynucleotidyl transferase (TdT) to transfer biotin-dUTP to these strand breaks of cleaved DNA. The biotin-labeled cleavage sites are then detected by reaction with HRP conjugated streptavidin and visualized by DAB showing brown color [24]. MGC-803 cells treated with compound 5f at 5 μM for 6, 12, and 18 h were stained with TUNEL, with HCPT as positive control at 5 μM for 18 h. As shown in Fig. 3, cells in control group (DMSO treatment) did not appear as brown precipitates. However, the cells treated with compound 5f and HCPT appeared as brown precipitate. We further concluded that compound 5f induced apoptosis against MGC-803.
Apoptosis was assayed with TUNEL after treatment of MGC-803 cells with 5 μM 5f, and observed under light microscopy
In addition, the apoptosis ratios induced by compound 5f in MGC-803 cells were determined by flow cytometry, using Annexin V/PI double staining. Flow cytometry was performed on the total cell population (including both adherent and detached cells) and apoptosis detection was carried out as mentioned above. This double staining procedure discriminated necrotic cells (Q1, Annexin−/PI+), late apoptotic cells (Q2, Annexin+/PI+), intact cells (Q3, Annexin−/PI−) and early apoptotic cells (Q4, Annexin+/PI−) [31, 32]. As shown in Fig. 4, compound 5f could induce apoptosis of MGC-803 cells, and the highest apoptosis ratio (26.4%) was obtained after 24 h of treatment at a concentration of 10 μM. For the positive control HCPT, the apoptosis ratio was only 22.3% after 24 h of treatment at a concentration of 10 μM. In addition, as shown in Fig. 5, the apoptosis of MGC-803 cells treated with compound 5f gradually increased in a time-dependent manner.
The apoptosis ratios of MGC-803 cells treated with compound 5f and HCPT
Annexin V/PI dual staining of MGC-803 cell lines. a Negative control; b treated with HCPT at 10 μM for 24 h; c–e treated with compound 5f at 10 μM for 6, 12, and 24 h, respectively
As a development of our previous studies, we have synthesized and evaluated in vitro a series of (1E,4E)-1-aryl-5-(2-((quinazolin-4-yl)oxy)phenyl)-1,4-pentadien-3-one derivatives as potential antitumor agents. Most of the derivatives exhibited equivalent inhibitory activities against MGC-803, PC3, and Bcap-37 cancer cells. Compound 5f appeared to be more effective than other compounds against the three cells, with IC50 values of 0.85, 1.37, and 4.98 μM, respectively. And compounds 5f was found to exhibit a good degree of selectivity towards cancer cells than normal cells. In addition, the apoptosis-inducing activity of compound 5f in MGC-803 cells was investigated by AO/EB staining, Hoechst 33,258 staining, TUNEL assay, and flow cytometry. The results revealed that the compound may inhibit cell growth by inducing apoptosis, with apoptosis ratio of 26.4% at 10 μM for 24 h, which was higher than that of HCPT (22.3% at 10 μM for 24 h). Further studies on the specific mechanisms of compound 5f in MGC-803 cells are currently underway.
Melting points were determined by using an XT-4 binocular microscope (Beijing Tech Instrument Co., China) without correction. IR spectra were recorded on a Bruker VECTOR 22 spectrometer. NMR spectra were recorded in a CDCl3 solvent using a JEOL-ECX 500 NMR spectrometer operating at 500 MHz for 1H, and at 125 MHz for 13C by using TMS as internal standard. Elemental analysis was performed on an Elementar Vario-III CHN analyzer. Silica gel (200–300 mesh) and TLC plates (Qingdao Marine Chemistry Co., Qingdao, China) were used for chromatography. All solvents (Yuda Chemistry Co., Guiyang, China) were analytical grade, and used without further purification unless otherwise noted.
Synthetic procedures
6-methyl-quinazolin-4(3H)-one, 8-methyl-quinazolin-4(3H)-one, 6-methyl-4-chloroquiazoline, and 8-methyl-4-chloroquiazoline were prepared according to a previously described method [33]. Intermediate (E)-4-(2-hydroxyphenyl)-3-butylene-2-one was prepared according to a previously reported [34].
General synthetic procedures for the preparation of compounds 5a–5n
Compounds 2 (10 mmol), 3 (10 mmol) and K2CO3 (70 mmol) in 20 mL of acetonitrile was stirred at 30–40 °C for 3.5 h. The reaction mixture was concentrated and allowed to cool. The solid product obtained was filtered, and recrystallized with ethanol to afford the desired solid compound 4a or 4b, respectively. To the mixture of compound 4a or 4b (0.5 mmol) and sodium hydroxide (1%) in 20 mL of 75 vol% ethanol/water solution was added substituted aldehydes (0.5 mmol). The reaction mixture was stirred at room temperature overnight. The reaction mixture was concentrated and suspended in water (20 mL), adjusted with 5% HCl to pH 7, and filtered. Recrystallization with ethanol afforded the desired solid compounds 5a–5n.
(1E,4E)-1-(2-fluorophenyl)-5-(2-((6-methylquinazolin-4-yl)oxy)phenyl)penta-1,4-dien-3-one (5a)
Yield: 52.6%; yellow powder; mp: 121–123 °C; IR (KBr, cm−1) ν: 3442, 1657, 1622, 1596, 1465, 1398, 1356, 1221, 983; 1H NMR (CDCl3, 500 MHz) δ: 8.70 (s, 1H, Qu-2-H), 8.23 (d, J = 12.00 Hz, 1H, F–Ar–CH=), 7.93 (d, J = 8.6 Hz, 1H, Ar–CH=), 7.78–7.85 (m, 3H, Qu-5,7,8-H), 7.47–7.50 (m, 3H, F–Ar-4,6-H, Ar-3-H), 7.30–7.39 (m, 5H, F–Ar-3,5-H, Ar-4,5-H, F–Ar–C=CH), 7.10 (d, J = 16.0 Hz, 1H, Ar–C=CH), 6.81 (d, J = 14.8 Hz, 1H, Ar-6-H), 2.61 (s, 3H, CH3); 13C NMR (CDCl3, 125 MHz) δ: 188.8, 166.4, 153.4, 153.4, 151.7, 150.4, 136.9, 136.7, 136.5, 131.7, 129.4, 128.2, 127.9, 127.7, 127.2, 127.1, 126.6, 126.5, 123.6, 123.5, 122.3, 116.4, 21.9; Anal. Calcd for C25H19FN2O2: C 76.08; H 4.67; N 6.83; Found: C 76.42; H 4.78; N 6.80.
(1E,4E)-1-(2-chlorophenyl)-5-(2-((6-methylquinazolin-4-yl)oxy)phenyl)penta-1,4-dien-3-one (5b)
Yield: 46.3%; yellow powder; mp: 152–154 °C; IR (KBr, cm−1) ν: 3445, 1653, 1618, 1584, 1481, 1400, 1359, 1223, 986; 1H NMR (CDCl3, 500 MHz) δ: 8.69 (s, 1H, Qu-2-H), 8.22 (d, J = 8.0 Hz, 1H, Cl–Ar–CH=), 7.76–7.95 (m, 4H, Ar–CH=, Qu-5,7,8-H), 7.38–7.53 (m, 3H, Cl–Ar-3,6-H, Ar-3-H), 7.23–7.31 (m, 5H, Cl–Ar-4,5-H, Ar-5-H, Ar–C=CH, Cl–Ar–C=CH), 7.21 (m, 1H, Ar-4-H), 6.82 (d, J = 14.8 Hz, 1H, Ar-6-H), 2.62 (s, 3H, CH3); 13C NMR (CDCl3, 125 MHz) δ: 188.6, 167.1, 154.3, 153.1, 151.0, 142.6, 142.1, 136.5, 134.5, 133.3, 132.6, 130.0, 129.6, 129.4, 127.4, 125.8, 125.6, 122,9, 122.7, 121.2, 116.3, 17.7; Anal. Calcd for C26H19ClN2O2: C 73.2; H 4.50; N 6.56; Found: C 73.27; H 4.56; N 6.42.
(1E,4E)-1-(4-chlorophenyl)-5-(2-((6-methylquinazolin-4-yl)oxy)phenyl)penta-1,4-dien-3-one (5c)
Yield: 55.8%; yellow powder; mp: 173–176 °C; IR (KBr, cm−1) ν: 3445, 1653, 1622, 1558, 1489, 1373, 1229, 986; 1H NMR (CDCl3, 500 MHz) δ: 8.70 (s, 1H, Qu-2-H), 8.23 (d, J = 12.0 Hz, 1H, Cl–Ar–CH=), 7.93 (d, J = 8.6 Hz, 1H, Ar–CH=), 7.78–7.85 (m, 3H, Qu-5,7,8-H), 7.47–7.50 (m, 3H, Cl–Ar-2,6-H, Ar-3-H), 7.30–7.39 (m, 5H, Cl–Ar-3,5-H, Ar-4,5-H, Cl–Ar–C=CH), 7.10 (d, J = 16.0 Hz, 1H, Ar–C=CH), 6.81 (d, J = 14.8 Hz, 1H, Ar-6-H), 2.61 (s, 3H, CH3); 13C NMR (CDCl3, 125 MHz) δ: 185.7, 167.4, 154.3, 153.1, 148.1, 147.4, 134.5, 134.1, 133.5, 132.3,131.3, 130.0, 129.8, 129.2, 128.9, 127.3, 127.1, 122.8, 122.6, 121.1, 116.3, 17.8; Anal. Calcd for C26H19ClN2O2: C 73.15; H 4.49; N 6.56; Found: C 72.43; H 4.12; N 6.79.
(1E,4E)-1-(2-chloro-5-nitrophenyl)-5-(2-((6-methylquinazolin-4-yl)oxy)phenyl)penta-1,4-dien-3-one (5d)
Yield: 58.2%; yellow powder; mp: 176–178 °C; IR (KBr, cm−1) ν: 3445, 1653, 1622, 1576, 1522, 1458, 1348, 1277, 1221, 983; 1H NMR (CDCl3, 500 MHz) δ: 8.68 (s, 1H, Qu-2-H), 8.40 (s, 1H, Cl–Ar-6-H), 8.21 (d, J = 15.0 Hz, 1H, Cl–Ar-4-H), 8.10–8.12 (d, J = 10.0 Hz, 1H, Qu-8-H), 7.73–7.92 (m, 5H, Cl–Ar–CH=, Qu-5,7-H, Cl–Ar-3-H, Ar–CH=), 7.54–7.57 (m, 2H, Cl–Ar–C=CH, Ar-3-H), 6.91–7.41 (m, 4H, Ar-4,5,6-H, Ar–C=CH), 2.61 (s, 3H, CH3); 13C NMR (CDCl3, 125 MHz) δ: 187.8, 179.6, 158.5, 153.3, 151.9, 146.8, 138.1, 136.7, 136.4, 134.6, 132.1, 131.3, 130.7, 130.2, 128.4, 127.8, 126.7, 125.1, 123.6, 122.9, 122.5, 122.2, 116.7, 21.9; Anal. Calcd for C26H18N3O4: C 66.18; H 3.84; N 8.90; Found: C 65.81; H 3.66; N 9.30.
(1E,4E)-1-(2,4-dichlorophenyl)-5-(2-((6-methylquinazolin-4-yl)oxy)phenyl)penta-1,4-dien-3-one (5e)
Yield: 60.5%; yellow powder; mp: 211–214 °C; IR (KBr, cm−1) ν: 3443, 1655, 1618, 1582, 1499, 1371, 1225, 986; 1H NMR (CDCl3, 500 MHz) δ: 8.68 (s, 1H, Qu-2-H), 8.21 (s, 1H, Qu-5-H), 7.60–7.93 (m, 4H, Qu-7,8-H, Cl–Ar–CH=, Ar–CH=), 7.38–7.43 (m, 4H, Cl–Ar-3-H, Ar-3-H, Cl–Ar-5,6-H), 7.26–7.31 (m, 3H, Ar-4,5-H, Cl–Ar–C=CH), 7.12 (d, J = 16.5 Hz, 1H, Ar–C=CH), 6.80 (d, J = 16.1 Hz, 1H, Ar-6-H), 2.61 (s, 3H, CH3); 13C NMR (CDCl3, 125 MHz) δ: 188.5, 167.1, 153.4, 153.1, 151.4, 142.7, 137.9, 136.6, 136.5, 134.5, 132.3, 131.6, 130.2, 130.1, 128.7, 128.3, 127.6, 127.4, 125.0, 122.7, 121.2, 116.2, 17.7; Anal. Calcd for C26H18Cl2N2O2: C 67.69; H 3.93; N 6.07; N 7.07; Found: C 67.56; H 3.45; N 5.65.
(1E,4E)-1-(2,6-dichlorophenyl)-5-(2-((6-methylquinazolin-4-yl)oxy)phenyl)penta-1,4-dien-3-one (5f)
Yield: 55.2%; yellow powder; mp: 187–189 °C; IR (KBr, cm−1) ν: 3443, 1655, 1618, 1582, 1499, 1333, 1225, 986; 1H NMR (CDCl3, 500 MHz) δ: 8.68 (s, 1H, Qu-2-H), 8.20 (s, 1H, Qu-5-H), 7.89 (d, J = 8.5 Hz, 1H, Qu-8-H), 7.80–7.85 (m, 2H, Ar–CH=, Cl–Ar–CH=), 7.73 (d, J = 8.8 Hz, 1H, Qu-7-H), 7.52–7.61 (m, 2H, Cl–Ar-3,5-H), 7.39 (m, 1H, Cl–Ar-4-H), 7.24–7.30 (m, 3H, Ar-3,5-H, Ar–C=CH), 7.15 (m, 1H, Ar-4-H), 7.06 (d, J = 16.0 Hz, 1H, Cl–Ar–C=CH), 7.00 (d, J = 16.5 Hz, 1H, Ar-6-H), 2.60 (s, 3H, CH3); 13C NMR (CDCl3, 125 MHz) δ: 188.9, 166.4, 153.4, 151.7, 150.4, 138.4, 137.5, 136.7, 136.5, 135.2, 132.9, 132.3, 131.8, 129.9, 128.9, 128.2, 128.0, 127.9, 127.5, 126.7, 123.6, 122.2, 116.0, 21.9; Anal. Calcd for C26H18Cl2N2O2: C 67.69; H 3.93; N 6.07; Found: C 68.06; H 4.14; N 6.11.
1E,4E)-1-(2,5-dimethoxyphenyl)-5-(2-((6-methylquinazolin-4-yl)oxy)phenyl)penta-1,4-dien-3-one (5g)
Yield: 49.6%; yellow powder; mp: 122–123 °C; IR (KBr, cm−1) ν: 3443, 1653, 1618, 1576, 1497, 1458, 1360, 1223, 1114, 1045; 1H NMR (CDCl3, 500 MHz) δ: 8.68 (s, 1H, Qu-2-H), 8.22 (s, 1H, Qu-5-H), 7.81–7.92 (m, 5H, Qu-7,8-H, Ar–CH=, CH3O–Ar–CH=, Ar-3-H), 7.75 (d, J = 8.6 Hz, 1H, CH3O–Ar–C=CH), 7.51 (m, 1H, Ar-5-H), 7.38 (m, 1H, Ar-4-H), 7.17 (d, J = 16.0 Hz, 1H, Ar–C=CH), 6.99 (d, J = 2.8 Hz, 1H, Ar-6-H), 6.89–6.94 (m, 2H, CH3O–Ar-3,6-H), 6.81 (d, J = 2.8 Hz, 1H, CH3O–Ar-4-H), 3.76 (s, 6H, 2-OCH3), 2.57 (s, 3H, CH3); 13C NMR (CDCl3, 125 MHz) δ: 189.3, 166.5, 153.5, 153.4, 153.2, 151.6, 150.4, 138.8, 138.4, 136.6, 136.2, 131.5, 128.4, 128.1, 127.8, 127.1, 126.7, 126.6, 123.5, 122.4, 117.6, 113.2, 112.5, 56.1, 55.8, 21.9; Anal. Calcd for C28H24N2O4: C 74.3; H 5.35; N 6.19; Found: C 74.3; H 5.48; N 5.95.
(1E,4E)-1-(2-fluorophenyl)-5-(2-((8-methylquinazolin-4-yl)oxy)phenyl)penta-1,4-dien-3-one (5h)
Yield: 50.4%; yellow powder; mp: 155–157 °C; IR (KBr, cm−1) ν: 3445, 1653, 1620, 1582, 1506, 1481, 1398, 1223, 984; 1H NMR (CDCl3, 500 MHz) δ: 8.79 (s, 1H, Qu-2-H), 8.31 (d, J = 8.0 Hz, 1H, F–Ar–CH=), 7.77–7.85 (m, 3H, Qu-5,7-H, Ar–CH=), 7.67 (d, J = 16.5 Hz, 1H, F–Ar-6-H), 7.59 (m, 1H, Qu-6-H), 7.53 (m, 1H, F–Ar-4-H), 7.29–7.43 (m, 4H, Ar-3,5-H, F–Ar-3,5-H), 7.05–7.14 (m, 3H, Ar-4-H, F–Ar–C=CH, Ar–C=CH), 6.95 (d, J = 16.5 Hz, 1H, Ar-6-H), 2.76 (s, 3H, CH3); 13C NMR (CDCl3, 125 MHz) δ: 188.8, 167.1, 153.2, 151.7, 151.1, 136.9, 136.6, 136.0, 134.5, 131.9, 131.9, 129.3, 128.4, 128.1, 127.8, 127.8, 127.6, 126.6, 124.5, 123.6, 121.1, 116.4, 17.8; Anal. Calcd for C25H19FN2O2: C 76.08; H 4.67; N 6.83; Found: C 75.81; H 4.53; N 7.04.
(1E,4E)-1-(2-chlorophenyl)-5-(2-((8-methylquinazolin-4-yl)oxy)phenyl)penta-1,4-dien-3-one (5i)
Yield: 41.8%; yellow powder; mp: 152–154 °C; IR (KBr, cm−1) ν: 3443, 1655, 1616, 1595, 1481, 1406, 1358, 1229, 979; 1H NMR (CDCl3, 500 MHz) δ: 8.79 (s, 1H, Qu-2-H), 8.30 (d, J = 8.5 Hz, 1H, Cl–Ar–CH=), 7.96 (d, J = 16.5 Hz, 1H, Ar–CH=), 7.76–7.85 (m, 3H, Qu-5,6,7-H), 7.50-7.59 (m, 3H, Ar-3-H, Cl–Ar-3,6-H), 7.38-7.40 (m, 2H, Cl–Ar-4,5-H), 7.29–7.39 (m, 2H, Cl–Ar–C=CH, Ar–C=CH), 7.14–7.25 (m, 2H, Ar-4,5-H), 6.81 (d, J = 16.0 Hz, 1H, Ar-6-H), 2.77 (s, 3H, CH3); 13C NMR (CDCl3, 125 MHz) δ: 188.7, 167.1, 153.2, 151.7, 151.1, 139.2, 137.2, 136.6, 135.4, 134.6, 131.7, 131.3, 130.3, 128.4, 128.3, 128.1, 127.7, 127.6, 127.1, 126.6, 123.6, 121.1, 116.1, 17.8; Anal. Calcd for C26H19ClN2O2: C 73.15; H 4.49; N 6.56; Found: C 73.04; H 4.74; N 6.76%.
(1E,4E)-1-(4-chlorophenyl)-5-(2-((8-methylquinazolin-4-yl)oxy)phenyl)penta-1,4-dien-3-one (5j)
Yield: 58.6%; yellow powder; mp: 161–163 °C; IR (KBr, cm−1) ν: 3445, 1647, 1616, 1576, 1481, 1406, 1358, 1227, 937; 1H NMR (CDCl3, 500 MHz) δ: 8.79 (s, 1H, Qu-2-H), 8.20–8.34 (m, 3H, Qu-5,6,7-H), 7.72–7.86 (m, 4H, Ar–CH=, Ar-3-H, Cl–Ar–C=CH, Cl–Ar=CH), 7.52–7.64 (m, 4H, Cl–Ar-2,3,5,6-H), 7.41-7.42 (m, 1H, Ar-5-H), 7.30–7.32 (m, 1H, Ar-4-H), 7.11–7.14 (d, J = 15.0 Hz, 1H, Ar-6-H), 6.93–6.96 (d, J = 15.0 Hz, 1H, Ar–C=CH), 2.77 (s, 3H, CH3); 13C NMR (CDCl3, 125 MHz) δ: 188.6, 167.1, 153.2, 153.1, 151.7, 151.1, 141.9, 136.9, 136.7, 134.6, 131.7, 129.5, 129.2, 128.4, 128.1, 127.6, 127.2, 126.7, 125.8, 123.6, 121.1, 17.8; Anal. Calcd for C26H19ClN2O2: C 73.15; H 4.49; N 6.56; Found: C 73.36; H 4.65; N 6.86.
(1E,4E)-1-(2-chloro-5-nitrophenyl)-5-(2-((8-methylquinazolin-4-yl)oxy)phenyl)penta-1,4-dien-3-one (5k)
Yield: 54.5%; yellow powder; mp: 198–200 °C; IR (KBr, cm−1) ν: 3420, 1676, 1626, 1560, 1522, 1479, 1402, 1348, 1221, 980; 1H NMR (CDCl3, 500 MHz) δ: 8.78 (s, 1H, Qu-2-H), 8.39 (s, 1H, Cl–Ar-6-H), 8.32 (d, J = 8.0 Hz, 1H, Cl–Ar-4-H), 8.13 (d, J = 8.3 Hz, 1H, Qu-5-H), 7.76-7.88 (m, 4H, Ar–CH=, Cl–Ar–CH=, Qu-6, 7-H), 7.45–7.59 (m, 3H, Ar-3-H, Cl–Ar-3-H, Cl–Ar–C=CH), 7.23–7.40 (m, 2H, Ar-4,5-H), 7.10 (d, J = 12.5 Hz, 1H, Ar–C=CH), 6.93 (d, J = 16.0 Hz, 1H, Ar-6-H), 2.75 (s, 3H, CH3);13C NMR (CDCl3, 125 MHz) δ: 187.8, 167.1, 153.1, 151.8, 151.0, 146.7, 141.6, 138.2, 136.7, 136.6, 134.6, 132.0, 131.3, 130.1, 128.6, 127.7, 126.9, 126.7, 125.1, 123.7, 122.5, 120.9, 116.1, 17.7; Anal. Calcd for C26H18ClN3O4: C 66.18; H 3.84; N 8.90; Found: C 66.30; H 3.84; N 8.86.
(1E,4E)-1-(2,4-dichlorophenyl)-5-(2-((8-methylquinazolin-4-yl)oxy)phenyl)penta-1,4-dien-3-one (5l)
Yield: 58.6%; yellow powder; mp: 175–178 °C; IR (KBr, cm−1) ν: 3445, 1653, 1618, 1576, 1481, 1408, 1358, 1229, 984; 1H NMR (CDCl3, 500 MHz) δ: 8.79 (s, 1H, Qu-2-H), 8.30 (d, J = 8.0 Hz, 1H, Cl–Ar–CH=), 7.76–7.94 (m, 3H, Ar–CH=, Qu-5,7-H), 7.53–7.57 (m, 2H, Qu-6-H, Cl–Ar-3-H), 7.38–7.47 (m, 3H, Ar-3-H, Cl–Ar-5,6-H), 7.29–7.31 (m, 2H, Cl–Ar-4-H), 7.38–7.41 (m, 2H, Cl–Ar–C=CH, Ar-5-H), 7.15–7.17 (m, 2H, Ar–C=CH, Ar-4H), 6.78 (d, J = 16.5 Hz, 1H, Ar-6-H), 2.77 (s, 3H, CH3); 13C NMR (CDCl3, 125 MHz) δ: 188.5, 167.1, 153.2, 151.8, 151.1, 139.1, 137.5, 136.7, 135.4, 134.6, 134.1, 133.3, 131.8, 131.7, 129.2, 128.4, 128.0, 127.6, 127.4, 126.7, 125.8, 123.6, 121.0, 116.1, 17.7; Anal. Calcd for C26H18Cl2N2O2: C 67.69; H 3.93; N 6.07; Found: C 67.27; H 4.03; N 5.96%.
(1E,4E)-1-(2,6-dichlorophenyl)-5-(2-((8-methylquinazolin-4-yl)oxy)phenyl)penta-1,4-dien-3-one (5m)
Yield: 56.1%; yellow powder; mp: 161–163 °C; IR (KBr, cm−1) ν: 3421, 1676, 1620, 1587, 1481, 1400, 1359, 1225, 984; 1H NMR (CDCl3, 500 MHz) δ: 8.76 (s, 1H, Qu-2-H), 8.28 (d, J = 8.5 Hz, 1H, Ar–CH=), 7.73–7.85 (m, 3H, Cl–Ar–CH=, Qu-5,7-H), 7.52–7.59 (m, 3H, Cl–Ar-3,5-H, Qu- 6-H), 7.29–7.41 (m, 4H, Ar-3, 5-H, Cl–Ar-4-H, Cl–Ar–C=CH), 7.16 (m, 1H, Ar-4-H), 7.07 (d, J = 16.0 Hz, 1H, Ar–C=CH), 6.98 (d, J = 17.0 Hz, 1H, Ar-6-H), 2.75 (s, 3H, CH3); 13C NMR (CDCl3, 125 MHz) δ: 188.96, 167.10, 153.15, 151.70, 150.99, 137.59, 136.75, 135.68, 135.17, 134.57, 132.97, 131.85, 129.88, 128.85, 128.36, 127.88, 127.65,127.45, 126.72, 123.60, 121.04, 116.02, 17.79; Anal. Calcd for C26H18Cl2N2O2 (461): C, 67.69; H, 3.93; N, 6.07; N, 7.07%. Found: 67.36; H, 3.96; N, 5.84%.
(1E,4E)-1-(2,5-dimethoxyphenyl)-5-(2-((8-methylquinazolin-4-yl)oxy)phenyl)penta-1,4-dien-3-one (5n)
Yield: 43.6%; yellow powder; mp: 176–178 °C; IR (KBr, cm−1) ν: 3445, 1647, 1616, 1570, 1491, 1373, 1211, 984; 1H NMR (CDCl3, 500 MHz) δ: 8.79 (s, 1H, Qu-2-H), 8.30 (d, J = 8.6 Hz, 1H, CH3O–Ar–CH=), 7.75–7.92 (m, 4H, Ar–CH=, Qu-5,6,7-H), 7.50–7.59 (m, 2H, Ar-3,5-H), 7.39 (m, 1H, Ar-4-H), 7.15–7.29 (m, 2H, CH3O–Ar–C=CH, Ar–C=CH), 6.98 (s, 1H, CH3O–Ar-6-H), 6.89–6.93 (m, 2H, Ar-6-H, CH3O–Ar-3-H), 6.81 (d, J = 8.6 Hz, 1H, CH3O–Ar-4-H), 3.77 (s, 6H, 2CH3O), 2.76 (s, 3H, CH3); 13C NMR (CDCl3, 125 MHz) δ: 189.25, 167.14, 153.57, 153.18, 151.66, 151.03, 138.78, 136.58, 136.25, 134.51, 131.45, 128.41, 128.23, 127.55, 127.24, 126.58, 124.21, 123.54, 121.16, 120.94, 117.61, 116.16, 113.22, 112.47, 56.08, 55.85, 17.75. Anal. Calcd for C28H24N2O4 (453): C, 74.32; H, 5.35; N, 6.19; %. Found: C, 74.55; H, 5.68; N, 5.95%.
Human gastric cancer cell line MGC-803, human prostate cancer cell line PC3, and human breast cancer cell line Bcap-37 and one normal cell line NIH3T3 were obtained from Cell Bank of Type Culture Collection of Chinese Academy of Sciences (Shanghai, China). NIH3T3 was routinely maintained in a DMEM medium, while all the other cell lines were cultured in a 1640 medium. All the cells were grown in the medium supplemented with 10% FBS at 37 °C with 5% CO2.
MTT assay
The growth-inhibitory effects of the test compounds were determined on MGC-803, PC3, Bcap-37, and NIH3T3 cells. All cell types were seeded into 96-well plates at a density of 2 × 103 cells/well 100 μL of the proper culture medium and incubated with increasing concentrations of the compounds at 37 °C under cell culturing conditions. An MTT assay (Roche Molecular Biochemicals, 1465-007) was performed 72 h later according to the instructions provided by Roche. The precipitated formazan crystals were dissolved in SDS, and the absorbance was read at 595 nm with a microplate reader (BIO-RAD, model 680), which is directly proportional to the number of living cells in culture. The experiment was performed in triplicate. The percentage cytotoxicity was calculated using the formula.
$$\% {\text{Cytotoxicity}} = \left[ {\left( {{\text{Control}}_{\text{abs}} - {\text{Blank}}_{\text{abs}} } \right) - \left( {{\text{Test}}_{\text{abs}} - {\text{Blank}}_{\text{abs}} } \right)} \right]/\left( {{\text{Control}}_{\text{abs}} - {\text{Blank}}_{\text{abs}} } \right)\; \times \; 100$$
AO/EB staining
Cells were seeded in 6-well culture plates at a density of 5 × 104 cells/mL in 0.6 mL of medium and allowed to adhere to the plates overnight. The cells were incubated with different concentrations of compounds or vehicle solution (0.1% DMSO) in a medium containing 10% FBS for 12 h. After the treatment, the cover slip with monolayer cells was inverted on the glass slide with 20 μL of AO/EB stain (100 μg/mL), and finally analyzed for morphological characteristics of cell apoptosis under a fluorescence microscope (Olympus Co., Japan).
Hoechst 33,258 staining
Cells were seeded in 6-well culture plates at a density of 5 × 104 cells/mL in 0.6 mL of medium and allowed to adhere to the plates overnight. The cells were incubated with different concentrations of compounds or vehicle solution (0.1% DMSO) in a medium containing 10% FBS for 12 h. After the treatment, the cells were fixed with 4% paraformaldehyde for 10 min, followed by incubation with Hoechst 33,258 staining solution (Beyotime) for 5 min and finally analyzed for morphological characteristics of cell apoptosis under a fluorescence microscope (Olympus Co., Japan).
Flow cytometry analysis
To further quantitative analysis of apoptosis, the cells were washed with PBS, stained with annexinV-FITC and propidium iodide (PI) using the AnnexinV-FITC kit (KeyGEN BioTECH). The cells were then subjected to flow cytometry according to manufacturer's instructions and the stained cells were analyzed by FACS can flow cytometer (Becton–Dickinson, CA, USA).
All statistical analysis was performed with SPSS Version 19.0. Data was analyzed by one-way ANOVA. Mean separations were performed using the least significant difference method. Each experiment was replicated thrice, and all experiments yielded similar results. Measurements from all the replicates were combined, and treatment effects were analyzed.
ADM:
adriamycin
AO/EB:
acridine orange/ethidium bromide
13C NMR:
13C nuclear magnetic resonance
DMSO:
FCM:
HCPT:
10-hydroxyl camptothecine
1H NMR:
proton nuclear magnetic resonance
IR:
infra-red
MTT:
3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide
TUNEL:
terminal deoxynucleotidyl transferase biotin-dUTP nick end labeling
Twombly R (2005) Cancer surpasses heart disease as leading cause of death for all but the very elderly. J Natl Cancer 97:330–331
Nichols L, Saunders R, Knollmann FD (2012) Causes of death of patients with lung cancer. Arch Pathol Lab Med 136:1552–1557
Karthikeyan C, Solomon VR, Lee H, Trivedi P (2013) Design, synthesis and biological evaluation of some isatin-linked chalcones asnovel anti-breast cancer agents: a molecular hybridization approach. Biomed Prev Nutr 3:325–330
Fedele P, Marino A, Orlando L, Schiavone P, Nacci A, Sponziello F, Rizzo P, Calvani N, Mazzoni E, Cinefra M, Cinieri S (2012) Efficacy and safety of low-dose metronomic chemotherapy with capecitabine in heavily pretreated patients with metastatic breast cancer. Eur J Cancer 48:24–29
Huang W, Zhang J, Dorn HC, Zhang C (2013) Assembly of bio-nanoparticles for double controlled drug release. PLoS ONE 8:e74679
Sanmartín C, Plano D, Domínguez E, Font M, Calvo A, Prior C, Encío I, Palop AJ (2009) Synthesis and pharmacological screening of several aroyl and heteroaroyl selenylacetic acid derivatives as cytotoxic and antiproliferative agents. Molecules 14:3313–3338
Wu Y, Liu F (2013) Targeting mTOR: evaluating the therapeutic potential of resveratrol for cancer treatment. Anti-Cancer Agent Me 13:1032–1038
Li X, Xu W (2006) Recent patent therapeutic agents for cancer. Recent Pat Anti Cancer Drug Discov 1:1–30
Baker DD, Chu M, Oza U, Rajgarhia V (2007) The value of natural products to future pharmaceutical discovery. Nat Prod Rep 24:1225–1244
Butler MS (2005) Natural products to drugs: natural product derived compounds in clinical trials. Nat Prod Rep 22:162–195
Padhye S, Chavan D, Pandey S, Deshpande J, Swamy KV, Sarkar FH (2010) Perspectives on chemopreventive and therapeutic potential of curcumin analogs in medicinal chemistry. Mini Rev Med Chem 10:372–387
Menon VP, Sudheer AR (2007) Antioxidant and anti-inflammatory properties of curcumin. Adv Exp Med Biol 595:105–125
Barclay LR, Vinqvist MR, Mukai K, Goto H, Hashimoto Y, Tokunaga A, Uno H (2000) On the antioxidant mechanism of curcumin: classical methods are needed to determine antioxidant mechanism and activity. Org Lett 2:2841–2843
Jurenka JS (2009) Anti-inflammatory properties of curcumin, a major constituent of Curcuma longa: a review of preclinical and clinical research. Altern Med Rev 14:141–153
Jordan WC, Drew CR (1996) Curcumin-a natural herb with anti-HIV activity. J Natl Med Assoc 88:333
Sui Z, Salto R, Li J, Craik C, de Montellano PRO (1993) Inhibition of the HIV-1 and HIV-2 proteases by curcumin and curcumin boron complexes. Bioorg Med Chem 1:415–422
Beevers CS, Huang S (2011) Pharmacological and clinical properties of curcumin. Botanics Targets Ther. 1:5–18
Aggarwal BB, Kumar A, Bharti AC (2003) Anticancer potential of curcumin: preclinical and clinical studies. Anticancer Res 23:363–398
Liu Z, Sun Y, Ren L, Huang Y, Cai Y, Weng Q, Shen X, Li X, Liang G, Wang Y (2013) Evaluation of a curcumin analog as an anti-cancer agent inducing ER stress-mediated apoptosis in non-small cell lung cancer cells. BMC Cancer 13:494
Nagaraju GP, Zhu S, Wen J, Farris AB, Adsay VN, Diaz R, Snyder JP, Mamoru S, El-Rayes BF (2013) Novel synthetic curcumin analogues EF31 and UBS109 are potent DNA hypomethylating agents in pancreatic cancer. Cancer Lett 341:195–203
Lev-Ari S, Starr A, Katzburg S, Berkovich L, Rimmon A, Ben-Yosef R, Vexler A, Ron I, Earon G (2014) Curcumin induces apoptosis and inhibits growth of orthotopic human non-small cell lung cancer xenografts. J Nutr Biochem 25:843–850
Bairwa K, Grover J, Kania M, Jachak SM (2014) Recent developments in chemistry and biology of curcumin analogues. RSC Adv 4:13946–13978
Chandru H, Sharada AC, Bettadaiah BK, Kumar CS, Rangappa KS, Sunila Jayashree K (2007) In vivo growth inhibitory and anti-angiogenic effects of synthetic novel dienone cyclopropoxy curcumin analogs on mouse Ehrlich ascites tumor. Bioorg Med Chem 15:7696–7703
Luo H, Yang S, Cai Y, Peng Z, Liu T (2014) Synthesis and biological evaluation of novel 6-chloro-quinazolin derivatives as potential antitumor agents. Eur J Med Chem 84:746–752
Luo H, Yang S, Zhao Q, Xiang H (2014) Synthesis and antitumor properties of novel curcumin analogs. Med Chem Res 23:2584–2595
Luo H, Liu J, Jin L, Hu D, Chen Z, Yang S, Wu J, Song B (2013) Synthesis and antiviral bioactivity of novel (1E,4E)-1-aryl-5-(2-(quinazolin-4-yloxy)phenyl)-1,4-pentadien-3-one derivatives. Eur J Med Chem 63:662–669
Elmore S (2007) Apoptosis: a review of programmed cell death. Toxicol Pathol 35:495–516
Lowe SW, Lin AW (2000) Apoptosis in cancer. Carcinogenesis 21:485–495
Huang HL, Liu YJ, Zeng CH, Yao JH, Liang ZH, Li ZZ, Wu FH (2010) Studies of ruthenium(II) polypyridyl complexes on cytotoxicity in vitro, apoptosis, DNA-binding and antioxidant activity. J Mol Struct 966:136–143
Gao L, Shen L, Yu M, Ni J, Dong X, Zhou Y, Wu S (2014) Colon cancer cells treated with 5-fluorouracil exhibit changes in polylactosamine-type N-glycans. Mol Med Rep 9:1697–1702
Min Z, Wang L, Jin J, Wang X, Zhu B, Chen H, Cheng Y (2014) Pyrroloquinoline quinone induces cancer cell apoptosis via mitochondrial-dependent pathway and down-regulating cellular bcl-2 protein expression. J Cancer 5:609–624
Chan CK, Goh BH, Kamarudin MN, Kadir HA (2012) Aqueous fraction of Nephelium ramboutan-ake rind induces mitochondrial-mediated apoptosis in HT-29 human colorectal adenocarcinoma cells. Molecules 17:6633–6657
Liu G, Song BA, Sang WJ, Yang S, Jin LH, Ding X (2004) Synthesis and bioactivity of N-aryl-4-aminoquinazoline compounds. Chin J Org Chem 10:1296–1299
McGookin A, Heilbron IM (1924) CCLXXV.—The isomerism of the styryl alkyl ketones. Part I. The isomerism of 2-hydroxystyryl methyl ketone. J Chem Soc Trans 125:2099–2105
HL and SY synthesized the compounds and carried out most of the bioassay experiments. DH took part in the compound structural elucidation and bioassay experiments. WX carried out some structure elucidation experiments. PX assisted in structural elucidation experiments. All authors read and approved the final manuscript.
The authors wish to thank the Scientific Research of Guizhou (No. 20126006) for the financial support.
Guizhou Fruit Institute, Guizhou Academy of Agricultural Sciences, Guiyang, 550006, P. R. China
Hui Luo & Pu Xie
R&D Center, Sinphar Tian-Li Pharmaceutical Co., Ltd, Hangzhou, 311100, P. R. China
Shengjie Yang & Da Hong
State Key Laboratory Breeding Base of Green Pesticide and Agricultural Bioengineering, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Guizhou University, Guiyang, 550025, P. R. China
Hui Luo, Shengjie Yang & Wei Xue
Ctr for R&D of Fine Chemicals, Guizhou University, Guiyang, 550025, P. R. China
Hui Luo
Shengjie Yang
Da Hong
Wei Xue
Pu Xie
Correspondence to Hui Luo.
Hui Luo and Shengjie Yang contributed equally to this work
Luo, H., Yang, S., Hong, D. et al. Synthesis and in vitro antitumor activity of (1E,4E)-1-aryl-5-(2-((quinazolin-4-yl)oxy)phenyl)-1,4-pentadien-3-one derivatives. Chemistry Central Journal 11, 23 (2017). https://doi.org/10.1186/s13065-017-0253-9
Asymmetric curcumin analogs
Quinazoline derivatives of curcumin
Antitumor activity
MGC-803 | CommonCrawl |
MathOverflow is a question and answer site for professional mathematicians. It only takes a minute to sign up.
Is there a complex structure on the 6-sphere?
I don't know who first asked this question, but it's a question that I think many differential and complex geometers have tried to answer because it sounds so simple and fundamental. There are even a number of published proofs that are not taken seriously, even though nobody seems to know exactly why they are wrong.
The latest published proof to the affirmative: http://arxiv.org/abs/math/0505634 Even though the preprint is old it was just published in Journ. Math. Phys. 56, 043508-1-043508-21 (2015)
dg.differential-geometry complex-geometry open-problems
DanielFetchinson
$\begingroup$ A topical preprint has been posted on ArXiv (asserting that $S^6$ has a complex structure): front.math.ucdavis.edu/0505.5634 $\endgroup$ – Ramsay Dec 7 '10 at 19:33
$\begingroup$ And there is a new version out: arxiv.org/abs/math/0505634 claiming to completely overhaul the proof. Did anyone take a look with expertise in this area? $\endgroup$ – Daniel Apr 30 '11 at 10:28
$\begingroup$ I think you'll find that very few experts are willing to study the 4th revision, if the first 3 had serious flaws. $\endgroup$ – Deane Yang Apr 30 '11 at 12:53
$\begingroup$ Curiously enough, I cannot find any serious discussions and comments by experts on this recently published paper in the online math community. I would like to see some. $\endgroup$ – Alex Fok May 20 '15 at 15:28
$\begingroup$ Gabor Etesi posted a new paper to the arXiv on 8Sep15: "Complex structure on the six dimensional sphere explained": arXiv abstract. $\endgroup$ – Joseph O'Rourke Sep 9 '15 at 11:53
Of course, I'm not about to answer this question one way or the other, but there are at least a couple of interesting things one might point out. Firstly, it has been shown (although I forget by whom) that there is no complex structure on S6 which is also orthogonal with respect to the round metric. The proof uses twistor theory. The twistor space of S6 is the bundle whose fibre at a point p is the space of orthogonal almost complex structures on the tangent space at p. It turns out that the total space is a smooth quadric hypersurface Q in CP7. If I remember rightly, an orthogonal complex structure would correspond to a section of this bundle which is also complex submanifold of Q. Studying the complex geometry of Q allows you to show this can't happen.
Secondly, there is a related question: does there exist a non-standard complex structure on CP3? To see the link, suppose there is a complex structure on S6 and blow up a point. This gives a complex manifold diffeomorphic to CP3, but with a non-standard complex structure, which would seem quite a weird phenomenon. On the other hand, so little is known about complex threefolds (in particular those which are not Kahler) that it's hard to decide what's weird and what isn't.
Finally, I once heard a talk by Yau which suggested the following ambitious strategy for finding complex structures on 6-manifolds. Assume we are working with a 6-manifold which has an almost complex structure (e.g. S6). Since the tangent bundle is a complex vector bundle it is pulled back from some complex Grassmanian via a classifying map. Requiring the structure to be integrable corresponds to a certain PDE for this map. One could then attempt to deform the map (via a cunning flow, continuity method etc.) to try and solve the PDE. I have no idea if anyone has actually tried to carry out part of this program.
Joel Fine
$\begingroup$ It was recently realized that the theorem that there is no complex structure on the 6-sphere that is orthogonal with respect to the standard metric was actually proved much earlier than in Lebrun's paper (which dates from the 1980s). The earliest proof I know is in a 1953 paper by André Blanchard: Recherche de structures analytiques complexes sur certaines variétés. C. R. Acad. Sci. Paris, t. 236 (1953), 657–659. MR0052870 $\endgroup$ – Robert Bryant Jan 11 '17 at 11:47
$\begingroup$ Blanchard's paper can be freely read and downloaded online here: gallica.bnf.fr/ark:/12148/bpt6k3188h/f657.item $\endgroup$ – YangMills Nov 1 '17 at 13:02
Michael Atiyah posted a short paper "The Non-Existent Complex 6-Sphere" https://arxiv.org/abs/1610.09366 with a claimed negative solution to the problem.
2 revs
Nick Ulyanov
$\begingroup$ A claimed negative solution? It's not been refereed yet, though big names in geometry gave seen and discussed the details, apparently. $\endgroup$ – David Roberts Oct 31 '16 at 7:50
$\begingroup$ Does anyone know if there is a public discussion of this paper somewhere online? I imagine many people would be curious to hear experts opinions, if they are written somewhere. $\endgroup$ – Peter Samuelson Oct 31 '16 at 16:44
$\begingroup$ I'm not sure that there's consensus but I'd certainly upvote (and vote reopening, if applicable) such a question. (By the way, I'd rather like a question sounding like "Can somebody explain the argument in ...?" than "Is this paper correct?") $\endgroup$ – YCor Oct 31 '16 at 17:58
$\begingroup$ According to some experts I've consulted and who have discussed the proof with Atiyah himself, there is a lot of skepticism of Atiyah's proof. Given Atiyah's stature in our community, I doubt you'll find much public discussion about this. But there is plenty of indirect evidence of the skepticism. $\endgroup$ – Deane Yang Oct 31 '16 at 18:06
$\begingroup$ If there's skepticism about the proof, it should be fairly easy for experts to point out where the problems might lie, because it's so short. For example: does the problem arise in the step Atiyah writes in bold-face? $\endgroup$ – John Baez Nov 1 '16 at 21:46
A little more detail to Joel's first paragraph (I can't see how to add a comment to it, sorry!).
The argument that there is no orthogonal complex structure on the 6-sphere is due to Claude Lebrun and the point is that such a thing, viewed as a section of twistor space, has as image a complex submanifold. Now, on the one hand, this submanifold is Kaehler, and so has non-trivial second cohomology, since the twistor space is Kaehler. On the other hand, the section itself provides a diffeomorphism of our submanifold with the 6-sphere which has trivial second cohomology. Neat, huh?
Fran Burstall
$\begingroup$ This is interesting. Doesn't this bear some similarity to the argument used by Adler (who is or was a colleague of LeBrun) in his published "proof" of this conjecture? My recollection is that Adler tried to show that a Riemannian metric compatible with a complex structure on S^6 could be deformed into a Kahler metric, leading to the same contradiction. By the way, I never found anyone who was able to identify exactly why Adler's proof is wrong. $\endgroup$ – Deane Yang Oct 26 '09 at 1:13
$\begingroup$ That's right. He has a continuity argument involving a notion of "distinguished metric" for an almost complex structure that I have some difficulty making sense of: it requires embedding yr almost complex manifold in some high dimensional sphere. $\endgroup$ – Fran Burstall Oct 26 '09 at 21:10
$\begingroup$ Yes, I got completely lost when he embedded the sphere into a high dimensional space. I couldn't see why that would help at all and the calculations become a total mess. $\endgroup$ – Deane Yang Aug 8 '13 at 17:29
Here is a philosophical idea. exploit the following asymmetry in our state of knowledge about closed orientable manifolds: whereas almost complex is equivalent to almost symplectic: symplectic entails a further homological condition while being complex entails no further known homological condition.
This potential unknown further condition to be a closed complex manifold must reduce to no condition in complex dimension one as does the symplectic condition reduce to no condition in complex dimension one. Looking at known examples one can ask whether for a closed complex manifold above complex dimension one must the sum of the betti numbers necessarily be at least three. Note complex projective two space realizes three and above complex dimension two one has the circle cross odd spheres with total betti number four. So the guess is close to being sharp if true, and if true proves only the two sphere among manifolds with the betti numbers of the even sphere can be a closed complex manifold. Note: For manifolds of even complex dimension the last statement has been known since the work of rene thom on cobordism, the signature and the euler characteristic showing there is not even an almost complex structure.
Dennis Sullivan
$\begingroup$ Dennis, many thanks for this. I did not know the last statement regarding even complex dimensions. $\endgroup$ – Deane Yang Apr 4 '16 at 10:33
There was a workshop on this problem at Univ. Marburg, in March 2017:
https://www.mathematik.uni-marburg.de/~agricola/Hopf2017/
It resulted in a special issue of the Journal of Differential Geometry and its Applications (April 2018):
https://www.sciencedirect.com/journal/differential-geometry-and-its-applications/vol/57/suppl/C
Most of the papers from that issue are on the ArXiv. The introductory one is a good historical overview:
Ilka Agricola, Giovanni Bazzoni, Oliver Goertsches, Panagiotis Konstantis, Sönke Rollenske, On the history of the Hopf problem, arXiv:1708.01068
The pdf of that paper has arxiv links to most of the other papers (or see below). It mentions the papers by Etesi and Atiyah, saying about them that "the community of experts does not seem to find unity" and linking these MO threads.
By all appearances though, the problem is still open ;-).
The rest of the papers are:
Panagiotis Konstantis, Maurizio Parton, Almost complex structures on spheres, arXiv:1707.03883
Cristina Draper, Notes on $G_2$: The Lie algebra and the Lie group, arXiv:1704.07819
Ilka Agricola, Aleksandra Borówka, Thomas Friedrich, $S^6$ and the geometry of nearly Kähler $6$-manifolds, arXiv:1707.08591
Ana Cristina Ferreira, Non-existence of orthogonal complex structures on the round 6-sphere, arXiv:1906.02062
Boris Kruglikov, Non-existence of orthogonal complex structures on 6-sphere with a metric close to the round one, arXiv:1708.07297
Daniele Angella, Hodge numbers of a hypothetical complex structure on $S^6$, arXiv:1705.10518
Christian Lehn, Sönke Rollenske, Caren Schinko, The complex geometry of a hypothetical complex structure on $S^6$, ResearchGate (request only)
Aleksy Tralle, Markus Upmeier, Chern's contribution to the Hopf problem: an exposition based on Bryant's paper, arXiv:1708.02904
If such a complex structure exists, it would weird indeed! For example, as shown by Campana, Demailly and Peternell (Compositio 112, 77-91), if such a thing exists, then $S^6$ would have no non-constant meromorphic functions. In particular, $S^6$ can't be Moishezon, let alone algebraic.
$\begingroup$ It should be possible to show that majority of complex 3-folds are not Moishezon. So, I would not say that this remark is a real argument against existsing of a complex strucutre on S^6. There is a nice phrase in the aricle of Gromov. ihes.fr/~gromov/topics/SpacesandQuestions.pdf Page 30. "How much do we gain in global understanding of a compact (V, J) by assuming that the structure J is integrable (i.e. complex)? It seems nothing at all: there is no single result concerning all compact complex manifolds" $\endgroup$ – Dmitri Panov Jan 21 '10 at 23:25
$\begingroup$ I'm not sure I understand this remark by Gromov. In the complex analytic case we have the Dolbeault resolution -- one of the ways to state the integrability condition is precisely that Dolbeault complex is a complex. This leads to topological statements, e.g. the alternating sum of the Euler characteristics of $\Omega^i$'s (computed using the Chern classes) is the Euler characteristic of the manifold itself. This may or may not be true in the almost-complex case, but I don't see how to prove it. $\endgroup$ – algori Jan 22 '10 at 2:02
$\begingroup$ I think, the remark of Gromov is quite clear, it is quite hard to belive this remark, but the message is clear. As for Euler characteristics, David gave a correct explanation mathoverflow.net/questions/12601/… $\endgroup$ – Dmitri Panov Jan 22 '10 at 9:05
$\begingroup$ What I meant was precisely that: this is hard to believe. The Euler characteristic is just the first thing that comes to mind. $\endgroup$ – algori Jan 22 '10 at 17:41
Continuing Joel Fine and Fran Burstall's answer about, indeed "neat", Lebrun's result. Just want to recall that the "orthogonal" twistor space of any $2n$-dimensional pseudo-sphere $SO(2p+1,2q)/SO(2p,2q)$ can be written as $SO(2p+2,2q)/U(p+1,q)$. So the Kähler manifold in question, in case of the 6-sphere, is $SO(8)/U(4)$. One should think of each $j:T_xS^6\rightarrow T_xS^6$ as a linear map on $R^8$ with $j(x)=-1$ and $j(1)=x$. Well, proofs have been rewritten of LeBrun's result. I wish I had more opinion on this:
R. Albuquerque, Isabel M.C. Salavessa, On the twistor space of pseudo-spheres Differential Geometry and its Applications, 25 (2007), pp. 207-219, doi:10.1016/j.difgeo.2006.08.004, arXiv:math/0509442.
Here is a shot in the dark (Disclosure: I really know nothing about this problem).
Let $G:=\mathsf{SU}(2)$ act on $G^3$ by simultaneous conjugation; namely, $$g\cdot(a,b,c)=(gag^{-1},gbg^{-1},gcg^{-1}).$$ Then the quotient space is homeomorphic to $S^6$ (see Bratholdt-Cooper).
The evaluation map shows that the character variety $\mathfrak{X}:=\mathrm{Hom}(\pi_1(\Sigma),G)/G$ is homeomorphic to $G^3/G,$ where $\Sigma$ is an elliptic curve with two punctures.
Fixing generic conjugation classes around the punctures, by results of Mehta and Seshadri (Math. Ann. 248, 1980), gives the moduli space of fixed determinant rank 2 degree 0 parabolic vector bundles over $\Sigma$ (where we now think of the punctures are marked points with parabolic structure). In particular, these subspaces are projective varieties.
Letting the boundary data vary over all possibilities gives a foliation of $\mathfrak{X}\cong G^3/G\cong S^6$. Therefore, we have a foliation of $S^6$ where generic leaves are projective varieties; in particular, complex.
Moreover, the leaves are symplectic given by Goldman's 2-form; making them Kähler (generically). The symplectic structures on the leaves globalize to a Poisson structure on all of $\mathfrak{X}$.
Is it possible that the complex structures on the generic leaves also globalize?
Here are some issues:
As far as I know, the existence of complex structures on the leaves is generic. It is known to exist exactly when there is a correspondence to a moduli space of parabolic bundles. This happens for most, but perhaps not all, conjugation classes around the punctures (or marked points). So I would first want to show that all the leaves of this foliation do in fact admit a complex structure. Given how explicit this construction is, if it is true, it may be possible to establish it by brute force.
Assuming item 1., then one needs to show that the structures on the leaves globalize to a complex structure on all of $\mathfrak{X}$. Given that in this setting, the foliation is given by the fibers of the map: $\mathfrak{X}\to [-2,2]\times [-2,2]$ by $[\rho]\mapsto (\mathrm{Tr}(\rho(c_1)),\mathrm{Tr}(\rho(c_2)))$ with respect to a presentation $\pi_1(\Sigma)=\langle a,b,c_1,c_2\ |\ aba^{-1}b^{-1}c_1c_2=1\rangle$, it seems conceivable that the structures on the leaves might be compatible.
Moreover, $\mathfrak{X}$ is not a smooth manifold. It is singular despite being homeomorphic to $S^6$. So lastly, one would have to argue that everything in play (leaves, total space and complex structure) can by "smoothed out" in a compatible fashion. This to me seems like the hardest part, if 1. and 2. are even true.
Anyway, it is a shot in the dark, probably this is not possible...just the first thing I thought of when I read the question.
Sean Lawton
$\begingroup$ Well, certainly the leaves can't be smoothed to define a smooth foliation of $S^6$ because the tangent bundle of $S^6$ (more generally, any even dimensional sphere) is irreducible, so it has no nontrivial foliations. $\endgroup$ – Robert Bryant Nov 2 '14 at 16:09
$\begingroup$ Nice! Thanks Robert! And certainly, since the leaves are real dimension 4, the foliation cannot be trivial upon smoothing. Great! I still wonder if 1. and 2. can be accomplished, but now it seems less interesting (but still interesting enough I think for a potential student project one day). $\endgroup$ – Sean Lawton Nov 2 '14 at 16:59
Here is another paper by Gabor Etesi claiming to contain a different proof of existence of the complex structure on 6-sphere, by describing an explicit diffeomorphism to a conjugate orbit in $G_2$: front.math.ucdavis.edu/1509.02300.
Peter Michor
$\begingroup$ However, the basic claim in this paper, namely that this conjugacy class orbit in $\mathrm{G}_2$ is a complex submanifold with respect to (one of) Samelson's left-invariant complex structures on $\mathrm{G}_2$, is easily seen to be false by direct computation. $\endgroup$ – Robert Bryant Jan 11 '17 at 11:37
Personally, I do not think that that proof is correct. This is a simple question of a compact homogeneous spaces. Any even dimensional compact Lie group is a (homogeneous) complex torus bundle over a projective rational homogeneous space (which is also simply connected---K\"ahler-Einstein with positive Ricci curvature) and therefore is complex. The paper basically said that the complex structure J_H comes down to S^6 is integrable. His reason was that J_H is the restriction of J_{G_2} to H. However, H is not closed under the Lie bracket. That is why J_H can not simply come down to S^6.
$\begingroup$ Which proof are you referring to in this answer? $\endgroup$ – j.c. Sep 19 '13 at 12:24
$\begingroup$ It sounds like the paper described in this talk: math.bme.hu/~etesi/s6.renyi.pdf. $\endgroup$ – Brendan Murphy Sep 21 '13 at 1:28
This is a famous open-problem. It is still unknown.
Chris Schommer-Pries
$\begingroup$ Yeah, I know. But I think it's a great question. Am I supposed to post only questions for which I believe an answer is already known? $\endgroup$ – Deane Yang Oct 22 '09 at 23:25
$\begingroup$ Well, for problems that you know are open, there's already a site for collecting them: the open problem garden garden.irmacs.sfu.ca $\endgroup$ – Charles Siegel Oct 23 '09 at 0:06
$\begingroup$ Thanks for the link. I didn't know about it. But it seems like a less active site? I don't see any differential geometry there at all. $\endgroup$ – Deane Yang Oct 23 '09 at 1:26
$\begingroup$ I propose that we move future meta-discussion regarding open problems to this thread on meta. $\endgroup$ – Scott Morrison♦ Oct 23 '09 at 3:05
$\begingroup$ @Charles Siegel: You mean like the famous open problem in Geometry "Heavy raw boundary Final Fantasy 14 CD Key Generator"? Doesn't seem like a very respectable page^^ $\endgroup$ – Matthias Ludewig Aug 8 '13 at 17:44
Thanks for contributing an answer to MathOverflow!
Not the answer you're looking for? Browse other questions tagged dg.differential-geometry complex-geometry open-problems or ask your own question.
A paper to the question, if the six dimensional sphere is a complex manifold
Complex structure on $S^6$ gets published in Journ. Math. Phys
Atiyah's May 2018 paper on the 6-sphere
Atiyah's paper on complex structures on $S^6$
What is the current understanding regarding complex structures on the 6-sphere?
Atiyah's paper "Non-existent complex 6-sphere"
Diffeomorphism group of the unit sphere of complex n-space
Pushing Complex Structure Forward
Which almost complex manifolds admit a complex structure?
The adjoint operators as elliptic operators
Complex Structure on Manifold of Maps
Almost complex structure and intrinsic torsion | CommonCrawl |
Over 3 years (37)
Yearbook for Traditional Music (3)
Bulletin of the Australian Mathematical Society (2)
The Journal of Agricultural Science (2)
Canadian Journal of Neurological Sciences (1)
Canadian Mathematical Bulletin (1)
International Journal of Technology Assessment in Health Care (1)
Invasive Plant Science and Management (1)
Boydell & Brewer (28)
Cambridge University Press (1)
International Council for Traditional Music (3)
Australian Mathematical Society Inc (2)
Canadian Mathematical Society (1)
Canadian Neurological Sciences Federation (1)
Health Technology Assessment International (1)
Royal College of Psychiatrists / RCPsych (1)
Warfare in History (27)
Journal of Medieval Military History (1)
FIXED POINTS OF POLYNOMIALS OVER DIVISION RINGS
Division rings and semisimple Artin rings
Rings and algebras arising under various constructions
Smooth dynamical systems: general theory
Arithmetic and non-Archimedean dynamical systems
ADAM CHAPMAN, SOLOMON VISHKAUTSAN
Journal: Bulletin of the Australian Mathematical Society / Volume 104 / Issue 2 / October 2021
Published online by Cambridge University Press: 01 March 2021, pp. 256-262
We study the discrete dynamics of standard (or left) polynomials $f(x)$ over division rings D. We define their fixed points to be the points $\lambda \in D$ for which $f^{\circ n}(\lambda )=\lambda $ for any $n \in \mathbb {N}$ , where $f^{\circ n}(x)$ is defined recursively by $f^{\circ n}(x)=f(f^{\circ (n-1)}(x))$ and $f^{\circ 1}(x)=f(x)$ . Periodic points are similarly defined. We prove that $\lambda $ is a fixed point of $f(x)$ if and only if $f(\lambda )=\lambda $ , which enables the use of known results from the theory of polynomial equations, to conclude that any polynomial of degree $m \geq 2$ has at most m conjugacy classes of fixed points. We also show that in general, periodic points do not behave as in the commutative case. We provide a sufficient condition for periodic points to behave as expected.
Exploring the opportunities for alignment of regulatory postauthorization requirements and data required for performance-based managed entry agreements
Hans-Georg Eichler, Roisin Adams, Einar Andreassen, Peter Arlett, Marc van de Casteele, Suzannah J. Chapman, Wim G. Goettsch, Jonathan Lind Martinsson, Jordi Llinares-Garcia, Anna Nachtnebel, Elias Pean, Guido Rasi, Tove Ragna Reksten, Lonneke Timmers, Rick A. Vreman, Inneke van de Vijver, Martin Wenzl
Journal: International Journal of Technology Assessment in Health Care / Volume 37 / Issue 1 / 2021
Published online by Cambridge University Press: 23 August 2021, e83
Print publication: 2021
Performance-based managed entry agreements (PB-MEAs) might allow patient access to new medicines, but practical hurdles make competent authorities for pricing and reimbursement (CAPR) reluctant to implement PB-MEAs. We explored if the feasibility of PB-MEAs might improve by better aligning regulatory postauthorization requirements with the data generation of PB-MEAs and by active collaboration and data sharing. Reviewers from seven CAPRs provided structured assessments of the information available at the European Medicines Agency (EMA) Web site on regulatory postauthorization requirements for fifteen recently authorized products. The reviewers judged to what extent regulatory postauthorization studies could help implement PB-MEAs by addressing uncertainty gaps. Study domains assessed were: patient population, intervention, comparators, outcomes, time horizon, anticipated data quality, and anticipated robustness of analysis. Reviewers shared general comments about PB-MEAs for each product and on cooperation with other CAPRs. Reviewers rated regulatory postauthorization requirements at least partly helpful for most products and across domains except the comparator domain. One quarter of responses indicated that public information provided by the EMA was insufficient to support the implementation of PB-MEAs. Few PB-MEAs were in place for these products, but the potential for implementation of PB-MEAs or collaboration across CAPRs was seen as more favorable. Responses helped delineate a set of conditions where PB-MEAs may help reduce uncertainty. In conclusion, PB-MEAs are not a preferred option for CAPRs, but we identified conditions where PB-MEAs might be worth considering. The complexities of implementing PB-MEAs remain a hurdle, but collaboration across silos and more transparency on postauthorization studies could help overcome some barriers.
Essential Dimension, Symbol Length and $p$-rank
Linear algebraic groups and related topics
Higher algebraic $K$-theory
General commutative ring theory
Adam Chapman, Kelly McKinnie
Journal: Canadian Mathematical Bulletin / Volume 63 / Issue 4 / December 2020
Published online by Cambridge University Press: 04 February 2020, pp. 882-890
We prove that the essential dimension of central simple algebras of degree $p^{\ell m}$ and exponent $p^{m}$ over fields $F$ containing a base-field $k$ of characteristic $p$ is at least $\ell +1$ when $k$ is perfect. We do this by observing that the $p$-rank of $F$ bounds the symbol length in $\text{Br}_{p^{m}}(F)$ and that there exist indecomposable $p$-algebras of degree $p^{\ell m}$ and exponent $p^{m}$. We also prove that the symbol length of the Kato-Milne cohomology group $\text{H}_{p^{m}}^{n+1}(F)$ is bounded from above by $\binom{r}{n}$ where $r$ is the $p$-rank of the field, and provide upper and lower bounds for the essential dimension of Brauer classes of a given symbol length.
Mycobacterium chimaera encephalitis following cardiac surgery in three adult immunocompetent patients: first detailed neuropathological report
SK Das, D Lau, R Cooper, J Chen, VL Sim, JA McCombe, GJ Tyrrell, R Bhargavi, B Adam, E Chapman, C Garady, K Antonation, S Ip, L Saxinger, FKH van Landeghem
Journal: Canadian Journal of Neurological Sciences / Volume 46 / Issue s2 / September 2019
Published online by Cambridge University Press: 05 September 2019, pp. S65-S66
Print publication: September 2019
Non-tuberculous mycobacterium encephalitis is rare. Since 2013, a global outbreak of Mycobacterium chimaera infection has been attributed to point-source contamination of heater cooler units used in cardiac surgery. Disseminated M. chimaera infection has presented many unique challenges, including non-specific clinical presentations with delays in diagnosis, and a high mortality rate among predominantly immunocompetent adults. Here, we describe three patients with fatal disseminated Mycobacterium chimaera infection showing initially non-specific, progressively worsening neurocognitive decline, including confusion, delirium, depression and apathy. Autopsy revealed widespread granulomatous encephalitis of the cerebrum, brain stem and spinal cord, along with granulomatous chorioretinitis. Cerebral involvement and differentiation between mycobacterial granulomas and microangiopathic changes can be assessed best on MRI with contrast enhancement. The prognosis of M. chimaera encephalitis appears to be very poor, but might be improved by increased awareness of this new syndrome and timely antimicrobial treatment.
This presentation will enable the learner to:
1. Describe the clinical, radiological and neuropathological findings of Mycobacterium chimaera encephalitis
2. Be aware of this rare form of encephalitis, and explain its diagnosis, prognosis and management
COMMON SLOTS OF BILINEAR AND QUADRATIC PFISTER FORMS
Forms and linear algebraic groups
ADAM CHAPMAN
Journal: Bulletin of the Australian Mathematical Society / Volume 98 / Issue 1 / August 2018
We show that over any field $F$ of characteristic 2 and 2-rank $n$ , there exist $2^{n}$ bilinear $n$ -fold Pfister forms that have no slot in common. This answers a question of Becher ['Triple linkage', Ann. $K$ -Theory, to appear] in the negative. We provide an analogous result also for quadratic Pfister forms.
Book: Welsh Soldiers in the Later Middle Ages, 1282–1422
Published by: Boydell & Brewer
Print publication: 20 August 2015, pp 151-152
Print publication: 20 August 2015, pp 9-10
Print publication: 20 August 2015, pp xv-xvi
Warfare in History
2 - Edward of Caernarfon
Print publication: 20 August 2015, pp 36-56
Of all England's fourteenth-century kings, Edward II was the most dependent upon his Welsh subjects. As the first English prince of Wales he had been lord of the shires of both North and West Wales. Aside from being born in the midst of the building site that was Caernarfon Castle, he had maintained a number of Welshmen in his household as prince. Nevertheless it is likely that the connection with the uchelwyr, the class of the native elite that had deserted Llywelyn ap Gruffudd, was acquired on campaign. The loyalty displayed towards Edward II by this group right until the end of his reign is remarkable and the importance of this support has been underappreciated by many historians. Edward II's military machine was very much the same as that developed by Edward I but was extended above and beyond sustainable levels, and the number of Welshmen employed by Edward II in his campaigns to Scotland was larger even than those deployed by his father in the 1290s. The machine was found wanting most tellingly at Bannockburn in 1314 and again in 1322, but Edward did little to change the nature of his armies and few of his campaigns enjoyed conspicuous success. Enormous levies of Welshmen, generally led by their fellow countrymen, were essential to Edward's ability to wage war and also for the ability of his barons to oppose him. Most of Edward's opponents held substantial Marcher lordships so it was natural that, in their struggles with their king, Edward's barons deployed their Welsh tenants to buttress their causes. In the dispute over the ordinances intended to limit Edward's power in 1312, in the war fought against Hugh Despenser the younger in 1321 and in support of Thomas of Lancaster during the winter of 1321–22, the power of the lords of the March was measured in the numbers of men their estates could supply.
This chapter will consider the role of Welsh military resources in the political narrative of Edward II's early reign before turning its attention to the continuation of the wars against the Scots as far as Bannockburn. Next, it will discuss the political fallout from this cataclysmic defeat as it related to Wales: the revolt in Glamorgan led by Llywelyn Bren in 1316 through to the campaigns that led to the defeat of the king's enemies at Boroughbridge in 1322.
3 - The Wars of Edward III: Scotland and France 1327–1360
The reign of Edward III was, in military terms, a period of transition. At the beginning of his reign, the military systems inherited from Edward I were largely unchanged and their failures, obvious by the time of Bannockburn in 1314, had not been addressed. In 1327 the armies recruited in the young king's name for service in Scotland were dominated by foot soldiers levied on counties and liberties by commissions of array. The men-at-arms accompanying them were drawn primarily from the royal household and the households of the king's barons. By the time the first phase of the French war was concluded by the Treaty of Brétigny in 1360, English armies had started to assume a different character. The foot soldier had almost wholly given way to the mounted archer serving in mixed retinues with mounted men-at-arms. Although mounted, and thus able to travel swiftly on horseback, both men-at-arms and mounted archers generally fought on foot. The means by which they were recruited also changed: commissions of array gradually gave way to recruitment by military indenture, and paid service was the norm. Effectively, this privatised military recruitment: captains were responsible for gathering both men-at-arms and archers, usually in approximately equal numbers, for fixed periods in a clearly defined contractual arrangement. This change was gradual; although indentures had been known in the reign of Edward I, they were then only for garrison service, but the effects on the military participation of men from the shires and March of Wales were marked. The ill-equipped foot soldier was obsolescent by the 1340s and, whether they were levied from Welsh shires, Marcher lordships or English counties, their decline was a result of their inflexibility. Although foot soldiers were recruited after 1360, they were generally employed, as we shall see, in specialist roles.
Wales was subject to other changes. The generation which had witnessed Edward I's conquest, that had bolstered Edward II's authority, and that had led armies far greater than those ever assembled by any Welsh prince, came to the end of their careers and their lives. By the 1340s there was an identifiable change in attitude among the Welsh elite toward fighting in English wars. The leaders of Welsh society had always defined their position by military leadership. By the second third of the fourteenth century this tied Welshmen, militarily, to their lords.
4 - Before Glyndŵr: 1360–1400
Print publication: 20 August 2015, pp 78-108
Welsh historians have tended to view the second half of the fourteenth century in the context of two attempts at Welsh self-determination. These were the claims of the last descendant of the princes of Gwynedd, Owain Lawgoch (d. 1378), to be prince of Wales and then later, the rebellion led by Owain Glyndŵr in pursuit of the same title during the first decade of the fifteenth century. The one was concentrated in France and attracted a sympathetic response in Wales and from Welshmen seeking service against the English in France. The other was a revolt fomented among men who had made their careers as soldiers and administrators of the English Crown. War determined the pattern of relationships between Welsh and English because, between 1360 and 1400, there was scarcely a year that did not witness some military engagement or defensive activity. That said, foreign expeditions led by the king in person were relatively unusual. There were none, for example, between 1359 and 1385. There was a further gap then to 1394 and no more before Richard II's fateful campaign to Ireland in 1399. The Irish campaigns marked a return of war to the lands of Wales in that they offered a departure point. Richard II was the first English king to visit Ireland since John campaigned there in 1210.
This chapter begins in the years after the Anglo-French peace settlement in 1360. English soldiers continued to have a presence in France and the first fruits of the ideas of national self-determination that bloomed under Owain Glyndŵr were harvested under the leadership of the last descendant, in the male line, of the princes of Gwynedd. The leadership of Owain Lawgoch (Owain of the Red Hand) meant that Welshmen served on both sides after the failure of the Brétigny settlement. The records available to us also reveal that Welshmen played their part in conflict at sea and in Iberia, Scotland and Ireland.
In this period, the military aspect of Marcher lordship expressed itself as forcefully as before, but, in common with the scale of armies after 1359, the scale of Welsh involvement was much reduced. The general experience of those living in the principality of Wales and the March was a peaceful one, although significant tensions lay beneath the surface. Peace and stability, combined with proactive management, meant that Marcher revenues greatly increased in the second half of the fourteenth century.
A Note on the Welsh Language
Print publication: 20 August 2015, pp xiv-xiv
Print publication: 20 August 2015, pp v-v
6 - War and Welsh Society: Military Obligation and Organisation
The period discussed in this book was one of significant change in the nature of military service in both England and Wales. The conquest of Wales by Edward I completed a process of expansion by the English Crown into Wales which had lasted centuries. It also marked the beginning of the end of another process, the transition from armies with a feudal component that held land in return for military service to armies recruited by various methods in return for pay. The settlement imposed upon Edward's newly conquered territories in Gwynedd – the statutes of Rhuddlan of 1284 – imposed legal and administrative conditions upon the Welsh. In common with similar provisions in England, they also imposed conditions upon men to serve in arms against the king's enemies. This chapter and that which follows will consider the theoretical implications of this settlement: military obligation in law and custom, and then their practical application in the practices and processes of recruitment, payment and deployment.
The nature of military obligation, as it existed in the English realm in the late thirteenth and early fourteenth centuries, was in flux. English kings had long found the apparently simple demands of the feudal summons a severe constraint on their ability to wage war. For this reason they had employed mercenaries and sought ways around its restrictions since at least the early twelfth century. Welshmen, paid for their service were often part of the solution. Edward I's Welsh wars changed the state of military obligation in England. The duty of all free men in England to possess the arms and military equipment appropriate to their status and wealth had been set out by Henry II in the Assize of Arms in 1181. In their laws, the Welsh princes invoked similar obligations, although these were not generally tied to land until Llywelyn ab Iorwerth made attempts to establish a military elite based on tenure of land in the thirteenth century. In November 1282 all free men with at least twenty liberates of land who were not serving in the Welsh war were summoned to appear at Northampton early in the next year, together with shire and borough representatives. Edward's objective was doubtless to secure financial grants in return for the service they were not doing. If so, he was unsuccessful and the obligation of military service inherent in the summons was not recognised.
Appendix 2 - Important Welsh Figures
Dafydd ap Gwilym A poet (fl. c. 1330–c. 1360), best known for his cywyddau. Traditionally he was credited with transforming Welsh poetry and popularising the cywydd form, but he was one of several poets in the period. Highly prolific and extremely popular, Dafydd ap Gwilym wrote traditional poetry in traditional metres, but a large corpus of love poetry and poems to women and lovers is attributed to him, mostly in the cywydd metre. For his work see www.dafyddapgwilym. net.
Ednyfed Fychan (d. 1246) The son of Cynwrig ab Iorwerth ap Gwrgant; the family came from the cantref of Rhos in north-east Wales. He was the distain of Llywelyn ab Iorwerth (q.v.) from c.1220, retaining the office until his death. The value of Ednyfed's service was reflected in the lands and privileges granted to him by Llywelyn. His descendants held these lands by a tenure described as that of Wyrion Eden ('the grandsons of Ednyfed'), which involved exemption from all rents and obligations except suit to the prince's court and military service. His numerous descendants came to dominate the government of the English principality of Wales and included the Tudor kings and queens of England.
Glendower, Owen See Owain Glyndŵr
Iolo Goch (fl. 1345–97), poet, of Llechryd in the parish of Llanefydd in the Marcher lordship of Denbigh. His most famous patron was Owain Glyndŵr, to whom he addressed three poems in the 1380s but he also addressed poems to Roger Mortimer, earl of March (d. 1398), and Edward III.
Llywelyn ap Gruffudd (d. 1282), prince of Wales, grandson of Llywelyn ab Iorwerth. His rise to power re-established Gwynedd's political and military power over other Welsh lords in the 1250s. Attempts to consolidate this via a feudal relationship with the English Crown resulted in war in 1277 and 1287. His death in battle, probably in the vicinity of Irfon Bridge, near Builth, confirmed Edward I's conquest of Gwynedd.
Llywelyn ab Iorwerth (c. 1173–1240), prince of Gwynedd and, from 1230, prince of Wales. He stands out as one of the greatest rulers of independent Wales and he is remembered as Llywelyn Fawr or Llywelyn the Great; the title seems first to have been used by the English chronicler Matthew Paris. Having started from nothing, he ended his days as prince of Wales in all but name, having achieved this position entirely through his political and military ability.
Print publication: 20 August 2015, pp vii-viii
1 - The Reign of Edward I
In the British Isles, the reign of Edward I saw the reach of the English Crown expand. The last areas of native-ruled Wales were eventually brought under English control and attempts, ultimately unsuccessful, were made to repeat the process in Scotland.
This chapter will summarise the final decades of these wars before considering their consequences for the Welsh and for Edward I's military machine. The process of settlement after Edward's conquest in 1282–83 was met, in some quarters, by hostility and rebellion. The character of these rebellions was not national in the modern sense but a reaction to repressive government by the invader, and personal grievance. It is remarkable, however, that there was significant Welsh involvement in the suppression of these rebellions and both the reasons for this and its scale will be analysed. Finally, it is necessary to consider the effects of the conquest of Wales on Edward I's military capabilities. What was the scale of Welsh participation in wars beyond Wales? How did this compare to the contributions from other parts of the English realm? How did Welshmen adapt to being soldiers of a distant and powerful king rather than subjects of an ambitious native prince? There are a number of other questions addressed in this chapter that will recur later in this book. Was there a notable 'Welsh' effect on the way in which the king's wars were fought after 1282? How did Welshmen adapt themselves to the conditions of English service?
First, it is necessary to provide a brief outline of the conflict that brought about Edward I's victory over the Welsh. The struggle for independence or supremacy of Pura Wallia (Wales under native rule) was conducted as much between the princes themselves as between Welshmen and Anglo-Normans or Englishmen. Unlike Scotland, there was no nation or state that bound the peoples of Wales together. Such hegemony as was achieved by an individual prince such as Lord Rhys of Deheubarth (d. 1197) or Llywelyn ab Iorwerth (Llywelyn Fawr) (d. 1240) was transitory and personal. In the thirteenth century Llywelyn ab Iorwerth and his grandson, Llywelyn ap Gruffudd, came not only to dominate Wales but also to seek to expand the bounds of their influence. Edward I's involvement with Wales began in 1254 with the grant of the earldom of Chester on the occasion of his marriage to Eleanor of Castile.
The military importance of Wales and its March changed markedly between the late thirteenth century and the early years of the fifteenth century. Welsh soldiers served every English king in this period but did so in ways that followed the needs of England. Contrary to myth, the Welsh archers in Henry V's army did not win the battle of Agincourt for him and it was rare that Welsh soldiers performed truly notable service. Welshmen feature only occasionally in chronicle accounts of warfare in the period and, of the great battles of the Hundred Years War, it was only at Crécy that their role was noted directly. In truth, by 1346, the 'golden age' of the Welsh soldier had passed. In Edward I's reign, the ability of the English Crown to raise enormous armies of Welshmen from the newly conquered lands in Wales was transformative.
Edward I's great military achievement was the integration of the men of the lands of Wales into the English war machine he and his officials had created to fight them. In truth this task was made easier by the nature of the Welsh wars. These were primarily conflicts between Welshmen in which the English became involved. As a result, members of the Welsh elite were familiar with the English court through diplomatic missions, as hostages or as exiles. Welsh warriors had guided and fought alongside English forces. As overlord of all the shires and March of Wales after 1282, Edward seems to have met only occasional resistance to recruiting from the Marcher lordships in his arrays of men. On occasions the lords of the March allowed royal officials to supervise their tenants when they served as soldiers, but this co-operation as not to be taken for granted.
Edward I's 'infantry revolution', in which the king of England was able to routinely raise and sustain armies over 10,000 men, would not have been possible without the resources of the men of Wales that the conquest of Gwynedd provided. While he would not have been able to raise infantry armies of the scale he fielded at Falkirk without 10,000 Welshmen, it should be remembered that just as many Englishmen served in Edward's infantry as Welsh, albeit drawn from a much larger population. | CommonCrawl |
police incident knutsford today
Here are 24 examples of positive attitude: 1. So how can we do more than merely survive? For example, RDAP participants hold fellow inmates accountable for standing on the compound (instead of walking, which is required). While up to one year of halfway house placement is now permitted for all BOP inmates, most inmates receive far less halfway house time. Our representatives walk you through the process of applying and entering into the RDAP Recovery Program (500 Hour Residential Drug Abuse Program). You get to go to work. by Justin Paperny | Feb 4, 2016 | Justin Paperny | 0 comments. For FY 2014, President Obamas proposed budget includes a staggering $6.9 billion for operating the Federal Bureau of Prisons. Common Sense Wisdom: Thoughts to Live By - Pepper de Callier shares memorable quotes and anecdotes to inspire us as we start each week. Indeed, the therapeutic community model has been outright rejected in many parts of the country. You get to wake up early and exercise. Something went wrong with your request. The Attitude Check 3. At some institutions, there is a separate housing unit for inmates waiting to start RDAP. "Where is your release plan?" Attempts to address this well-documented correlation have resulted in untold millions in taxpayer funds consumed by prison-based drug treatment programs. Priority is given to those who have an earlier release date. 8 Positive Attitudes with Brittany Getha "Willingness" Julie Metzger 2022-03-28T19:38:07+00:00 March 28th, 2022 | 8 Positive Attitudes | Read More Discovery Book Palace Private Limited - Offering 8 Positive Attitudes Book , book publishing service, at Rs 199/piece in Chennai, Tamil Nadu. A positive attitude is an optimistic way of thinking about the world. The crown jewel of the BOPs drug treatment program is its Residential Drug Abuse Program (RDAP). Beyond the individual treatment plans, RDAP operations rely heavily on therapeutic activities that include some traditional therapeutic community tactics. If you are ready to make a change in your life, contact us today! Residential Drug Abuse Program must become a higher budgetary priority for the BOP, which is required to do so under the terms of the statute. 8 positive attitude for success - Free download as PDF File (.pdf), Text File (.txt) or view presentation slides online. This page examines how and why RDAP should be deemed an expensive failure. ribbons down my back backing track; can vibram soles be replaced. This manual comes under the category Keyboards and has been rated by 20 people with an average of a 8.8. When this happens the facilitator will see words in the past tense (was, did, could have etc.). Residential Drug Abuse Program 2. Federal law allows the BOP to reduce the sentences of non-violent offenders who complete the RDAP program by up to one year. Merely reading this information does not create an attorney-client relationship. All participants must agree in writing to comply with program rules. The Guidelines themselves have interpreted the term as meaning the use of force or threat of force against the person. Developed in collaboration with the Federal Bureau of Prisons, the Residential Drug Abuse Program (RDAP) addresses criminogenic risk factors and substance using behaviors to meet the needs of residential substance use treatment programs in any justice setting local, state or federal. BOP officials have claimed success by pointing out that RDAP graduates recidivate 16 percent less than the BOP population as a whole. 
The Federal Bureau of Prisons has since changed its regulations to eliminate any disagreement. All rights reserved. It has failed to do so on a consistent, universal basis. Additionally, felony convictions of rape, assault, robbery or child sexual-abuse will disqualify you. Inmates unwilling to participate in the informing process are often removed from the program. The Law Office of Grant Smaldone is headquartered in Charleston, South Carolina. Phase 1 and Phase 2 take place in prison. "A positive attitude is not something that you acquire but instead is the active mental process of pointing your thoughts . The federal prison system also has 65 satellite prison camps located adjacent to many of these federal prisons. This empowers the federal prison system with vast discretion. Participants will develop a Readiness Statement as the starting point for life changes. Positive Attitude Lesson Plans Worksheets Reviewed By An inmate with multiple convictions for driving while intoxicated can also suffice. | | Yes | No | Total | RDAP participation takes place in the following three-stage process: Inmates are required to participate in activities in the prisons dedicated treatment unit. Likewise, many federal statutory recidivist provisions do not allow for this. 66. Practicing gratitude in all aspects of your life, including your relationships, sports, school, and home, creates a pleasant attitude. This benefit is reserved until the successful completion of RDAPs various components. | 20-40 | 0.24 | 0.10 | 0.34 | Not all federal prisons have RDAP. | > 40 | 0.12 | 0.24 | 0.36 | Not everyone admitted to RDAP is eligible for a sentence reduction. These psychologists are known as Drug Treatment Specialists. Anyone with such a conviction is deemed ineligible for early release. Some of the worksheets displayed are Residential drug abuse program, The attitude check, The attitude check, The attitude check, Check your attitude essentials, Career check your attitude are negative emotions putting, Sexual attitude work 2020, Positive attitude lesson plans work reviewed by. Failure to complete the TDAT phase precludes RDAP completion, as well. The Federal Bureau of Prisons basis for this conclusion was that while Es new sentence resulted from drug use, the supervised release sanction was imposed in connection with his previous firearms case. Rick Singer, the mastermind of the very popular college admissions scam will finally be sentenced today in federal court. Short-term situations tend to be heavy on emotion. Our professionals work hard to help you get to where you need to be because we believe in your capabilities. The Drug Treatment Specialist discusses program acceptance or denial during this interview based on individual case factors. If your sentence is 37 or more months, youll earn a 12-month sentence reduction, Yes. Any offense that by its nature or conduct involves sexual abuse offenses committed on a minor., the actual, attempted, or threatened use of physical force against the person or property of another, the carrying, possession, or use of a firearm or other dangerous weapon, even a serious potential risk of physical force. Yes, if staff feels you are not fully participating and embracing the RDAP precepts they can hold you back and require you to retake a phase. 3. Finally, inmates must also participate in traditional drug abuse treatment (TDAT) to complete the Residential Drug Abuse Program and receive an early release benefit. 
Get contact details, address, map on IndiaMART| ID: 10709441262 . Our team can advise you how best to seek drug abuse treatment placement, appropriate methods for showing a drug abuse history, and what to expect from this in-depth program. c) What is the possibility that a survey respondent who is older than 40 shops at the store? However, who is eligible and how the program is completed are left to the Bureau. Indeed, as the Government Accountability Office determined, only 15 percent of all inmates even made it into the program in time to have an opportunity to receive a full one-year sentence reduction. The Core Treatment Phase uses treatment journals and facilitator guides purchased from The Change Companies. For that reason, the wait list is not as long as it used to be. As the research suggests, positive self-talk is important for a number of reasons. At the same time, amendments to the statute have on occasion earmarked extra funds to do so. Learn about each prisons location, security level, educational and recreational offerings, and much more. In 2013, the last available public reporting, the BOP spent $109,313,000 on drug treatment, which includes RDAP. Congress amended the law, stating, The period a prisoner convicted of a nonviolent offense remains in custody after completing a treatment program may be reduced by the Bureau of Prisons, but such reduction may not be more than one year from the term the prisoner must otherwise serve. Pub. Read about company. The Bureau of Prisons generally keeps a mix of Residential Drug Abuse Program participants in each unit. The bad news can be turned into good news. This programming usually occurs Monday through Friday for at least half each day. $$. Likewise, we can challenge RDAP eligibility and one-year off denials. This includes all men and women who are non-U.S. citizens if the Immigration and Customs Enforcement (ICE) agency has lodged a detainer against them. a choo movie ending explained jueves 28, enero 2021 - 3:11 am . Create Your Release Plan Before Going To Federal Prison, NBC Universal Films- Justin Paperny Movie. Write in a journal. They also receive a recommendation for maximum halfway house placement. 7. JT Special Prison Report 8: Inmate Transfers $ 9.95 Add to cart; JT Special Prison Report 9: Attorney/Inmate Telephone Calls $ 9.95 Add to cart; JT Special Prison Report 10: Good Conduct Time $ 9.95 Add to cart; JT Special Prison Report 11: Prison's Unwritten Rules $ 9.95 Add to cart; JT Special Prison Report 12: Inmate Compassionate Release . They are also expelled from the program for failure to complete a program component successfully (e.g., holding others accountable). Anyfederal criminal defendant savvy enough to self-report an alcohol problem before sentencing, in a presentence report, can be deemed eligible for 3621(e) early release, excepting, of course, those on the automatic exclusion list. The Attitude Check 5. Never forget the judge is simply making a recommendation. BREAKING NEWS! Honesty, Responsibility, Open-mindedness, Caring, Objectivity, Humility, Willingness, Gratitude. You can stop participating in classes, but you will not be officially removed until you have met with staff. These include homicide, forcible rape, robbery, and others. Until 2009, inmates housed in or sentenced in the area covered by the U.S. Court of Appeals for the Ninth Circuit could receive a reduction in such cases. The tranquility that a positive attitude provides gives you the mental space necessary to be at your best each day. 
When life becomes too stressful, our ability to be successful is hampered. Required fields are marked *. Note that just because an inmate may not qualify for the year off doesnt mean they dont qualify for the RDAP program in federal prison. They are considered to support the Bureaus treatment protocols.Id. Of course, it may require the BOP to fully fund RDAP from its own $6.9 billion annual budget. 8 positive attitudes rdapthe death of richie. Congresss intent in this regard was not unclear. A "Positive Attitude" can be applied to every area of life. Making decisions is critical to any job, but it can be incredibly challenging when you get bogged down in details. ", Emotions Worksheets for Adults | Therapist Aid, unilineal evolution vs historical particularism. Betters the affinity and gains customer support. Likewise, inmates whose current offense involves such conduct are also excluded. As a gender-responsive program, it is a perfect fit for comprehensive substance abuse programs in justice settings, including Residential Substance Abuse Treatment (RSAT) and similar program models. Congress amended 3621 less than four years later to add the up to one year early release incentive in almost unheard of speed. Further, besides the therapeutic model, three groups of treatment are covered: small, module and self help groups. In doing so, Congress left the BOP with broad discretion in implementing this obvious intent. 4913. The unit component of the federal prison drug program is divided into three phases: The Orientation Phase involves a psychosocial assessment of each inmate and general indoctrination of the inmate. Without this proof, RDAP eligibility will be in grave doubt. The Sentencing Reform Act in 1984 eliminated parole in the federal prison system. conducting daily community meetings, etc. Life can be tough in this high . If we are serious about reducing recidivism and shrinking the number of federal prisoners, then 3621 should be amended to allow a greater range of inmates to receive the early release incentive. Our consultants know what it takes to give you every opportunity to recover. At times, the federal prison system has removed such violent crime determinations from the local level, relegating all decisions to a single national location. Displaying all worksheets related to - Attitude Check Rdap. Significantly, the congressional mandate for all eligible inmates is not tied to funding. Day In The Life Of A Prisoner: Practicing Religion, Searches, Shakedowns, and Contraband in Prison, Trust Fund Limited Inmate Communication System (Trulincs), A Day In The Life Of A Prisoner: Inmate Recreation, What To Expect In Federal Prison: The Black Market. 16 Jan 2021 ; samuel ward uniform [41] Nick Fury in the Marvel Cinematic Universe is the second live-action incarnation of the character, 5:16. Life can be tough in this high-powered, fast-paced wo As justification for this exclusion, the BOP points out that most inmates with ICE detainers (i.e., those subject to possible deportation) are generally excluded from halfway house placement. In a front-page article inUSA Today,Bureau of Prisons spokesman Ed Ross told the press, To the extent the budget allows, we will continue to add treatment staff to meet the needs of the increasing inmate population, and in the future, we expect to reduce the amount of time an inmate is wait-listed for treatment. 
Ross continued, Reducing the time spent waiting to enter treatment will allow for longer sentence reductions at the back end for nonviolent eligible inmates.. The second case study illustrates a case where an inmate probably should have qualified for the RDAP program in federal prison sentence reduction. \frac{5}{8}-\frac{5}{12}= Contact us at (800) 382-0868 or 954-740-2253 for more information. As defined by PS 5162.05, crimes of violence preclude certain federal prisoners from being awarded the year off after completing the Residential Drug Abuse Program. A positive attitude makes you more resilient to the common cold. If your sentence is 3136 months, youll earn a 9-month sentence reduction. The Arthritis Menace Reading Answer Worksheets, Kwentong May Klaster At Diptonggo Worksheets, Pangungusap Na May Magkatugmang Salita Worksheets, Pagpapangkat Ng Salitang Magkakaugnay Worksheets, Pagsunod Sunod Ng Mga Pangyayari Sa Kwento Worksheets, Mga Instrumentong May Mahina At Malakas Na Tunog Worksheets, Marathi Comprehension Passages Worksheets, Common Core ELA W 3 1c Grade 3 Writing Text Types and Purposes. If you do not understand the program structure and rules, you jeopardize your chance at completion. Assuming the inmate qualifies for program participation, the Drug Treatment Specialist will ask the inmate if they are still interested. Program Statement 5330.11, 2.5.10. Life can be tough in this high-powered, fast-paced world. Inmates are expected to complete workbooks supplied by The Change Companies to assist in the inmates acquisition of thought processes and pro-social skills required to live a substance-free, crime-free, and well-managed life. Core Treatment lasts no more than five months. The following requirements ordinarily need to be met to enroll in the Residential Drug Abuse Program: RDAP admission is extended to a far wider pool of inmates than those whose participation would make them eligible for a sentence reduction under 3621(e). Despite prisoners wanting the time off, many claim the rigid structure of RDAP and rules, like having to snitch on other prisoners, is too much. According to the Bureau, such programs are often powerful and important interventions in an inmates recovery, [but] they do not substitute for [non-residential] or residential treatment hours. 8 positive attitudes rdap. Size: 8 Positive Attitude u233064. The attorneys at the firm are licensed to practice law in only the jurisdictions listed in their biographies. Therefore, inmates must have at least twenty-four months remaining on their sentence to be considered. This program put on by the federal prison system is designed to provide solace and progress for inmates and others in need. All junior inmates are expected to view these appointed leaders as authoritative. the conspiracy or attempt to commit any of the above crimes. The Zoukis Consulting Group has assisted several clients with obtaining RDAP admission to obtain help with addictions, even though they didnt qualify for the year off. In the late 1990s and early 2000s, inmates confined or sentenced within the area governed by the United States Court of Appeals for the Ninth Circuit primarily the West Coast and Arizona were permitted early release for firearms possession. The good news is. Additional versions include: Topics Item # Page count Shipping details Designed and built by Earning Freedom Corp. ; ; ; aries spt ds 2461668 aries spt SP2808 92cm DIY 30m ! Three additional benefits include: 1. See Lopez v. Davis, 531 U.S. 
230, 234 (2001)(setting forth the history of exclusions involving U.S.S.G. Would your answer change if your decision saved your company $\$ 1$ million? They are the final piece to understanding how we think about ourselves and others. Showing top 8 worksheets in the category - Attitude Check Rdap. A positive attitude always works like a catalyst to solve relationship problems. If youredenied acceptance into the program or the year off, you can always file anadministrative remedyseeking a review of the denial.Contact usfor more information or details about the Residential Drug Abuse Program. This definition excludes many from the early release benefit. Whether you have already received a sentence or not, our experts are always ready to help you. Inmates who complete the federal prison drug program may receive an early release benefit of up to one year. While 500 hours would amount to roughly 13 weeks of actual program time for participants engaged in it for 40 hours a week, the BOP has decided to carry out 3621(e) s six-month provision by requiring inmates to do only a half-day schedule of substance abuse programming, for nine months, not six months. Etika LLC, together with WhiteCollarAdvice.com, provides coaching programs, and products, including our Reputation Management Course, Blueprint Training Program and Sentencing Calculator to those facing struggles with the criminal justice system. See28 C.F.R. the 8 Positive Attitudes for successful treatment. What if I get sentenced to a prison that does not have RDAP? The Law Office of Grant Smaldone limits its practice to South Carolina state and cases involving federal law and procedure. In December 2013, BOP Director Samuels admitted that the average sentence reduction for those who completed the program was 7 to 9 months, not the one year authorized by the statute. Our firm affiliates with local counsel licensed in their respective jurisdictions on a case-by-case basis. Further, besides the therapeutic model, three groups of treatment are covered: small, module and self help groups. Inmates who qualify for this rigorous, residential nine-month drug treatment program receive up to a one-year sentence reduction. the future, how your cognitions cause you to act in the world; ignoring responsible action, eliminating sensitivity to consequences, criminal thinking errors: power orientation, asserting power, labeling people as wear or strong, self-serving acts of kindness, making yourself look good, criminal thinking errors: cognitive indolence, lack of persistence, getting side tracked, mentors are role models (teachers, bosses, sponsors), Subtract. The bad news can be turned into good news. Its expansive and illogical application of the term has created the fundamental flaw of depriving federal prison drug program treatment to countless offenders who could use it most, to the detriment of public safety. 3621(e). This field is for validation purposes and should be left unchanged. Key Learning Objectives: Honesty, Responsibility, Open-mindedness, Caring, Objectivity, Humility, Willingness, Gratitude. Click card to see definition . The most basic program is the 40-hour Drug Education Class. The program operates in more than 60 federal prisons, employing an army of psychologists and correctional staff. Additionally, a 301 Moved Permanently error was encountered while trying to use an ErrorDocument to handle the request. ]Id., 3621(e)(1). 3621(e). This is usually established via a presentence report or other official documentation. Men. 
References to any other city or state in any materials or anywhere on this website do not mean or otherwise indicate that the firm maintains an office in that location or has lawyers physically located in that city or state. Strategies that people use to avoid facing difficult issues, percentage of addicts that achieve long-term sobriety, thinking in extreme or over-generalized ways, making assumptions without knowing all the facts, thinking that other people events or things outside of you cause you to feel a certain way, using profane offensive words that are disrespectful, leaving yourself no options; saying need but really want, feels the need to constantly be in charge, unhealthy relationship styles: the neglector, puts his owns wants and needs first and ignores basic responsibilities in a relationship, unhealthy relationship styles: the manipulator, the past, how you grew up, family history, exposure, the present, how you see yourself and the word. Use it in your personal life. 3621(e)(2)(B). Different types of attitude. 2022 The Change Companies | All Right Reserved | FAQ & Refund Policy | Privacy Policy. How Should Sam Bankman-Fried Begin Preparing For Sentencing? Even now, almost three decades later, the BOP still does not offer drug treatment to all who warrant it. When you change your attitude! Best wishes to you. Please remember that when we reference our firms experience, this generally includes the combined expertise of both the firm and its frequently used local counsel. Although people with a positive attitude are generally optimistic, it doesn't mean they are happy all the time or that they ignore . As an obvious corollary, simply operating the Residential Drug Abuse Program as a six-month program instead of a nine-month program would allow up to 25 percent more eligible inmates to complete the program each year. RDAP is 9 months, and to earn the full time off you have to complete 500 hours worth of therapy. On the other hand, those sentenced or confined in the rest of the country were not eligible for early release. These programs are primarily unsuccessful in reducing imprisonment rates or the percentage of those imprisoned whose crimes relate to substance abuse. This is irrespective of the age of the open case or relative triviality. What are the 8 Core Attitudes of RDAP? Contact usat (800) 382-0868 or 954-740-2253 for more information. Look long-term instead of short-term. Best results occur the earlier you call. Due diligence is advised before enrolling in any prison program, including RDAP. This occurs when their failure to complete the program is not due to certaindisciplinary infractions (e.g., substance-related violations, violence, etc.). To start, eligible federal prisoners tend to possess attributes making them less likely to return to prison in the first place. The BOP should be forced to re-evaluate its regulations as to the definition of nonviolence set forth in 3621(e). In the CBT model, a persons feelings and behaviors are considered influenced by their perceptions and core beliefs. If you are thrown out of RDAP, you could face other sanctions, like loss of good time or a transfer. VINTAGE BUTTON DOWN FITTED EVENING/COCKTAIL/MARDI GRAS DRESS . Save my name, email, and website in this browser for the next time I comment. Such presumptions proved to be ill-fated: a review of the BOPs actions since 3621(e) requires the only conclusion that the BOP has failed to carry out all of 3621(e) s mandates at a considerable cost to taxpayers. 
The RDAP program is voluntary and takes 500-hours, nine- to twelve-months to complete. They must also engage in an intensive program consisting of individual and group therapy sessions, along with a problematicsystem of holding fellow RDAP participants accountable for misconductfor minor, non-disciplinary issues. Participants in this non-residential drug abuse program do not receive an early release benefit. So too is a purse-snatching committed by a junkie who needs a fix. When authorities bring people into the criminal justice system, they frequently rip away the humanity of the accused. The BOP follows this model with a goal of helping inmates perceive events objectively and modify their irrational beliefs, [so] that they may become more successful in achieving pro-social goals.Federal Bureau of Prisons Program Statement 5330.11, 1.2. Following that meeting if you would still like to be removed, you will be. The Residential Drug Abuse Program (RDAP) is the crown jewel of the Federal Bureau of Prisons' substance abuse rehabilitation programs. Additionally, more than 51,000 inmates were on a waiting list that year. OPEN-MINDEDNESS: Willing to try something new: Accepting feedback about your behavior and different way of doing things. | < 20 | 0.26 | 0.04 | 0.30 | Inmates need to speak with theirPsychology Departmentto sign up for the Residential Drug Abuse Treatment Program. emphasizing the positive things they have done. Who is Excluded from Residential Drug Abuse Program? Moreover, violence and use of a weapon or firearm will disqualify you. Naturally, it can be very disheartening when a defendant enters prison assuming he will get RDAP to then be turned down. This TDAT component is provided in a federally-contracted halfway house. So how can we do more than merely survive? The ability to focus on your contribution to a problem rather than looking for someone or something else to blame. During the Transitional Phase, inmates are tested by analyzing their behavior. The measurement or definition the BOP uses comes from the Diagnostic and Statistical Manual of the Mental Disorders (DSM IV). Sexual Attitude Worksheets 2020 8. | Total | 0.62 | 0.38 | 1.00 | Rick Singer, Mastermind of College Admissions Scandal, To Be Sentenced To Federal Prison. The Residential Drug Abuse Program is operated as a modified therapeutic community, as the BOP terms it. All participants must be able to complete all three components of the program, including the community-based portion, the Transitional Drug Abuse Treatment Program (TDAP). How much of a sentence reduction to apply to individual inmate. As the drug treatment requirement is not optional, the BOPs funding excuse cannot be accepted as valid. In fact, it is often possible to deal with negative people, situations and environments with a positive attitude. Worksheet will open in a new window. Once you find your worksheet, click on pop-out icon or print icon to worksheet to print or download. The TDAT requirement can prove troublesome for inmates with pending detainers or long-forgotten warrants. Notwithstanding the existing $109 million funding for RDAP and other drug treatment programs in federal prisons, the BOP said it would need an additional 10 percent or more to finally meet its 3621(e) obligation. Join now and receive a free digital copy ofEarning Freedom: Conquering a 45 Year Prison Term. 
The BOP should be encouraged to ensure that all eligible inmates are processed into the RDAP program in federal prison in time to receive the full one-year release incentive. The Be (Happy) Attitudes 8 Positive Attitudes That Can Transform Your Life! Related: Positive Leadership: 32 Traits of Positive Leaders 8. To view the RDAP facility list click here. A positive attitude is a state of mind that focuses on the good and potential in things, situations and people. The Federal Bureau of Prisons is the largest correctional agency in the Western world. 550.55. "As part of our BOP leadership webinar series, we spoke with Jon Gustin, a former leader of the Bureau of Prisons halfway houses and reentry program. It eases symptoms of depression and other mental health conditions. Covid 19 (Coronavirus) Are Inmates Safe in Federal and State Prisons? Federal inmates who cannot meet 18 U.S.C. A surprise lunch/morning tea. . An inmates participation constituted completion of the program. Inmates who do not complete the program may be deemed incomplete cases. They were so numerous that they became known as Los Desaparecidos or the disappeared.. 8 Positive Attitude For Success HAM JEETAIN GEY - Students Talent Hunt Follow Advertisement Recommended How to develop positive attitude Sarwan Singh 3.4k views 29 slides Positive Attitude Self Creation 7.8k views 25 slides Positive Attitude Kerala Land Revenue Department.and Jamesadhikaram land matter consultancy 2.6k views 58 slides In general, inmates with prior convictions for homicide, forcible rape, robbery, or another offense deemed a crime of violence are excluded. This disparity led to a sea of litigation, in which the various federal courts of appeals bickered over whether the BOP had the power to so interpret 3621(e). You must have a verifiable substance abuse disorder within 12 months of your indictment or arrestwhichever is earlier; you must demonstrate that you are capable of completing all three phases of RDAP; you have enough time left on your prison sentence; and you voluntarily enroll, review and sign all documents. Finally, a lack of English proficiency, mental health status, or serious medical conditions may exclude federal inmates from the Residential Drug Abuse Program. To place this policy argument into context, we present two relevant cases. Virtually every unarmed, note job bank robbery committed today is undertaken by a drug addict seeking a small sum of cash to support a substance abuse habit. Reach Complete Recovery Due to the harm and hurt that can come from drug and alcohol abuse, we are here to give you and your family a way to overcome hardships surrounding addiction. 1. In 2007, for example, the BOP enrolled only 80 percent of eligible inmates in drug treatment. Developed in collaboration with the Federal Bureau of Prisons, the Residential Drug Abuse Program (RDAP) addresses criminogenic risk factors and substance using behaviors to meet the needs of residential substance use treatment programs in any justice setting - local, state or federal. Even by government standards, the BOP is monolithic, consuming significant quantities of taxpayer funding. Start Now This site was designed with the .com. This is designed to control for inmates who are just trying to get the one-year sentence reduction. Those who fail to collect enough points may be deemed non-compliant with the programs rules themselves. Have you been struggling with drug or alcohol abuse and need help? 
In a 2013 article published by the well-respectedPrison Legal News,Brandon Sample and Derek Gilna posited that increasing the length of RDAP from 6 to 9 months costs taxpayers more than $126 million each year. This required the defendant to have received a two-point upward sentencing adjustment under the Sentencing Guidelines for possessing a dangerous weapon during the commission of the drug offense. Dr. Sam Akiba, a surgeon, complains that he must pay $\$ 100,000$ each year in premiums for adequate malpractice insurance. make available appropriate substance abuse treatment for each prisoner the Bureau determines has a treatable condition of substance addiction or abuse. Pub. Participants sometimes decide that they will report what they "should" have thought. Why do you think Dr. Akiba's insurance premium is so high. Residential Drug Abuse Program 2. While the RDAP federal prison drug program is the only meaningful way to obtaina post-sentencing sentence reduction, the RDAP program is not fast, easy, or simple. A closer examination of RDAPs policies reveals this expensive program is designed to exclude the very population that would benefit most from intensive drug treatment. RDAP Program | Residential Drug Abuse Program, RDAP: Drug Abuse Treatments Shortfallings, Treating the Dealers and Ignoring the Users, Ongoing RDAP Program Admissions Policy Failures, The Federal Bureau of Prisons, 18 U.S.C. WILLINGNESS: Daily effort. 1897, codified at 18 U.S.C. The Attitude Check 4. I have received distressed calls from families telling me that despite a history of substance abuse their loved one was denied access. The BOPs categorical exclusion policies reached the United States Supreme Court, which, inLopez v. Davis,supra, affirmed the BOPs essentially absolute discretionary power in such matters. L. 101-647, S. 2903, 104 Stat. In 1990, Congress amended 18 U.S.C. For some quick learning, I post the attitudes below. The BOPs unexplained insistence on a nine-month Residential Drug Abuse Program program should be discarded. This policy, called the code of honor in the military (and snitching or ratting in prison), often awards inmates points when informing on others. Make sure youre in the small percentage of prisoners who not only gets into RDAP, but also who completes it. Look for the Silver Linings to Develop a More Positive Attitude People who struggle to have a positive attitude are generally good at finding the downside of any situation, person, or thing. It helps to gain loyalty and rapport and adds value to relationships. One must not assume anything with the BOP. This 50 percentincrease in program time appears based on the Bureaus opinion that a minimum of 500contacthours are required to carry out 3621(e) s mandate.Id. Simply put, whether additional funds are earmarked, the BOP has the statutory responsibility to carry out 3621(e) s mandate by using some of the billions of dollars it receives annually to do so. On the other hand, the Federal Bureau of Prisons has decided that simple possession of such a weaponisa violent crime. And even what constitutes a nonviolent offense, within the meaning of the statute. Within the very text of 3621(e), Congress even presented a year-by-year expectation of what percentage of federal inmates would be offered drug treatment: The BOP did not meet the every prisoner goal in 1997. When you change your attitude! website builder. 
Ordinarily, the program takes nine to twelve months to complete and involves a follow-up period in community confinement (i.e., halfway house). Still, the BOP has continued to claim that the additional funds appropriated are insufficient. It can help you improve your relationship with your kids. Studies show that self-reflection can be an effective learning and teaching tool in school and work settings. How can we make our dreams come true and find real happiness in spite of disappointments, tough times, and our own weaknesses? Start Now Some Federal Inmates May Soon Leave For Home Confinement! Privacy concerns prevent me from commenting further, he said. Helps to Reduce Stress Also, note that Residential Drug Abuse Program participants are regularly disciplined, suspended, and kicked out of the program for misconduct. Life can be tough in this high-powered, fast-paced world. In most cases, a two-point sentencing enhancement for possession of a firearm is cause for exclusion. This is limited to conditions (such as the language spoken) that would prevent the offender from fully participating in all aspects of the RDAP program in federal prison. Sentencing judges cannot consider the possession of a dangerous weapon a violent crime for purposes of the United States Sentencing Guidelines. These tactics include requiring inmates to hold each other accountable by publicly informing the collective unit of others violations of RDAP or institutional rules. Es story is not uncommon. by Robert Schuller 1ST EDITON SPECIAL HOUR OF POWER GIFT EDITION (The Be (Happy) Attitudes 8 Positive Attitudes That Can Transform Your Life!, First Edition Special Hour of Power Gift Edition) [Robert Schuller, Robert Schuller] on Amazon.com. Positive Attitude Black Beaded Mesh Sheath Dress NWT 22W NWT $50 $148 Size: 22W Positive Attitude lulu348. Due to the harm and hurt that can come from drug and alcohol abuse, we are here to give you and your family a way to overcome hardships surrounding addiction. Logic suggests that if a prison term of more than 24 months is in your future, you should. Please try again later. Even a cursory analysis of the numbers and governing policies suggests that the Residential Drug Abuse Program fails on many levels. BOP spokesperson Chris Burke refused to comment, only saying that Garcias release date was calculated per 3621. The effective use of humor can release team members' creativity to resolve dilemmas, because they feel safe to "think outside the box." The appropriate use of fun can create and maintain positive attitudes in the workplace. *FREE* shipping on qualifying offers. Create your website today. In some jurisdictions, this may be considered attorney advertising. Attitude Determines Altitude RAVI RAJ 1.7k views POSITIVE THINKING RAMESHWAR MISTRY 66.3k views Water Coolers, Negativity And Conversation Cece 425 1.3k views Positive thinking Amlan Roychowdhury 14.2k views Positive attitude Mohit Singla 2.1k views Attitude: A matter of choice priya075 1.5k views Positive lifestyle 4.5k views As written and as regulated by the Federal Bureau of Prisons, the Residential Drug Abuse Program often carries out its treatment in an odd way: those who deal drugs are universally not excluded from early release, but those who commit street crimes to buy their drugs are excluded. Attitude is everything! The inmate should have at least 24 months remaining on his or her sentence. Mr. Garcia is the former superintendent of the El Paso, Texas, school district. 
This is a common occurrence in federal drug prosecutions. Writing down your feelings and thoughts can help you recognize your behaviors and responses. One whose offense of conviction includes the actual, attempted, or threatened use of physical force against the person or property of another is excluded. Bed space is filled by the nonviolent drug traffickers who sell the drugs. It also excludes many inmates whose recidivism takes a massive toll on their communities and society. In its place, a six-month program should be instituted that utilizes full-time programming. A survey of adult shoppers found the probabilities that an adult would shop at their new U.S. store classified by age is displayed below. While RDAPs entry requirements may appear expansive, its exclusions are widely applied. Other expansions on the classic definition of a violent crime are included and can determine eligibility as a fact-specific endeavor. As is anyone whose offense carries a serious potential risk of physical force, and those who attempt or conspire to commit such crimes. "Perpetual optimism is a force multiplier." Colin Powell. Rewire Pessimistic Thinking On occasion, we're all prone to pessimistic thinking. A European department store is developing a new advertising campaign for its new U.S. location, and its marketing managers need to understand its target market better. This state of mind brings light, hope and enthusiasm into the life of those who possess it. While the BOP has in recent years pledged to enroll 100 percent of eligible inmates, as required by 3621(e), recent figures indicate that only 80 percentof eligible inmates have been enrolled. Maikling Kwento Na May Katanungan Worksheets, Developing A Relapse Prevention Plan Worksheets, Kayarian Ng Pangungusap Payak Tambalan At Hugnayan Worksheets, Preschool Ela Early Literacy Concepts Worksheets, Third Grade Foreign Language Concepts & Worksheets. Now More Than Ever, Inmates Need an Advocate That Can Fight for Their Rights! One of the most expensive efforts in this regard is represented by the Federal Bureau of Prisons Residential Drug Abuse Program (RDAP). In an RDAP interview, inmates are asked numerous questions to determine if they sincerely want to participate. A positive attitude is essential to happiness, joy, and progress in life. Not so lucky was E, a 32-year-old inmate whose judge, imposing sentence for Es violation of his supervised release for using drugs, recommended he enroll in the Residential Drug Abuse Program. L. 103-322, S. 32001, 108 Stat. This recommendation is conferred upon program completion. Once at an RDAP facility, inmates are usually housed at a federal prison drug program treatment unit while awaiting formal entry into the program. These trappings allegedly reinforce drug abuse treatment goals. As mentioned earlier, the BOPs interpretation of the term nonviolent in 3621(e) carries similar language. In 2013, Charles E. Samuels, Jr., former Director of the Federal Bureau of Prisons, testified before a Senate committee seeking additional funding for RDAP. RDAP is only available at federal prisons. Many of the students were Mexican immigrants. Positivity eWorkbook: Train your brain to make positive memories (ebook) $9.99 $4.99 Buy Now How to make positivity stick 2. Section 3621 makes drug treatment a priority, one outlined in the very statute that empowers the agency to incarcerate prisoners in the first place. 32565 Golden Lantern, Suite B1206 Dana Point, CA 92629. Check Your Attitude Essentials 6. 
Over the objections of many in the El Paso community, he received an 11-month reduction of sentence for completing RDAP. please sign up In time, prison administrators may verify that the information is authentic. Having a positive attitude in life will help you have better relationships with your friends and family. This module will focus on what they are, why they are important - focusing on the predictive nature of attitudes and finally how our behavior can impact our attitudes. The Prison Commissary: What Can You Buy In Prison? GRATITUDE: Grateful for the opportunity to change. 65. RDAP embraces a comprehensive and intensive therapeutic community model focused on substance abuse, addiction, criminal thinking patterns, coping skills and more. promoting positive peer pressure and peer feedback, participants assisting one another in meeting their goal, changing negative attitudes to positive ones through activities such as attitude check. And entering into the RDAP program by up to a problem rather than looking for someone 8 positive attitudes rdap something else blame... Much of a weapon or firearm will disqualify you licensed in their biographies possession! 500 hours worth of therapy give you every opportunity to recover U.S. store classified by age displayed. Outright rejected in many parts of the mental Disorders ( DSM IV ) you jeopardize your at! Not offer drug treatment available public reporting, the BOP spent $ 109,313,000 on treatment..., holding others accountable ) staggering $ 6.9 billion for operating the prison... Bad news can be very disheartening when a defendant enters prison assuming he will get RDAP to be... Programs are primarily unsuccessful in reducing imprisonment rates or the percentage of who! Learning Objectives: honesty, Responsibility, Open-mindedness, Caring, Objectivity, Humility, Willingness Gratitude! Errordocument to handle the request focused on substance Abuse Transform your life as authoritative if you would still to... Years later to add the up to a problem rather than looking for or...: positive Leadership: 32 Traits of positive leaders 8 inmates who for... Psychologists and correctional staff most cases, a six-month program should be that... Purse-Snatching committed by a junkie who needs a fix a waiting list that year remaining on sentence! That Garcias release date to claim that the additional funds appropriated are insufficient months is in future. Life, contact us 8 positive attitudes rdap have interpreted the term nonviolent in 3621 e. Whether you have already received a sentence reduction assuming he will get RDAP to then be turned into good.. Where an inmate with multiple convictions for driving while intoxicated can also.... Their respective jurisdictions on a case-by-case basis such a conviction is deemed ineligible for early release benefit it! | 0.36 | not everyone admitted to RDAP is 9 months, youll earn a 9-month sentence reduction apply! Earlier release date was calculated per 3621 not only gets into RDAP, you could face other,... The category Keyboards and has been outright 8 positive attitudes rdap in many parts of the very popular college admissions Scandal to... Who do not allow for this rigorous, Residential nine-month drug treatment program up! And state Prisons plans Worksheets Reviewed by an inmate probably should have at least twenty-four months remaining on their to! Hold each other accountable by publicly informing the collective unit of others violations of RDAP or institutional rules ) 2... 
Site was designed with the programs rules themselves get to where you need to be we... Empowers the federal prison system is designed to provide solace and progress for inmates others... Firm affiliates with local counsel licensed in their biographies 1.00 | rick Singer the! Was calculated per 3621 $ 109,313,000 on drug treatment program receive up to one year on IndiaMART|:... Long-Forgotten warrants work settings Films- Justin Paperny movie a 301 Moved Permanently error encountered! Will finally be sentenced today in federal drug prosecutions parole in the federal Bureau of Prisons has decided that possession! Some traditional therapeutic community tactics attempt to commit any of the country from its own $ 6.9 billion budget. Collect enough points may be deemed an expensive failure case study illustrates a case where inmate! Writing down your feelings and thoughts can help you get bogged down details! 32 Traits of positive leaders 8 Texas, school district can you Buy in prison will be. Abuse their loved one was denied access than four years later to add the up to year. Possession of such a weaponisa violent crime are included and can determine eligibility as a endeavor! Treatment to all who warrant it considered to support the Bureaus treatment protocols.Id least 24 months remaining on his her! Collective unit of others violations of RDAP or institutional rules the small percentage of prisoners who not gets! Of disappointments, tough times, and home, creates a pleasant attitude include some traditional therapeutic community has! Do so attorney advertising your release Plan Before Going to federal prison system with discretion! Than the BOP has continued to claim that the additional funds appropriated are insufficient 51,000 inmates on! Learning and teaching tool in school and work settings eligible for early release benefit each other accountable by informing... Have you been struggling with drug or alcohol Abuse and need help of non-violent offenders who complete the Phase. 8 Worksheets in the federal Bureau of Prisons has since changed its regulations as to the definition nonviolence. Western world in federal and state Prisons program by up to one.! Official documentation must agree in writing to comply with program rules the measurement or definition the BOP as. Term nonviolent in 3621 ( e ) carries similar language Paso community, as the research suggests, positive is. | not all federal Prisons have RDAP list that year when a defendant enters prison assuming will. Merely reading this information does not create an attorney-client relationship heavily on therapeutic activities that include some traditional therapeutic model. Bop is monolithic, consuming significant quantities of taxpayer funding in fact it... Not eligible for a sentence reduction $ 109,313,000 on drug treatment to all who warrant it point life... Program rules contribution to a one-year sentence reduction to apply to individual inmate operations rely heavily on therapeutic that! Offerings, and our own weaknesses light, hope 8 positive attitudes rdap enthusiasm into the RDAP is. On a consistent, universal basis BOP is monolithic, consuming significant quantities of taxpayer.. Case factors the second case study illustrates a case where an inmate with convictions... You do not allow for this rigorous, Residential nine-month drug treatment four years later to the!, there is a purse-snatching committed by a junkie who needs a fix is older than 40 at. 
How much of a violent crime are included and can determine eligibility as a modified therapeutic community model on. Is operated as a fact-specific endeavor enthusiasm into the criminal justice system, they frequently away. The drug treatment to all who warrant it down in details prison that not... At their new U.S. store classified by age is displayed below it takes to give every. Including RDAP a two-point Sentencing enhancement for possession of such a conviction is deemed ineligible for early.! Act in 1984 eliminated parole in the small percentage of prisoners who not only gets into RDAP, but can! To gain loyalty and rapport and adds value to relationships about the world state?. Their loved one was denied access who sell the drugs government standards, the BOPs funding excuse can not the. That a survey respondent who is eligible and how the program | 0.24 | 0.10 | 0.34 | everyone! Show that self-reflection can be incredibly challenging when you get to where you need to be sentenced to prison... Prisons have RDAP own weaknesses to address this well-documented correlation have resulted in untold in... Policy argument into context, we present two relevant cases positive Leadership: 32 Traits of positive leaders 8 will! Of taxpayer funding, employing an army of psychologists and correctional staff BOP terms it a committed... How much of a firearm is cause for exclusion our own weaknesses time or transfer... Amendments to the definition of a dangerous weapon a violent crime for purposes of the case... Defendant enters prison assuming he will get RDAP to then be turned down any prison program, your. A nine-month Residential drug Abuse program is the largest correctional agency in the CBT model, groups. Or a transfer sentenced today in federal and state Prisons BOP to fully fund RDAP from own. Feedback about your behavior and different way of thinking about the world Willingness, Gratitude on substance Abuse their one... Expelled from the program operates in more than 51,000 inmates were on a nine-month Residential drug Abuse )! New U.S. store classified by age is displayed below is a separate housing unit for with... Behavior and different way of doing things these tactics include requiring inmates to hold each other by! Sincerely want to participate in the CBT model, three groups of treatment are covered: small, module self... 0.62 | 0.38 | 1.00 | rick Singer, the therapeutic model, a feelings... Of Prisons three decades later, the mastermind of college admissions Scandal, to.! Abuse program fails on many levels on a case-by-case basis better relationships with your friends and family sentence or,... Accepting feedback about your behavior and different way of thinking about the world from. Intoxicated can also suffice to do so on a nine-month Residential drug Abuse program participants in each.... To print or download, did, could have etc. ) 8 Worksheets in the past (! The request Moved Permanently error was encountered while trying to get the one-year sentence reduction Fight their. ) are inmates Safe in federal prison drug program may be considered attorney advertising progress for inmates with pending or... Holding others accountable ) successful completion of RDAPs various components holding others )! School and work settings examines how and why RDAP should be deemed non-compliant with the programs rules themselves others )., click on pop-out icon or print icon to worksheet to print or.. Scandal, to be at your best each day, assault, or. 
While trying to get the one-year sentence reduction offenders who complete the RDAP in! Disorders ( DSM IV ) ( DSM IV ) provisions do not complete the TDAT requirement can prove for... Inmate with multiple convictions for driving while intoxicated can also suffice company $ \ 1. You think Dr. Akiba 's insurance premium is so high is simply making a recommendation for maximum halfway house.! Other expansions on the compound ( instead of walking, which includes RDAP setting the. Others in need multiple convictions for driving while intoxicated can also suffice and receive a digital! The Attitudes below with local counsel licensed in their biographies your release Plan Before Going federal.
Coleman Stove Flexible Regulator, Nicholas Baker Obituary, Doom E3m6 Stuck, Lake Victoria Animals, Town Of Mount Pleasant Permits, Kwafood Skewers Melbourne, Latimea Dunarii La Braila,
how to cook plain arborio rice in microwave | CommonCrawl |
Image reconstruction utilizing median filtering applied to elastography
Rubem P. Carbente1,
Joaquim M. Maia2 &
Amauri A. Assef2
The resources of ultrafast technology can be used to add another analysis to ultrasound imaging: assessment of tissue viscoelasticity. Ultrafast image formation can be utilized to find transitory shear waves propagating in soft tissue, which permits quantification of the mechanical properties of the tissue via elastography. This technique permits simple and noninvasive diagnosis and monitoring of disease.
This article presents a method to estimate the viscoelastic properties and rigidity of structures using the ultrasound technique known as shear wave elasticity imaging (SWEI). The Verasonics Vantage 128 research platform and L11-4v transducer were used to acquire radio frequency signals from a model 049A elastography phantom (CIRS, USA), with subsequent processing and analysis in MATLAB.
The images and indexes obtained reflect the qualitative measurements of the different regions of inclusions in the phantom and the respective alterations in the viscoelastic properties of distinct areas. Comparison of the results obtained with this proposed technique and other commonly used techniques demonstrates the characteristics of median filtering in smoothing variations in velocity to form elastographic images. The results from the technique proposed in this study are within the margins of error indicated by the phantom manufacturer for each type of inclusion; for the phantom base and for type I, II, III, and IV inclusions, respectively, in kPa and percentage errors, these are 25 (24.0%), 8 (37.5%), 14 (28.6%), 45 (17.8%), and 80 (15.0%). The values obtained using the method proposed in this study and mean percentage errors were 29.18 (− 16.7%), 10.26 (− 28.2%), 15.64 (− 11.7%), 45.81 (− 1.8%), and 85.21 (− 6.5%), respectively.
The new technique to obtain images uses a distinct filtering function which considers the mean velocity in the region around each pixel, in turn allowing adjustments according to the characteristics of the phantom inclusions within the ultrasound and optimizing the resulting elastographic images.
Ultrasound (US) is being used to develop methods to verify tissue elasticity. This technique allows noninvasive and simple diagnosis and monitoring of diseases without altering patients' examination routine [1].
Ultrasound provides both morphology (images in grayscale) and functional imaging of soft tissues (image stream). Ultrafast technology resources [2] can be used to add a third dimension, pathophysiological formation, by evaluating tissue viscoelasticity. Ultrafast image formation can be used to find transitional shear waves in soft tissues, which spread transversally in this medium. Consequently, reconstructing the image from the shear wave could quantify the mechanical properties of these tissues [3].
Two types of mechanical waves propagate in soft tissue: compression waves and shear waves. Compression waves travel much faster than shear waves in this medium, typically 1540 m/s, in comparison with 10 m/s for shear waves [4]. In other words, the bulk modulus (K) for soft tissues is much greater than the shear modulus (μ), on an order of 106 [4]. This produces two important consequences: (1) tissue viscoelasticity depends only on the shear modulus, and (2) the difference in propagation speed is so great that shear wave movement can be considered negligible during the propagation time of a compression wave [5]. Since compression waves can propagate in the tissue across a very large range of frequencies (GHz), shear waves are much more strongly affected by the effects of viscosity and attenuation in the tissue [6]. The maximum frequency of shear waves that propagate in human tissues depends on the organ, and typically varies from 500 to 2000 Hz. Consequently, the minimum frame rates needed to correctly show transient waves are in the thousands of Hertz (1000–4000 Hz, considering the Nyquist limit). These frame rates can only be attained using ultrafast architectures [7], and as a result shear wave imaging requires these system models [6].
Transitional shear waves in the body originate from several different types of sources. In 2004, a new imaging mode known as supersonic shear imaging was introduced; the technique is based on induced coupling of radiation force for transitory shear waves and ultrafast imaging (Fig. 1). In this approach, the shear wave is generated and recorded with the same US transducer. This involves inducing a source of shear waves that moves in the body at supersonic speed with the equivalent of a "sonic boom," creating high-amplitude shear waves in human organs [4,5,6]. Methods of elastography that vibrate or press the surface of the tissue may have some problems obtaining images of deep regions, because the distortion used to find the elasticity is very weak when it reaches these tissues. However, methods that use acoustic radiation force impulse (ARFI) [8], shear wave elastography imaging (SWEI) [9, 10], supersonic shear imaging (SSI) [11], and shear wave elastography (SWE) [12] may create distortions in deeper tissues, facilitating generation of images in these regions [3,4,5,6].
Formation of the shear wave
The objective of this study was to use the elastographic technique in developing digital signal processing routines in order to estimate the viscosity and rigidity of structures using a filtering system that calculates the mean values around each pixel. Most of the studies in the literature assess methodologies with variations that focus on different models of transducers or ultrasound equipment [13]. The research that does include adjustment methods to improve image resolution in the inclusion area utilizes a method of calculating velocity [14], a type of inclusion, or biological tissue in vivo [15,16,17,18]. These studies refine traditional techniques for obtaining elastographic images such as signal inversion [16] and Butterworth or Kalman filters [17]. This study applies a new methodology which presents an option for adjusting ultrasound images using a median-type filter which is easier to implement. The area can be selected, along with the level of resolution in the elastography, which represents progress in handling shear wave signals in comparison with traditional methods [19,20,21]. Furthermore, it allows ultrasound technicians the possibility to better interpret the data collected in order to achieve a more precise diagnosis without the use of subjective evaluation methods such as visual interpretation [22].
Relationship between density and shear wave velocity
The relationship between the shear wave velocity and density of the medium was used to derive the elastogram, and considers the sequence of deductions made in the following equations [23,24,25].
Equation 1 initially represents stress σ, which describes the force applied in a given region by the ratio between force (F) (in our specific case, the force provided by the ultrasound transducer) and the area (A) in which it is applied [23].
$$\sigma = \frac{F}{A}$$
The rheology deformation, which represents the variation of a length in relation to initial length, is calculated using Eq. 2, where ε is the rheological deformation, L is the final length, and L0 is the initial length [23].
$$\varepsilon = \frac{{L - L_{0} }}{{L_{0} }}$$
Equations 1 and 2 can be used to obtain rigidity or Young's modulus (E). This parameter, which is shown in Eq. 3, describes the longitudinal deformation captured by the US transducer in the form of the RF signal received [23]:
$$E = \frac{\sigma }{\varepsilon }$$
Shear stress τ, which occurs when an applied force moves surfaces and leaves them at an angle different from the initial angle, is obtained by Eq. 4 [23]:
$$\tau = \frac{Fc}{A}$$
where Fc is shear force and A is the area where force is applied.
Subsequent calculations require shear deformation to be obtained (γ), as shown in Eq. 5, by the difference between the length after deformation (L) and initial length (L0) divided by the length after deformation (L). This calculation is performed via software through a US signal processing routine and serves as the basis for autocorrelation algorithms [24].
$$\gamma = \frac{{L {-} L_{0} }}{L}$$
Young's modulus measures stiffness in simple extension or compression. There are different ways to deform a material, resulting in different effects on the interatomic forces and consequently different effects on the material [18]. The deformation mode investigated and used in this study was shear stress [6]. Shear stress occurs when an applied force moves parallel surfaces, leaving them at an angle that differs from the initial angle between these two surfaces.
Like Young's modulus, the shear stress modulus is defined as the ratio between stress and deformation. Equation 6 presents the shear module G, which is obtained by the ratio between shear stress and shear deformation [24].
$$G = \frac{\tau }{\gamma }$$
Young's modulus (E) and the shear modulus in an isotropic material (G) (for which the physical properties are the same regardless of direction) are described by the following equation [24]:
$$G = \frac{E}{{2\left( {1 + v} \right)}}$$
where ν is the Poisson ratio that quantifies the transverse deformation.
Because of the complexity of describing and analyzing wave propagation in material media, some approximations have been made considering ideal liquids and isotropic and homogeneous solids (acoustic properties are constant in the wave propagation region). According to Cobbold [26], infinite wave propagation through material media is analyzed by introducing two elastic constants known as Lamé coefficients, as shown in Eq. 8:
$$2v = \frac{{\lambda_{1} }}{{\left( {\lambda_{1} +\upmu} \right)}}$$
where λ1 is the first Lamé constant and µ is the second.
The first Lamé constant (λ) relates transverse deformation to longitudinal stress, and is obtained by Eq. 9. The second Lamé constant or shear modulus in soft tissue relates shear deformation to shear stress (as shown above in Eq. 6) [26]:
$$\lambda_{1} = \frac{Ev}{{\left( {1 + v} \right)\left( {1 - 2v} \right)}}$$
The relationship between Young's modulus and the shear stress modulus in soft tissue is obtained in Eq. 10:
$$E = 3\upmu$$
and dynamic viscosity for liquids is represented in the following equation:
$$\sigma =\upmu_{2}^{{\frac{de}{dt}}} \to \frac{de}{dt} = \frac{d}{dt}\left( {\frac{{L - L_{0} }}{{L_{0} }}} \right)$$
where µ2 is the dynamic viscosity, σ is stress, and de/dt is the variation in rheological deformation over time [23].
The models described by Eqs. 3 and 11 express the difference between a solid and a liquid. Forces applied to solids cause deformations, and stress is consequently proportional to deformation; forces applied to liquids or fluids cause outflow, and in this case stress is proportional to the temporal rate of deformation.
In terms of the elastic constants, the equations for longitudinal and transverse waves are as follows:
$$Cp = \sqrt {\frac{{\lambda_{1} + 2\upmu}}{\rho }}$$
where Cp is the speed of the longitudinal wave and ρ is the density of the medium.
The speed of the longitudinal wave, based on transversal deformation and longitudinal stress (as shown in Eq. 9) can also be obtained by:
$$Cp = \sqrt {\frac{{E\left( {1 - v} \right)}}{{\rho \left( {1 + v} \right)\left( {1 - 2v} \right)}}}$$
Finally, the velocity of the shear wave Cs based on the shear modulus (in Eq. 7) is represented by [26]:
$$Cs = \sqrt {\frac{\mu }{\rho }} = \sqrt {\frac{E}{{2\rho \left( {1 + v} \right)}}}$$
According to Lakes [27], typical biomaterials and materials with the characteristics of soft tissue, such as tissue-mimicking elastography phantoms, have a longitudinal wave speed many times greater than the transverse wave speed. In some soft tissue, the longitudinal wave speed is on the order of 1500–1580 m/s, while the transversal velocity is on the order of 0.5–20 m/s. Most living tissue is consequently non-compressible, with the Poisson coefficient ranging from 0.49 to 0.5 [28]. Therefore, the shear wave velocity obtained via US indirectly determines the density of the medium.
Once the ratio of shear wave velocity and the density of the medium are determined, US can be used to generate the SSI shear wave, which generates a wave front perpendicular to the focal point. A processing algorithm is used to calculate the density at different points in the phantom, using US to assess the different speeds of the wave spreading in the lateral positions. The rigidity of the phantom analyzed is determined on a full 2-D map. In post-processing, the filter algorithm is applied to the median of the array surrounding each pixel, which adjusts the elastographic map [26].
The impulse sequences were programmed in a Verasonics Vantage 128 US system using a L11-4v model transducer with 128 elements. The research platform consists of two parts: (1) a dedicated hardware system (to transmit and acquire US signals), and (2) a software package with open and proprietary functions running on a computer via MATLAB. The signal acquired is compressed before transferring to the computer; all beam formation and treatment is subsequently done using software. The acquisition modules are responsible for transmitting and receiving the US on each channel. The RF data received are stored in the local memory of these modules, and the data acquisition system is connected to the computer through a PCI cable. Figure 2 shows the setup used for data acquisition.
Setup used for data acquisition showing the CIRS 049A elastography phantom [30] and the Verasonics Vantage 128 system and L11-4v model transducer with 128 elements
Table 1 presents the main parameters used in the routine implementation of Verasonics equipment and respective adjustments. These parameters must comply with the sequencing of data processing protocols used in the Verasonics system (shear wave elastography imaging). Selection of the sequencing parameters is described in a system programming script. The data processing line to calculate shear wave speed includes estimating different densities in the phantom and filtering the data to estimate the velocity and subsequently produce an elastogram [29].
Table 1 Routine acquisition parameters in MATLAB SWEI structure
The Verasonics system is configured in a MATLAB programming environment. To generate a sequence of images, the user writes a programming script that generates a range of parameters which are loaded on the Verasonics scanner during execution. The objects are defined using MATLAB structures. The parameter base and action sequence are defined when the programming script is executed, and these definitions are filled in and saved in a series of structures; this file can then be loaded into the system by a program manager (VSX) to implement the sequence during execution. The RF data channel can be accessed after the image sequence is completed. If the Verasonics data beamforming is acquired in phase and in quadrature (IQ), the data from the detected image can also be accessed.
The procedures for calibrating the beam position, scanner time, and transducer face heating are based on the routine programming interface specifications of the US equipment manufacturer in order to avoid problems with measuring shear wave speed (SWS) and avoid likely damage to the transducer.
For the test sequences, a CIRS model 049A elastography phantom was used (Fig. 3) [30]. This device has eight cylindrical inclusions with gradually decreasing diameters, which are alternately positioned at two different depths with four different types of densities. Values for the elasticity of the 49A phantom models are presented in Table 2. These will be used for comparison of elastographic images obtained after processing.
CIRS 049A elastography phantom [30]
Table 2 Relationship between mold model and characteristic compression.
Applied algorithm
The phantom was excited with signals configured by the Verasonics device with amplitude of 50 V in plane wave mode; in other words, all the elements of the transducer were active at the same time with rates of 100 frames per second and velocity of 1540 m/s.
The event routine is presented in a simplified form in Fig. 4, representing the main stages of the process and the respective results for sequence transition to produce the elastogram.
Flowchart of the program sequence for elastogram generation
The IQ data were initially reconstructed by the Verasonics system for each acquisition in B mode. These data were obtained from three different angles (8°, 0°, and − 8°) and then passed through a third-order moving average filter between the frames, generating the angular composition [31]. In this way, each average frame was the result of three average original frames from different angles. The speed of the axial particle is proportional to the Doppler frequency ratio, namely the difference between the frequencies received and transmitted [31].
The resulting IQ signal can be used to estimate the movement of the shear wave. The 2D autocorrelation approach according to Loupas et al. [32] was used to estimate the speed of the local axial particle. Equation 15 shows the final expression for axial speed v [32], where c is the speed of the ultrasonic waves in the medium, Ts is the pulse repetition period, ts is the sampling time along the depth, fc is the ratio between the wave length of the central frequency wave and RF sample, M is the sample depth, N is the sample of different frames, m is the column number, and n is the line number.
$$v = \frac{cts}{{4\pi f_{c} Ts}}\tan^{ - 1} \left\{ {\frac{{\mathop \sum \nolimits_{n = 0}^{N - 2} \left[ {\mathop \sum \nolimits_{m = 0}^{M - 1} Q\left( {m,n} \right)\mathop \sum \nolimits_{m = 0}^{M - 1} I\left( {m,n + 1} \right) - \mathop \sum \nolimits_{m = 0}^{M - 1} I\left( {m,n} \right)\mathop \sum \nolimits_{m = 0}^{M - 1} Q\left( {m,n + 1} \right)} \right]}}{{\mathop \sum \nolimits_{n = 0}^{N - 2} \left[ {\mathop \sum \nolimits_{m = 0}^{M - 1} I\left( {m,n} \right)\mathop \sum \nolimits_{m = 0}^{M - 1} I\left( {m,n + 1} \right) + \mathop \sum \nolimits_{m = 0}^{M - 1} Q\left( {m,n} \right)\mathop \sum \nolimits_{m = 0}^{M - 1} Q\left( {m,n + 1} \right)} \right]}}} \right\}$$
As the shear wave propagates in a direction that is perpendicular to its direction of polarization [33], a directed impulse beam is required to produce inclined shear waves to compose the image. The L11-4v linear array transducer was used to produce shear waves from different angles using different parts of the probe. The combined impulse technique, introduced by Song et al. [29] was used to transmit multiple pulse beams simultaneously at different depths of the probe focus. This produces several shear waves of different angles in the different fields of view (FOV) at the same time. The same Verasonics system was used to produce the US beam and to track the movement of the resulting shear wave. The particle speed signals caused by the propagation of shear waves were used as a signal of shear wave movement in this study, calculated from the data for consecutive frames in IQ using a 2-D autocorrelation method [12].
The raw motion signal of the shear wave was calculated using three pixels in the axial spatial dimension, and time with two sampling points in the same direction. Finally, a spatial median filter was used in each frame of the shear wave signal to improve image definition.
Estimating velocity
The data were initially filtered according to the direction of the waves generated. Discrete Fourier transformation (DFT) for two dimensions was used to convert the data for the frequency domain. A mask was applied to select the quadrants that represent the direction of velocity, obtaining only positive speeds for the shear wave. Finally, inverse Fourier transformation was applied to return the values to the time domain [6].
Two pixels of reference were chosen to estimate the shear wave velocity in a target pixel. In order to compare the axial velocity, which depends on the time between the two pixels of reference, the time required for the shear wave to move between them is estimated. The average speed of the shear wave between the pixels of reference was calculated for each period and considered the speed of the local shear wave at the target pixel [34]. The selected reference pixels had the same depth as the target pixel, and three pixels located laterally to the left and right.
The propagation and attenuation of the shear wave can be seen in Fig. 5. Figure 5a shows sample positions of a pixel target in red (T1), together with its reference pixels in blue (T2). A cutting wave that propagates in the lateral direction would first reach one of the reference pixels. Figure 5b shows the axial speeds resulting from two of these pixels (one a reference pixel and the other the target) after directional filtering was applied; this is necessary because of the displacement of the waves, which can move from right to left or left to right. Only one direction matters for determining velocity.
Two shear wave forms from two neighboring pixels a before and b after directional filtering
Directional filtering removes interference, leaving the two waves with similar shapes. The cross-correlation can be used to estimate the delay in the arrival times for the shear wave between the two sites [35]. With the array of delay times and the distances traveled, the velocities can finally be obtained by dividing the distance by the respective time.
The frequency response in a Butterworth filter is very flat, without rippling or undulations in the pass band, and approaches zero in the rejected band. For a first-order filter, the response varies by − 6 dB per octave. The magnitude of Butterworth filters drops as a linear function; this type of filter is the only one that maintains the same format for higher orders, although there is a steeper slope in the attenuated band [31]. The Butterworth filter was used in this study to compare the results.
The image processing is based on the elastogram generated to calculate the velocities. Three correction methods were used to show the differences in resolution of the reconstructed images. A Butterworth filter was configured as a low-pass filter, with the following programming function:
$$\left[ {{\text{b}},{\text{a}}} \right] = {\text{butter}}\left( {{\text{n}},{\text{Wn}}} \right);$$
where the response returns the coefficients of the transfer function for a digital low-pass n-order Butterworth filter with normalized cutoff frequency Wn.
Spectral inversion converts the response to an impulse as the pass band becomes the block band, and vice versa. In other words, this procedure transforms a low-pass filter into a high-pass one, and a high-pass filter into a low-pass filter. Since the low-frequency components were subtracted from the original signal, only the high-frequency components remain [31]. Spectral inversion was also utilized in this study to compare the results.
The filtered data were applied to the inverted signal method, where the result of the velocity matrix is calculated using the value of each inverted cell via the following function:
$${\text{Y}} = {\text{inv}}\left( {\text{X}} \right);$$
where X is the estimated shear wave speed after applying the Butterworth filter.
Finally, the proposed filtering algorithm for the medium is applied using a sub-array. This function allows the array size to be adjusted to determine the median value that best fits the image (in this specific case, 3 × 3). Median filtering with the medfilt2 function follows the routine in the attached file. The complete calculations and images generated are presented for comparison.
Initially, images were obtained which confirmed the displacement of the supersonic shear wave in the phantom in two directions. In the experimental test, displacement occurred in a linear manner in real time to demonstrate the process of generating shear waves. The cone of the selected supersonic wave travels across the region of observation from left to right, as seen in the sequence in Figs. 6, 7 where 4 frames were captured progressively to demonstrate the movement of the wave.
Screens showing the displacement of the supersonic shear wave cone (dotted line) starting on the left (a), crossing the phantom (b) to the right and respective velocity in the time sequence
Screens showing the displacement of the supersonic shear wave cone (dotted line) crossing the phantom (a) to the right (b), along the FOV, and respective velocity in the time sequence
Measurements were taken in the 049 CIRS phantom, to compare the levels of viscoelasticities cited in Table 1. The measurements were also taken at different positions to emphasize the variation with respect to the medium. For the first measurement of the phantom, a homogeneous region (without any inclusions) was chosen for testing. The spatial region of interest (ROI) selected is above the phantom inclusions (blue line), as seen in Fig. 8 in B mode. The elastographic maps were generated after processing the RF signals.
Screen in B mode for the ROI in a homogeneous area in the 049 CIRS phantom; note the inclusion positioned below as a reference
Figures 9, 10 shows the elastograms positioned in the homogeneous area of the phantom referring to Fig. 8 (ROI without inclusions), using the following methods: (a) default elastogram, (b) low-pass filter, (c) signal inversion, and (d) median filter. Note that there is no distortion in the shear wave velocity, and consequently the elastographic map remains uniform in the region of observation. The column to the right of the elastograms presents viscoelasticity values proportional to the shear wave velocity in m/s. These characteristics demonstrate the effectiveness of this method and serve as a reference for subsequent measurements and calculations.
Elastograms positioned in the homogeneous area of the phantom referring to Fig. 8, using the following methods: a default elastogram, b low-pass filter
Next, the elastogram was produced using the traditional shear wave velocity technique (Fig. 9a). The resolution of this image was compared to identify the inclusion and differences in each method. First, a default low-pass Butterworth filter was applied (Fig. 9b). Next, an image correction method with signal inversion was used (Fig. 10a), followed by the proposed median spatial filter set to a 3 × 3 pixel array used in each frame of the shear wave signal (Fig. 10b).
Elastograms positioned in the homogeneous area of the phantom referring to Fig. 8, using the following methods: a signal inversion, and b median filter
The proposed filter is based on MATLAB's medfilt2 function. This function can perform median filtering of the data array for shear wave velocity in two dimensions. Each output pixel contains the average value around the corresponding pixel of the input image, and can also control the limits of this array and assign the values for the edges of the image.
The filter uses an algorithm with a linear function to perform the smoothing procedure. The results of the processing can be analyzed in the Fourier domain or in a frequency domain. The response of a linear spatial smoothing filter is the average of the pixels contained in the vicinity of the filtering mask. These filters are sometimes called mean filters, where the size of the mask array determines the degree of detail loss and the degree of smoothing. The median filter scans the image and calculated a region around each point in the image, calculating the median values in the region and replacing the value of the point with this median value. The smoothing filter eliminates noise while preserving the contour of the image.
The next step was to obtain the elastographic maps for each internal model of the phantom. To differentiate the analyses, the measuring area was altered slightly by displacing the ROI. This shows how the relationship between the area of inclusion with respect to the medium is modified to observe the changes in results. Figure 11 presents the type I model and the relationship between the medium and the elastographic mold is close to 50%. Here the inconclusive region appears on the left, serving as a reference for the focal point of the shear wave's origin.
Screen in B mode showing type I inclusion in the 049A CIRS phantom, with half of the area of the ROI positioned in the homogeneous area
In the other measurements, the type of model and the proportionality ratio with the elastographic phantom medium were altered. Measurements were progressively taken of the molds with greater response to pressure and changes in the area of observation where the phantom 049A CIRS was included.
The region of the inclusion presents a variation in the peak axial velocity of the shear wave, and it is consequently possible to distinguish the area in the maps in Figs. 12, 13. The values show variations due to the difference in elasticity, which is calculated by the interpolation used which analyzes the shear wave velocity through the pixels as a function of the fixed distance for determining the variation in velocity. The homogeneous area remains constant, regardless of the method used for interpolation or the filter used. For all methods, the inconclusive region continues to be displayed in the upper left corner of Figs. 12, 13 as a reference for the origin of the shear wave. The filter settings and reference distance for interpolation remained constant in all measurements from the phantom to facilitate interpretation and comparison of the elastograms.
Elastograms referring to Fig. 11 with the ROI showing half of its area in the homogeneous portion (light blue), using the following methods: a default elastogram, b low-pass filter
Elastograms referring to Fig. 11 with the ROI showing half of its area in the homogeneous portion (light blue), using the following methods: a signal inversion, and b median filter
Next, the measurements were taken for the other molds of the phantom. Type II is presented in Fig. 14 in B mode, with the inclusion highlighted, followed by the respective elastographic maps in Figs. 15, 16. Type III is presented in Fig. 17 in B mode and the respective elastographic maps in Figs. 18, 19. Type IV is presented in Fig. 20 in B mode and the respective elastographic maps in Figs. 21, 22.
Screen in B mode showing type II inclusion in the 049A CIRS phantom, with the ROI altering the homogeneous area for clearer analysis of the elastogram
Elastograms referring to Fig. 14, which comparatively show the increase in the area of the inclusion, emphasizing the results of the elastogram, using the following methods: a default elastogram, b low-pass filter
Elastograms referring to Fig. 14, which comparatively show the increase in the area of the inclusion, emphasizing the results of the elastogram, using the following methods: a signal inversion, and b median filter
Screen in B mode showing type III inclusion in the 049A CIRS phantom, emphasizing the progressive alteration of the area of inclusion in order to differentiate the results, demonstrating the efficacy of the elastogram
Elastograms referring to Fig. 17, which features the area of inclusion in obtaining the elastogram, using the following methods: a default elastogram, b low-pass filter
Elastograms referring to Fig. 17, which features the area of inclusion in obtaining the elastogram, using the following methods: a signal inversion, and b median filter
Screen in B mode showing type IV inclusion in the 049A CIRS phantom
Elastograms referring to Fig. 20; note that in addition to changes in the area of the inclusion, the value also changes proportional to the viscosity of the types of phantoms used, using the following methods: a default elastogram, b low-pass filter
Elastograms referring to Fig. 20; note that in addition to changes in the area of the inclusion, the value also changes proportional to the viscosity of the types of phantoms used, using the following methods: a signal inversion, and b median filter
After the graphics were obtained, the velocities were assessed for each inclusion model with respect to the values supplied by the manufacturer, considering the variation in shear wave velocity within the phantom, which were used to calculate the elasticity using the proposed method. The values provided by the phantom manufacturer and the results obtained are presented in Table 3.
Table 3 Comparison of obtained results for elasticity (kPa) on the elastography phantom
Comparison of the results obtained from the Butterworth filter, signal inversion, and the proposed median filter processing methods using the elastography phantom appear in Table 4.
Table 4 Comparison of the results for the Butterworth filter, signal inversion and the proposed median filter processing methods using the elastography phantom
Many approaches have been used to estimate shear wave velocity, such as inversion of the second-order wave equation and inversion of the Helmholtz equation [12]. Shear wave speed is estimated from the equation of motion for waves in media; these methods only use waves that are present in the tissue. Both approaches invoke second-order derivatives, which are difficult to estimate due to the low signal-to-noise ratio (SNR) which is inherent in the data [12]. To overcome this limitation, time-of-flight (TOF) approaches based on estimates of shear wave velocity have been introduced [36].
These approaches can be divided into various forms of cross-correlation (CC) and time-to-peak methods (TTP). The 2D approach to calculating TOF speed based on CC was developed and implemented to estimate shear wave velocity from any direction of the propagating wave [37, 38]. Multiple cross-correlations were run along the direction of wave propagation to provide an average estimate for the shear wave speed. The TTP method reduces the problem to a first-order spatial differentiation along a 2D map representing the maximum time of arrival of the wave form [12]. However, these methods require computational time and sophisticated hardware.
Implementing the data processing routine described in this study not only visualized shear wave action, it improved perception of the effects of these waves through the images compared to methods other authors have used to obtain elastographic images [12, 36, 37].
In relation to Engel's estimation method [12], the inclusion evaluation array could be adjusted, which smoothed the contours. And median filtering produced better image quality than the traditional method developed by Tanter [36], despite greater computational costs. The method presented by Song et al. [39] is one of the scientific bases used in this work. The condition used to differentiate the technique in obtaining the elastogram was the medfilt2 function, which expands the possibilities for image correction but comes with the disadvantage of not eliminating the inconclusive region, requiring redirection of the ROI to remove it from the elastogram.
Another major advance was the ability to implement all the data processing on the Verasonics research platform, combined with the fact that improving the generation of shear waves without relying on external sources such as other transducers, for example, provided a better solution than other methods, such as those presented by Zhao et al. [38].
The support offered by the manufacturer focuses on routines which are already included in the equipment. However, researchers in the area have been conducting innovative studies, since this equipment allows implementation of open hardware routines and diversified research possibilities including Doppler, synchronization of multiple systems, RF filters, and 2D autocorrelation algorithms [32].
The filter applied using the medfilt2 function was typified by more uniform representation of the regions where the phantom was included, compared with the methods proposed by Nordenfur [31], Lu et al. [17, 40], and Song [35], which were the foundations of this work. Figure 23a shows that the region of inclusion in a breast phantom (059 CIRS) can be best delimited by color contrast without damaging the interpretation of the viscosity value to be measured, compared with the traditional filtering method used by Carlsen et al. [41], (Fig. 23b).
Elastograms without the median filter (a) and using the filter (b) in a 059 CIRS breast phantom
Another important feature is the ability to adjust the pixel array region (from 3 × 3) to compare the standardized median value during the process of obtaining the images, which may be appropriate depending on the size of the inclusion to be detected. The ROI can be resized in proportion to the size of the inclusion for more uniform representation; in other words, if the inclusion is larger than the initial array, a new array covering this region can be resized to improve visualization for the operator, making the area of analysis more linear and facilitating interpretation of the real dimensions of the inclusion.
Analysis of the results presented in Fig. 24 shows the linearization of values for the regions of the four different types of inclusion in the phantom, as well as the smoothing of contours, improving interpretation of the dimensions of these regions. In quantitative terms, the processing of the signals acquired was proven by the respective results presented in Table 3, as a function of the elastograms obtained.
Comparison of the column of images generated from the original elastogram (blue) and the same column using the median filter (red)
The methodology for obtaining elastographic images depends on the difference in velocity between the pixels which are next in line to calculate viscoelasticity. However, the adjacent pixels above and below may have slightly different values depending on the quality of the RF data acquired and the choice of the distance between them which is used for calculation. Median array filtering tries to correct these errors, and the medfilt2 function permits more sensitive adjustment to assist in determining the result of exam images. Figure 24 shows the effect of this processing by selecting a column of images generated for the method with and without filtering, which demonstrates the smoothing effect, decreasing the variation in velocity values for the shear wave in the phantom, particularly when the wave meets the inclusion.
Analysis of the results shows that the difference was elevated for type I and II inclusions; however, considering that the standard deviation provided by the manufacturer exceeds 28% (as shown in Table 2), the values are acceptable. For type III and IV inclusions, the percentage error was low (bellow 7.0%).
The images obtained with the proposed median filter method clearly show better resolution when compared with images obtained using the methods in the literature (Butterworth filter and signal inversion) (see Figs. 12, 13, 15, 16, 18, 19, 21 and 22). Table 4 shows the quantitative analysis of the results using the Butterworth filter [31], signal inversion [31], and the proposed median filter method. The average error is within the manufacturer's margin, and is always lower than the error for other methods, which corroborates the efficacy of the median filter method.
This article is relevant because of the experimental tests performed to validate the proposed method based on commercial phantoms and analysis of its performance in relation to other studies. The US system programming provided a sequence of methods to generate images including the standard elastogram, velocity inversion, and low-pass filtering. In addition, direct comparison of the results showed the proposed method effective in relation to these current techniques for generating shear waves and correcting images to obtain elastograms.
The routines defined in this study provide scope for future work. This includes add-ons such as reduced processing time for real time elastography imaging, incorporation of technologies applied to humans, aggregating algorithms for pulse sequences that eliminate the inconclusive regional, automatic selection of the best median array for image filtering, and/or use of specific transducers for clinical examinations (prostate, transvaginal, etc.).
Comparison of the images obtained from these methods demonstrated the fundamental objectives of this study: to assist in early diagnosis of tumors and to guide medical professionals and health institutions in treatment and accurate assessment of disease.
Doherty JR, Trahey GE, Nightingale KR, Palmeri ML. Acoustic radiation force elasticity imaging in diagnostic ultrasound. IEEE Trans Ultrason Ferroelectr Freq Control. 2013;60(4):685–701.
Tanter M, Fink M. Ultrafast imaging in biomedical ultrasound. IEEE Trans Ultrason Ferroelectr Freq Control. 2014;61(1):102–19.
Lee WN, Pernot M, Couade M, Messas E, Bruneval P, Bel A, Hagège A, Fink M, Tanter M. Mapping myocardial fiber orientation using echocardiography-based shear wave imaging. IEEE Trans Med Imaging. 2012;31(3):554–62.
Bercoff J. Ultrafast ultrasound imaging. Ultrasound imaging—medical applications. InTech: Rijeka; 2011.
Montaldo G, et al. Coherent plane-wave compounding for very high frame rate ultrasonography and transient elastography. IEEE Trans Ultrason Ferroelectr Freq Control. 2009;56(3):489–506. https://doi.org/10.1109/tuffc.
Sandrin L, Tanter M, Catheline S, Fink M. Shear modulus imaging with 2-D transient elastography. IEEE Trans Ultrason Ferroelectr Freq Control. 2002;49(4):426–35.
Bruneel C, Torguet R, Rouvaen KM, Bridoux E, Nongaillard B. Ultrafast echotomographic system using optical processing of ultrasonic signals. Appl Phys Lett. 1977;30(8):371–3.
Sarvazyan AP, Rudenko OV, Swanson SD, Fowlkes JB, Emelianov SY. Shearwave elasticity imaging: a new ultrasonic technology of medical diagnostics. Ultrasound Med Biol. 1998;24:1419–35.
Bercoff J, Tanter M, Fink M. Sonic boom in soft materials: the elastic Cerenkov effect. Appl Phys Lett. 2004;84(12):2202–4.
Diao X, Zhu J, He X, Chen X, Zhang X, Chen S, Liu W. An ultrasound transient elastography system with coded excitation. Biomed Eng online. 2017;16:87.
Bavu É, Gennisson JL, Couade M, Bercoff J, Mallet V, Fink M, Badel A, Vallet-Pichard A, Nalpas B, Tanter M, Pol S. Noninvasive in vivo liver fibrosis evaluation using supersonic shear imaging: a clinical study on 113 hepatitis C virus patients. Ultrasound Med Biol. 2011;37(9):1361–73.
Engel AJ, Bashford R. A new method for shear wave speed estimation in shear wave elastography. IEEE Trans Ultrason Ferroelectr Freq Control. 2015;62(12):2106–14.
Mulabecirovic A, Vesterhus M, Gilja OH, Havre RF. In vitro comparison of five different elastography systems for clinical applications, using strain and shear wave technology. Ultrasound Med Biol. 2016. https://doi.org/10.1016/j.ultrasmedbio.2016.07.002.
Fovargue D, Kozerke S, Sinkus R, Nordsletten D. Robust MR elastography stiffness quantification using a localized divergence free finite element reconstruction. Med Image Anal. 2017;44:126–42. https://doi.org/10.1016/j.media.2017.12.005.
Havre RF, Waage JER, Mulabecirovic A, Gilja OH, Nesje LB. Strain ratio as a quantification tool in strain imaging. Bergen: Department of Medicine, Haukeland University Hospital; 2018.
Mousavi SR, Rivaz H, Sadeghi-Naini A, Czarnota GJ, Samani A. Breast ultrasound elastography using full inversion based elastic modulus reconstruction. Biomed Eng Online. 2014;13:132. https://doi.org/10.1186/1475-925X-13-132.
Lu M, Zhang H, Wang J, Yuan J, Hu Z, Liu H. Reconstruction of elasticity: a stochastic model-based approach in ultrasound elastography. Biomed Eng Online. 2013;12:79.
Pan X, Liu K, Bai J, Luo J. A regularization-free elasticity reconstruction method for ultrasound elastography with freehand scan. Biomed Eng Online. 2014;13:132.
Audière S, Angelini ED, Sandrin L, Charbit M. Maximum likelihood estimation of shear wave speed in transient elastography. IEEE Trans Med Imaging. 2014. https://doi.org/10.1109/tmi.2014.2311374.
Carlsen F, Săftoiu JA, Lönn L, Ewertsen C, Nielsen MB. Accuracy of visual scoring and semi-quantification of ultrasound strain elastography—a phantom study. PLoS ONE. 2014;9(2):e88699. https://doi.org/10.1371/journal.pone.0088699.
Song P, Macdonald M, Behler R, Lanning J, Wang M, Urban M, Manduca A, Zhao H, Callstrom M, Alizad A, Greenleaf J, Chen S. Two-dimensional shear-wave elastography on conventional ultrasound scanners with time-aligned sequential tracking (TAST) and comb-push ultrasound shear elastography (CUSE). IEEE Trans Ultrason Ferroelectr Freq Control. 2015. https://doi.org/10.1109/tuffc.2014.006628.
Fovargue D, Nordsletten D, Sinkus R. Stiffness reconstruction methods for MR elastography. NMR Biomed. 2018. https://doi.org/10.1002/nbm.3935.
Janmey PA, Schliwa M. Rheology. Curr Biol. 2008;18:639–41.
Vincent J. Basic elasticity and viscoelasticity, structural biomaterials. New Jersey: Princeton University Press; 2012. p. 1–28.
Santos F. Sistema Internacional de Unidades. 2012. http://www.inmetro.gov.br/noticias/conteudo/sistema-internacional-unidades.pdf. Accessed 2018.
Cobbold C. Foundations of biomedical ultrasound. New York: Oxford University Press; 2007.
Lakes RS, Park JB. Biomaterials: an introduction. 2nd ed. New York: Springer Science + Business Media; 1992.
Almeida J. Sistema para análise viscoelástica de tecidos moles por ondas de cisalhamento usando excitação magnética e medida ultrassônica. Universidade de São Paulo FFCLRP-Departamento De Física Programa De Pós-Graduação Em Física Aplicada À Medicina E Biologia, Ribeirão Preto. 2015.
Deng Y, Rouze NC, Palmeri ML, Nightingal KR. Ultrasonic shear wave elasticity imaging sequencing and data processing using a verasonics research scanner. IEEE Trans Ultrason Ferroelectr Freq Control. 2017;64(1):164–76.
Manual for Phantom Elastográfico Model 049A—CIRS.
Nordenfur T. Comparison of pushing sequences for shear wave elastography. 2013. www.Diva-portal.org. Accessed 2018.
Loupas T, Powers JT, Gill RW. An axial velocity estimator for ultrasound blood flow imaging, based on a full evaluation of the Doppler equation by means of a two-dimensional autocorrelation approach. IEEE Trans Ultrason Ferroelectr Freq Control. 1995;42(4):672–88.
Weaver JB, Pattison AJ, McGarry MD, Perreard IM, Swienckowski JG, Eskey CJ, Lollis SS, Paulsen KD. Brain mechanical property measurement using MRE with intrinsic activation. Phys Med Biol. 2012;57(22):7275–87.
Zile MR, Baicu CF, Gaasch WH. Diastolic heart failure—abnormalities in active relaxation and passive stiffness of the left ventricle. N Engl J Med. 2004;350(19):1953–9.
Song P. Innovations in ultrasound shear wave elastography. Thesis submitted to the faculty of the Mayo Clinic College of Medicine, Mayo Graduate School. 2014.
Tanter M, Bercoff J, Athanasiou A, Deffieux T, Gennisson JL, Montaldo G, Muller M, Tardivon A, Fink M. Quantitative assessment of breast lesion viscoelasticity: initial clinical results using supersonic shear imaging. Ultrasound Med Biol. 2008;34(9):1373–86.
Song P, Manduca A, Zhao H, Urban MW, Greenleaf JF, Chen S. Fast shear compounding using robust 2-D shear wavespeed calculation and multi-directional filtering. Ultrasound Med Biol. 2014;40(6):1343–55.
Zhao H, Song P, Meixner DD, Kinnick RR, Callstrom MR, Sanchez W, Urban MW, Manduca A, Greenleaf JF, Chen S. External vibration multi-directional ultrasound shear wave elastography (EVMUSE): application in liver fibrosis staging. IEEE Trans Med Imaging. 2014;33(11):2140–8.
Song P, Urban MW, Manduca A, Zhao H, Greenleaf JF, Chen S. Comb-push ultrasound shear elastography (CUSE): a novel and rapid technique for shear elasticity imaging. In: IEEE international ultrasonics symposium. Dresden: IEEE; 2012. p. 1842–5.
Lu M, Wu D, Lin W, Li W, Zhang H, Huang W. A stochastic filtering approach to recover strain images from quasi-static ultrasound elastography. Biomed Eng Online. 2014;13:15.
Carlsen JF, Ewertsen C, Săftoiu A, Lönn L, Nielsen MB. Accuracy of visual scoring and semi-quantification of ultrasound strain elastography—a phantom study. PloS ONE. 2014. https://doi.org/10.1371/journal.pone.0088699.
RPC developed the software and installed it in the ultrasound equipment, reviewed the results, and drafted the manuscript. AAA reviewed the results and prepared the manuscript. JMM proposed the idea, reviewed the results, and drafted the manuscript. All authors read and approved the final manuscript.
This research was supported by the following agencies: CNPq, FINEP, Fundação Araucária, and the Brazilian Ministry of Health.
The dataset used and analyzed in the current study are available from the corresponding author on reasonable request.
There is no founding for presented research.
Electrical Engineering Department and the Graduate School of Electrical Engineering (DAELT), Federal University of Technology–Paraná (UTFPR), Curitiba, PR, Brazil
Rubem P. Carbente
Electrical/Electronic Engineering Department and the Graduate School of Electrical Engineering and Applied Computer Sciences (DAELT–DAELN–CPGEI), Federal University of Technology–Paraná (UTFPR), Curitiba, PR, Brazil
Joaquim M. Maia & Amauri A. Assef
Joaquim M. Maia
Amauri A. Assef
Correspondence to Rubem P. Carbente.
Carbente, R.P., Maia, J.M. & Assef, A.A. Image reconstruction utilizing median filtering applied to elastography. BioMed Eng OnLine 18, 22 (2019). https://doi.org/10.1186/s12938-019-0641-6
Ultrafast imaging | CommonCrawl |
Multilevel Modelling for Public Health and Health Services Research pp 255–269Cite as
Multilevel Logistic Regression Using MLwiN: Referrals to Physiotherapy
Alastair H. Leyland3 &
Peter P. Groenewegen4
First Online: 29 February 2020
This chapter contains a tutorial for analysing a dichotomous response variable in multilevel analysis using multilevel logistic regression.
After introducing the multilevel logistic regression model, we move on to the example data set that will be used. This concerns variation in referral rates of general practitioners (GPs) to physiotherapists. The outcome or dependent variable is whether or not a patient was referred to a physiotherapist, something that may be influenced by characteristics of both patient and GP. We briefly discuss the theoretical model that the authors of this study applied to formulate hypotheses to explain the apparent variation in referrals.
The data were collected in the late 1980s in the Netherlands. The structure of the data was that consultations for problems with the locomotive system (the main reason for referral to physiotherapists) were nested within GPs.
In the chapter we describe the analysis of these data using MLwiN.
Multilevel analysis
Many research problems involve a response variable which is dichotomous; for example, a patient has a good or a poor outcome following surgical intervention. Such data are often assumed to arise from a binomial distribution and may be modelled using logistic regression. More generally, data may be in the form of a proportion (such as the proportion of GP consultations resulting in a referral to physiotherapy) and may be modelled in a similar manner. This chapter shows how a multilevel logistic regression model is formulated for binomial data clustered within higher-level units. We then introduce the example and the data set used. This is followed by an application within MLwiN. Further details on multilevel modelling and MLwiN are available from the Centre for Multilevel Modelling http://www.bristol.ac.uk/cmm/. The materials have been written for MLwiN v3.01. The teaching version of the software is available from https://www.bristol.ac.uk/cmm/software/mlwin/download/.
Multilevel Logistic Regression Model
Let yij denote a binary response (0 or 1) for the ith individual in the jth unit, and let πij denote the probability of a 'success' (i.e. yij = 1). The binomial distribution is characterised by two parameters: the probability of success πij and the number of 'trials' n. So if the outcome were the proportion of GP consultations that resulted in physiotherapy, the denominator n would be the total number of relevant consultations. For a logistic regression model, when each data item refers to an individual response with a dichotomous outcome rather than a proportion, the denominator is always equal to one. This means that we have
$$ {y}_{ij}\sim \mathrm{Binomial}\left(1,{\pi}_{ij}\right) $$
In a random intercept multilevel logistic regression model, we then model the transformed probability πij as a linear combination of a series of covariates or explanatory variables xpij together with a random effect for each higher-level unit u0j so that we can write
$$ \mathrm{logit}\left({\pi}_{ij}\right)=\log \left(\frac{\pi_{ij}}{1-{\pi}_{ij}}\right)={\beta}_0+{\beta}_1{x}_{1 ij}+\cdots +{u}_{0j} $$
As for the multilevel linear regression model, we make an assumption about the distribution of the higher-level residuals u0j
$$ {u}_{0j}\sim N\left(0,{\sigma}_{u0}^2\right) $$
Alternative link functions to the logit link can be employed for dichotomous outcomes; common alternatives are the probit and complementary log-log links. The logit link has the advantage that the parameter estimates βp can be interpreted as log odds ratios (and so, when exponentiated, they can be interpreted as odds ratios). For further details of link functions, the reader is referred to general works such as that by McCullagh and Nelder (1989).
Example: Variation in the GP Referral Rate to Physiotherapy
Until recently, patients in the Netherlands (from where the data used in this example are drawn) had to be referred by a GP before they could visit a physiotherapist. GPs are still the major source of referrals to physiotherapists in primary healthcare. Patients are predominantly referred to physiotherapists when they have complaints relating to the locomotive system. Of all patients that present their problem to their GP, a varying proportion is referred to a physiotherapist. The aim of the original study was to explain the variation between GPs in physiotherapy referrals (Uunk et al. 1992).
The authors followed the logic that was explained in Chap. 2. The average referral rate of the GPs in their sample for patients with complaints related to the locomotive system was 24%. This percentage varied between GPs from a low of 11% to a high of 45%. So some GPs referred only one out of ten patients with problems in the locomotive system to a physiotherapist, whilst at the other end of the scale almost half of another GP's patients were referred. The authors constructed an explanatory model based on social production function (SPF) theory (again see Chap. 2; Lindenberg 1996).
The GPs could either treat the patients themselves, including the use of a 'wait and see' policy, or they could refer patients to a physiotherapist. The dependent variable is therefore dichotomous. Following SPF theory it was assumed that GPs have two goals: improving their patients' health and increasing their own well-being. It was further assumed that both GPs and patients had resources that they could use to reach their goals. The theoretical model is given in Fig. 12.1.
Fig. 12.1
Theoretical model to explain variation in referrals to physiotherapy
Starting from the right-hand side of Fig. 12.1, the dependent variable is whether a patient is referred to physiotherapy or not. Preceding this are two boundary conditions: firstly, patients have to visit their GP with health complaints for which referral to a physiotherapist is a relevant alternative. The authors restricted the data to patients with complaints of the locomotive system. Hence this condition was fulfilled. The separate diagnoses were used in the analysis to take the case-mix of different GPs into account. The second condition is that there are physiotherapists to whom patients can be referred. That condition is always fulfilled globally, but there is variation in the local availability of physiotherapists within the practice area of the GPs. This variable was therefore used as a control variable.
The assumption was that GPs want to improve their patients' health. Whether they can realise this goal by referring a patient to physiotherapy might depend on their knowledge of and experience with physiotherapy. As the authors did not have a direct measure of this, they used the number of years of experience that each GP had working as a GP. It is also assumed that GPs want to achieve personal goals: well-being and social approval (from patients, colleagues and physiotherapists). Their workload and the way they were paid (depending on whether a patient was publicly or privately insured) were both assumed to influence well-being. The type of practice was considered a potential influence of sources of social approval: in single-handed practices, GPs depend more on their patients for social approval. The authors had information on whether GPs had physiotherapists in their social network. They interpreted this information in two ways: either this might influence the possibility of acquiring social approval through the referral of patients to physiotherapists, or it might relate to their knowledge of physiotherapy. Finally, it was assumed that patients themselves might want to visit a physiotherapist and that those patients who had achieved a higher educational level would be better able to put their point forward when discussing this issue with their GP. Patient characteristics such as age and sex were used as control variables. In the example dataset, we will use a less extensive set of variables for the sake of simplicity. However, you will still be able to explore the data and test your own ideas.
The data were collected in 1987 as part of a large national survey of general practice (Van der Velden 1999). The starting point was a sample of 100 GP practices in the Netherlands. The following data are relevant to this example:
GPs in these practices recorded all contacts with their patients over a period of 3 months, including the diagnosis and whether a patient was referred to a physiotherapist.
GPs filled in a questionnaire.
All patients on the list of each practice were sent a short questionnaire to collect social and demographic background variables.
The contacts of the same patients for the same health problem were combined into care episodes. This is especially relevant in the case of referrals where patients might first have a consultation, presenting their problem, and their GP might advise them to wait for a couple of weeks and come back if their complaints did not disappear. If we calculated the referral rate using separate contacts instead of the care episodes, we would therefore tend to find much lower referral rates. Consequently, the data have five levels: the practice, the GPs, the patients, the episodes and the contacts. In this example, we only use two levels: GPs and episodes (most GPs were single-handed at that time and the majority of patients only had one episode during the 3-month period). The data therefore form a two-level strict hierarchy of episodes nested within GPs. Patient characteristics, such as age, are simply distributed over episodes. The outcome of interest is a binary indicator of whether the patient was referred to a physiotherapist or not.
The data are contained in the MLwiN worksheet 'fysio.wsz'. When you open the worksheet, you will see the Names window providing an overview of all of the variables. Patients (as previously mentioned, these are not strictly speaking patients but episodes) are identified by PATID and GPs by GPID. Columns 3–8 contain data information relating to the patient. PATAGE is the patient's age in years, ranging from 18 to 98. This variable is subsequently categorised in PAGEGRP. This variable has been declared as a categorical variable; click on the variable name PAGEGRP in the Names window and then on the View button in the Categories section at the top of the Names window to display the category names. The categories used are 18–34, 35–44, 45–54, 55–64, 65–74, 75–84 and 85–98. PATSEX is also a categorical variable denoting the patient's sex—1 for male and 2 for female. Similarly, PATINSUR takes the value 1 if the patient is publicly insured and 0 if they are privately insured. The extent of the patient's education is contained in the variable PATEDU; this variable has four levels (1 for no formal education, 2 for those with only primary education, 3 for secondary and lower/middle vocational education and 4 for higher vocational and university education).
The variable DIAG contains the primary diagnosis resulting from the care episodes. These diagnoses are in 13 mutually exclusive categories:
diag_1: symptoms/complaints neck
diag_2: symptoms/complaints back
diag_3: myalgia/fibrositis
diag_4: symptoms of multiple muscles
diag_5: disabilities related to the locomotive system
diag_6: impediments of the cervical spine
diag_7: arthrosis cervical spine
diag_8: lumbago
diag_9: ischialgia
diag_10: hernia nuclei pulposi
diag_11: impediments of the shoulder
diag_12: epicondylitis lateralis
diag_13: tendinitis/synovitis
The variables in columns 9–13 relate to the GP. Their experience was measured by the number of years they had worked as a GP; we have rescaled this by dividing by 10 so that GPEXPER, a continuous variable, ranges from 0 to 3.3 indicating that the range of experience was from 0 to 33 years. Also at the level of GP we have workload (GPWORKLOAD), a continuous variable, containing the total number of contacts in the 3-month registration period, measured in thousands of patients, and ranging from 0.277 to 4.649 (i.e. from 277 to 4649 patients). The type of practice, PRACTYPE, is a categorical variable distinguishing between single-handed practices, partnership practices, group practices and health centres. The variable LOCATION differentiates between four categories of practice location: rural, suburban, urban and big city. Finally, the variable GPPHYSIFR indicates whether the GPs have physiotherapists in their social network (taking the value 1 for yes, 0 for no).
REFERRAL is the response variable with 0 indicating that the patient was not referred to a physiotherapist and 1 indicating that they were. (Note the use of 0 and 1 for the responses, not the 1 and 2 used by convention in some other software packages.) Finally, CONS is a column of 1s used to model the intercept in the fixed part of the model; for a random intercept model, this variable will also model the random variation across GPs.
Model Set-Up
Open the Equations window and the default unspecified model should appear. Declare REFERRAL to be the response, specify a two-level model and set the level 1 and 2 identifiers to be PATID and GPID, respectively. Next click on the N corresponding to the default (normal) distribution for the response and change this to binomial. Accept the default suggestion of a logit link to fit a logistic regression. The window should appear as follows:
In addition to asking for the model specification—the red β0x0 term—MLwiN requests the denominator nij. We can use the binomial distribution to model proportions in which case nij would be the number of 'attempts'. Since our data refer to individuals, and the response is whether or not an individual patient is referred to physiotherapy, the nij that we require is just another column of 1s. Click on the nij, select CONS from the drop-down list and click on Done.
Now we can specify the fixed part of the model and the level-2 variance component. It is sensible to start with a mean model to estimate the probability of being referred and see how this varies between GPs. Add CONS as an explanatory variable to estimate the mean probability and let this mean vary across GPs at level 2. The window should now appear as follows (you may need to press the + button at the bottom of the Equations window to expand the model that is shown).
The constant β0 will estimate the log odds of referral by the average GP and the GP residuals u0j, which are assumed to be normally distributed, will estimate the GP deviations from the mean log odds. The lowest level variance is a function of πij, the probability of individual i being referred to a physiotherapist by GP j; this is determined by the fact that we are assuming a binomial distribution and we do not estimate this variance explicitly.
Non-linear Settings
Before estimating the model, we need to specify the settings for non-linear estimation. There are three options that can be set, and this is done by clicking on the Nonlinear button at the bottom of the Equations window. The first option covers the distributional assumption and this relates to whether we wish to assume the variation at level 1 is binomial. For binary data we should assume that this is true rather than testing for over- or under-dispersion (Skrondal and Rabe-Hesketh 2007). The second and third options relate to the estimation procedure used by MLwiN. The estimation procedure is iterative and involves transforming the data and fitting a linear model. The linearisation option relates to the Taylor series expansion, which approximates a linear form for the model, and the options are either a first or second order expansion. The linearising expansion uses predicted values from one iteration to estimate the parameters at the next iteration, and estimation type relates to whether these predicted values are calculated from the fixed part of the model only (MQL) or from both the fixed and random parts of the model (PQL). The simplest estimation procedure (first order MQL) tends to underestimate the random parameters (variances), although it is computationally more robust than second order PQL estimation (Goldstein and Rasbash 1996; Rodríguez and Goldman 1995). A rule of thumb is to start with the simpler estimation procedure and, once a model of interest has been established, switch to second order PQL. To start with we shall use the default settings: a binomial distribution with a first order MQL estimation procedure. This can be selected by clicking on Use Defaults and then Done.
Once these options have been selected, we can estimate the model by clicking on the Start button at the top of the MLwiN window.
Model Interpretation and Model Building
The mean model should appear as follows:
Taking the antilogit function of the intercept (i.e. exp(β0)/[1 + exp (β0)]) gives the probability of being referred by the average GP to be 0.203. There is a great deal of variation between GPs and we can use this estimate to calculate a 95% confidence interval for the proportion of patients receiving a referral from their GP, again using the antilogit function. Thus, in 95% of GPs the probability of referral is between antilogit \( \left(-1.366-1.96\sqrt{0.232},-1.366+1.96\sqrt{0.232}\right)=\left(0.090,0.396\right) \).
In Chap. 6, we considered ways of examining the magnitude of the variance for multilevel logistic regression models. Firstly, the intraclass correlation coefficient can be approximated as
$$ {\rho}_{\mathrm{I}}=\frac{\sigma_{u0}^2}{\sigma_{u0}^2+3.29} $$
suggesting that 6.6% of the variability in whether a patient is referred to a physiotherapist can be attributed to differences between GPs. Secondly, we can calculate a median odds ratio (MOR) as
$$ \mathrm{MOR}=\exp \left(0.954\sqrt{\sigma_{u0}^2}\right) $$
This suggests that the median of all pairwise comparisons between GPs gives an odds ratio of 1.58. There is therefore considerable variation between GPs and we can go on to see how much of this variation can be explained by differences in patient populations. Firstly, add the two control variables, age and sex, as explanatory variables to the current model: PAGEGRP and PATSEX. As reference categories use women in the youngest age group. Then add the diagnoses contained in the variable DIAG, with the first category (symptoms or complaints of the neck) as the reference category. Now estimate this new model to obtain:
Note that MLwiN does not provide an estimate of −2∗loglikelihood for logistic regression models. This is because the estimation procedure used is not maximum likelihood but pseudo-likelihood. There has been a change in the estimate associated with the intercept β0. This is now an estimate of the log odds of referral by the average GP for a patient with the baseline characteristics, in this case a female aged 18–34. All of the covariate estimates are on the log odds scale and thus represent the change in log odds associated with a unit increase in each explanatory variable. By taking the exponential of these estimates, we can obtain estimates of the odds ratio (OR) of referral relative to an appropriate baseline group. The OR for referral for patients aged 35–44, relative to those aged 18–34, is exp(0.055) or 1.06; 95% confidence intervals are given by exp(0.055 ± 1.96 × 0.055) or (0.95, 1.18). The 95% confidence interval contains 1 suggesting that the odds of referral to a physiotherapist are not significantly different between the 18–34 and 35–44 age groups. The parameter estimates suggest a non-linear relationship with age and, relative to the younger patients, older patients (those aged 65 and over) are less likely to be referred to a physiotherapist. Relative to the youngest group, the OR of referral is 0.69 (0.59, 0.81) for those aged 65–74, 0.45 (0.37, 0.56) for those aged 75–84 and 0.31 (0.20, 0.49) for those aged 85 and over. Men are less likely to be referred than women, although this is of borderline significance (OR = 0.92; 95% C.I. 0.85, 1.00) .
After taking account of differences in patient populations and diagnoses, we see that the between GP variation has remained virtually unchanged. This is quite uncommon, as often a large part of the apparent variation between high level units is due to differences between individuals. It is, however, also possible for the variance between higher-level units to increase in multilevel models following the addition of variables at the lower (in this case patient) level. Snijders and Bosker (2012) provide an explanation as to why this is likely to happen in multilevel logistic regression models. In essence, since the variance in a binary outcome yij is constrained to be equal to πij(1 − πij) (see Chap. 6), the addition of a level 1 variable will tend to result in an increase in the level 2 variance so that the proportion of unexplained variation at level 1 will decrease.
We can now check for the effect of the other patient variables; add both PATEDU and PATINSUR to the current model, using the lowest educated and those with private insurance as the reference categories.
These two new covariates offer further insight into the pattern of referrals: there is a steady increase in the probability of referral with increasing educational level of those patients who present with complaints of the locomotive system. Relative to those with no education, those with higher education have more than twice the odds of being referred for physiotherapy (OR = 2.34; 95% C.I. 1.59, 3.45) . The type of insurance (and thus the way GPs are remunerated) does not significantly affect the chance of being referred; those with public insurance show a small and insignificant increase in the odds of referral (OR = 1.08; 95% C.I. 0.98, 1.19). Once again, the addition of these patient characteristics makes no difference to the variance between GPs.
Finally we add the five GP-level variables: GPEXPER, GPWORKLOAD, PRACTYPE (reference: single-handed practices), LOCATION (reference: rural) and GPPHYSIFR (reference: those GPs who do not have friends who are physiotherapists).
We have now built our final model. GPs working in joint practice and those in health centres (which usually include physiotherapists) refer slightly more patients than those in solo practice. The odds of referral are increased among GPs working in one of the big cities (OR = 1.85; 95% C.I. 1.23, 2.76) and GPs who have physiotherapists as friends or acquaintances are also more likely to refer patients (OR = 1.25; 95% C.I. 1.04, 1.50). Neither the experience of the GP nor their workload appears to influence the likelihood of referring patients to physiotherapy.
Altogether the GP characteristics have reduced the variation between GPs from 0.230 in the previous model to 0.196 (a reduction of about 15%). Although we would expect the introduction of variables at the GP level to decrease the variance between GPs, calculation of the intraclass correlation coefficient shows that 5.6% of the unexplained variation in patient referrals is attributable to differences between GPs. The median odds ratio for this model is 1.52.
A Note on Estimation
The current estimation procedure, first order MQL, is known to produce biased estimates (Goldstein and Rasbash 1996; Rodríguez and Goldman 1995) although it is a reasonable tool for model building. In practice, we recommend that you obtain the final results that you wish to report using second order PQL estimation. (There are alternative methods of estimation available in MLwiN including the parametric bootstrap and Markov chain Monte Carlo or MCMC. Some other packages also include the option of maximum likelihood estimates obtained using numerical integration.) The screenshot below replicates our final model using second order PQL.
These estimates differ markedly from those obtained using first order MQL. The level 2 variance estimated using second order PQL is considerably larger giving an intraclass correlation coefficient of 0.061 and a median odds ratio of 1.56. There are also changes in the fixed part of the model; for example, the estimate of the odds ratio associated with the practice being located in a big city (compared to rural practices) has increased to 1.90 (95% C.I. 1.25, 2.90).
As for a linear multilevel model, we can calculate residuals for multilevel logistic regression models. The residuals from our final model are shown below for the 158 GPs.
These residuals are now on a log odds scale; patients attending the GP with the largest residual (1.046) have an odds ratio of 2.85 (95% C.I. 1.94, 4.17) of being referred to chemotherapy relative to the average GP after taking patient and GP characteristics into account. Note the varying magnitude of the 95% confidence intervals around the GP residuals; those GPs about whom we have more data (i.e. those with more patients) have smaller confidence intervals.
Further Exercises
Explore the random slope variance for variables such as the insurance status of the patients. It was expected that privately insured patients would be referred less often. We did not find such an effect, but it might still be the case that some GPs are less likely to refer privately insured patients (depending on some measured or unmeasured GP variables).
Look at the GP residuals to check for outliers and explore the effects any outliers may have on the current model.
Goldstein H, Rasbash J (1996) Improved approximations for multilevel models with binary responses. J R Stat Soc Ser A 159:505–513
CrossRef Google Scholar
Lindenberg S (1996) Continuities in the theory of social production functions. In: Ganzeboom H, Lindenberg S (eds) Verklarende sociologie: opstellen voor Reinhard Wippler. Thesis Publishers, Amsterdam
McCullagh P, Nelder JA (1989) Generalized linear models, 2nd edn. Chapman & Hall/CRC, Boca Raton
Rodríguez G, Goldman N (1995) An assessment of estimation procedures for multilevel models with binary responses. J R Stat Soc Ser A 158:73–89
Skrondal A, Rabe-Hesketh S (2007) Redundant overdispersion parameters in multilevel models for categorical responses. J Educ Behav Stat 32:419–430
Snijders TAB, Bosker RJ (2012) Multilevel analysis: an introduction to basic and advanced multilevel modeling. Sage, Los Angeles
Uunk WJG, Groenewegen PP, Dekker J (1992) Verwijzingen van huisartsen naar fysiotherapeuten: een verklaring en analyse van verschillen tussen huisartsen. Mens en Maatschappij 67:389–411
Van der Velden K (1999) General practice at work: its contribution to epidemiology and health policy. NIVEL, PhD thesis Erasmus University, Utrecht
MRC/CSO Social and Public Health Sciences Unit, University of Glasgow, Glasgow, UK
Alastair H. Leyland
Netherlands Institute for Health Services Research (NIVEL), Utrecht, The Netherlands
Peter P. Groenewegen
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
© 2020 The Author(s)
Leyland, A.H., Groenewegen, P.P. (2020). Multilevel Logistic Regression Using MLwiN: Referrals to Physiotherapy. In: Multilevel Modelling for Public Health and Health Services Research. Springer, Cham. https://doi.org/10.1007/978-3-030-34801-4_12
DOI: https://doi.org/10.1007/978-3-030-34801-4_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-34799-4
Online ISBN: 978-3-030-34801-4
eBook Packages: MedicineMedicine (R0)
Share this chapter | CommonCrawl |
What exists in the Space between quarks
Apologies to all if this has been asked before, I searched but was unable to find one similar.
This is a question that has been bugging me for a while that i haven't really been able to find a suitable answer for.
I am aware that the space between an atoms nucleus and its electron cloud is teeming with virtual particles that allow the exchange of energy that give electrons an assigned energy level or 'shell' but what bugs me is about the space in between atoms.
What is in between atoms? is it classifiable as a vacuum where nothing at all exists?
I would find it hard to believe that atoms are pushed right up against each other at all times due to repulsive charges on the nucleus acting upon any other.
I accept that the gap is unbelievably small but on the scale of atoms and electrons, how small are we talking? Is there even a gap at all? Do we know what is in between or is it unknown? is it a similar process to the virtual particles between nucleus and electrons?
It is to my limited understanding that when particles "collide" there is no physical interaction, rather an exchange of energy through virtual photons. Is that what exists in all of these gaps? a constant exchange of virtual energy that acts as a consistent repulsion between all atoms?
atoms vacuum virtual-particles
Malcolm Brown
RhysWRhysW
$\begingroup$ Related: physics.stackexchange.com/q/34049/2451 and physics.stackexchange.com/q/7615/2451 $\endgroup$ – Qmechanic♦ Jan 11 '13 at 15:04
$\begingroup$ while my question is most certainly related to those (indeed my reading of both led me to finally ask) i don't think either covered my particular issue in enough depth. The second one lends itself to the inner machinations of transfer between nucleus and cloud, whereas mine is more focusing on exchange between atom and atom in a non vacuum environment, vacuum was merely (incorrectly perhaps) tagged as i theorized the area between particles on earth might well be a vacuum $\endgroup$ – RhysW Jan 11 '13 at 15:10
$\begingroup$ I should clarify what theorists mean by vacuum. Vacuum is full of fields! We say that it is not a vacuum if there are real particles (quantised excitations of a field carrying a definite energy). When there aren't any real particles present we call it vacuum, even though the fields are still there and the fields are still fluctuating (virtual particles). Vacuum is the state of lowest energy. $\endgroup$ – Michael Brown Jan 11 '13 at 15:35
The space between atoms depends very much on the medium you are talking about. In solids the typical distance between atoms is about the same as the size of the atoms themselves. In everyday gases at room temperature and pressure the distance between molecules is many times their size, and in deep space you can get densities as low as one proton per cubic centimetre!
You can get a rough idea of the average separation $\ell$ between atoms by using
$$ \ell \approx \left(\frac{m}{\rho}\right)^{1/3} $$
where $m$ is the mass of an atom and $\rho$ is the (mass) density of the material. This can be compared to the size of an atom, which for all elements is about the same at $\approx 10^{-10} - 10^{-9}\ \mathrm{m}$.
Space is full of fields like the electric and magnetic fields. You can think of certain types of "vibrations" of these fields as virtual particles, but the common view of modern physics is that the field picture is more fundamental. There are fields for all of the elementary particles, and the fields are constantly fluctuating due to quantum mechanics. You can think of temporary ripples in the fields as virtual particles, which are responsible for transmitting disturbances through space. Real particles are quantised excitations (or vibrations) in a field which propagate long distances.
Matt Strassler has gone to great lengths to explain this point of view in his popular articles.
Frederic Brünner brings up an important point about virtual particles. Physicists use an approximation called perturbation theory to do most of their calculations (because the calculations are really hard to do without making approximations). Virtual particles are a convenient way to organise these calculations, but you should not think of them as physical objects like real particles. In a sense virtual particles are misleading. What they really represent are rapid fluctuations of the fields (what I called ripples before). At large distances these fluctuations don't matter except when they average out to a smooth classical field. For the interactions between atoms, and even most of the interactions between electrons and nuclei, the classical field is all you need.
Michael BrownMichael Brown
$\begingroup$ @RhysW There is one field of each type. For example, there is one photon field (more commonly called the electro-magnetic field), one electron field, etc. that fill all of space; not a seperate field belonging to each particle. You can actually go a long way to understanding the physics without thinking about virtual particles. At very large distances (compared to $\approx10^{-13}m$) the net effect of all virtual photons is to create a classical electric field around charges (en.wikipedia.org/wiki/Coulomb%27s_law). Like charges repel and opposite charges attract. cont.-- $\endgroup$ – Michael Brown Jan 11 '13 at 15:21
$\begingroup$ cont. The fields produced by different particles simply add together, though since electrons and protons have opposite charges their fields have opposite signs and tend to cancel out. As a result the electric fields between atoms are very weak compared to the fields inside atoms, and the net force between atoms is a very weak residual of the original electrical force. Roughly speaking, atoms which are far apart attract each other (weakly) but atoms which are very close repel very strongly. This what enables many atoms to come together to form organised structures. $\endgroup$ – Michael Brown Jan 11 '13 at 15:24
$\begingroup$ cont. To answer your question (finally): You can visualise irregular materials that way, as long as you remember that any "membrane" is imaginary and arbitrarily defined. The field of an atom doesn't stop; it goes on forever, gradually weakening to the point of being immesurable. Atoms in certain other materials - crystals - form a regular repeating arrangement. The atoms in a crystal only wiggle around their regularly spaced equilibrium positions, without moving past each other. The Feynman Lectures on Physics are great reading. The man loved to talk about how everything is atoms jiggling. :) $\endgroup$ – Michael Brown Jan 11 '13 at 15:30
$\begingroup$ I have to disagree with the explanation regarding the virtual particles. They are mathematical artifacts of a perturbative expansion within the framework of QFT, and as such shouldn't be used as a physical explanation for phenomena within an atom. Calling them "ripples" doesn't change that fact. $\endgroup$ – Frederic Brünner Jan 11 '13 at 15:34
$\begingroup$ @RhysW Frederic brings up an important point about virtual particles. Physicists use an approximation called perturbation theory to do most of their calculations (because the calculations are really hard to do without making approximations). Virtual particles are a convenient way to organise these calculations, but you should not think of them as physical objects like real particles. What they really represent are rapid fluctuations of the fields (what I called ripples before). At large distances these fluctuations don't matter except when they average out to a smooth classical field. $\endgroup$ – Michael Brown Jan 11 '13 at 15:50
I have thought about this question for quite some time. The theory that I highly believe in is about string theory. The theory states that the space in between subatomic particles are almost massless strings from type 2a string theory. These string have so little mass (roughly .83 x 10 to the -5 GeV) that they are undetectable. Of course this theory relies solely on string theory. Thank you for your time.
dmckee --- ex-moderator kitten♦
Jack MoodyJack Moody
$\begingroup$ The FAQ is very down on "Pitches for your own personal theories or work". Also, please do not sign your posts--your usercard is appended automatically. $\endgroup$ – dmckee --- ex-moderator kitten♦ Mar 8 '13 at 17:56
$\begingroup$ @RhysW I have always thought that the strings are so tightly interwoven that there is minimal if any space in between these so called strings. But it is interesting to think about it. Back to the theory, the space in between these atoms or quarks are these tightly interwoven strings. Once again they have such little mass that they are undetectable. $\endgroup$ – Jack Moody Mar 8 '13 at 18:40
$\begingroup$ This is a complete misunderstanding of string theory. $\endgroup$ – Michael Brown Mar 9 '13 at 0:14
$\begingroup$ there are many understandings of string theory due to the many types. this is just one of the types. $\endgroup$ – Jack Moody Mar 9 '13 at 16:39
Not the answer you're looking for? Browse other questions tagged atoms vacuum virtual-particles or ask your own question.
What is in the space between a nucleus of an atom and its electrons?
What really goes on in a vacuum?
If empty space is not really empty, what does the space between an atomic nucleus and its electrons consist of?
How is the EM force exchanged over long distances?
With respect to the Casimir effect, why can't the wavelengths of the virtual particles between two plates just "pass through" the plates themselves?
Why does an electron react differently to a virtual photon in the interaction between two electrons and between an electron and a positron?
What explains the variety of particles we see? (Dealing with particulate nature of matter)
Virtual photon exchange instantaneously | CommonCrawl |
Wind Energy Potential
Offset Carbon
Value of Power
Electricity Transmission
Levelized Cost of Energy
Capital Cost Model
Energy Valuation
Turbine Properties
Wind Time Series data
Turbine Parameters
Offshore Wind Energy Production¶
Offshore wind energy is gaining interest worldwide, with 5,400 megawatts (MW) installed as of January 2013 and a growth rate around 25% per year (GWEC, 2013). Consistently higher offshore winds and proximity to coastal load centers serve as two of the major reasons wind energy developers are looking offshore. The goal of the InVEST offshore wind energy model is to provide spatial maps of energy resource availability, energy generation potential, and (optionally) energy generation value to allow users to evaluate siting decisions, use tradeoffs, and an array of other marine spatial planning questions. The model was developed to allow maximum flexibility for the user, in that it can be run with default data and parameters, but it can just as easily be updated with new turbine and foundation information, grid connection information, and parameter values that fit the user's context. Model outputs include wind power potential, energy generation, offset carbon emissions, net present value, and levelized cost of energy, all given at the farm level.
Peer-reviewed references for this model are http://dx.doi.org/10.1016/j.aquaculture.2014.10.035 for the financial portion of the model and http://dx.doi.org/10.1016/j.marpol.2015.09.024 for the physical portion.
This wind energy model provides an easily replicable interface to assess the viability of wind energy in your region under different farm design scenarios. The outputs are raster maps, whose point values represent the aggregate value of a farm centered at that point. This allows for detailed analysis of siting choices at a fine scale, though it comes at the cost of assuming that conditions are sufficiently symmetric around the center point so that the center point represents the median conditions of all turbines in the farm. Since the user can select the number of turbines for the farm, and the raster maps do not give an indication of farm size, the model also outputs a representative polyline polygon at a randomly selected wind data point that indicates the size of the farm.
To run the model, you are asked to supply information into the graphical user interface. This includes information about wind energy conditions, the type of turbine, number of turbines, the area of interest, etc. To make the model easier to run, it includes default data in .csv tables on two common offshore wind turbines: 3.6 MW and 5.0 MW. We also include two wind speed datasets: a global dataset and a dataset covering the Northwest Atlantic. Finally, it includes a table of less commonly changed default values used to parameterize various parts of the model, called the "Global Wind Energy Parameters" file. These .csv files are required inputs, and may be modified if alternate values are desired by directly editing the files using a text editor or Microsoft Excel. When modifying these files, it is recommended that the user make a copy of the default .csv file so as not to lose the original default values.
Wind Energy Potential¶
The wind energy model estimates wind power density (wind power potential) to identify offshore areas with high energy potential. The wind power density \(PD (Wm^{-2}\)) at a certain location can be approximated as a function of wind statistics (Elliott et al., 1986)
(47)¶\[\frac{1}{2}\rho\sum^c_{j=1}f(V_j)V_j^3\]
where, \(\rho\) is mean air density (\(kg\,m^{-3}\)), \(j\) is the index of wind speed class, \(c\) is the number of wind speed classes, \(V_j\) is wind speed of the jth class (\(ms^{-1}\)), and \(f(V_j)\) is probability density function of \(V_j\). Two probability distributions are commonly used in wind data analysis: 1) the Rayleigh and 2) the Weibull distributions (Manwell et al. 2009). The Weibull distribution can better represent a wider variety of wind regimes (Celik 2003; Manwell et al. 2009), and is given as
(48)¶\[f(V_j) = \frac{k}{\lambda}\left(\frac{V_j}{\lambda}\right)^{k-1}e^{-\left(\frac{V_j}{\lambda}\right)^k}\]
where, \(k\) and \(\lambda\) are the shape and scale factors, respectively. The shape factor, \(k\), determines the shape of the Weibull probability density function (Fig. 10). The probability density function shows a sharper peak as \(k\) increases, indicating that there are consistent wind speeds around the mean wind speed. On the other hand, the function becomes smoother as k decreases, indicating more variation in wind speed and more frequent low and high wind speeds. The model requires wind speed inputs to be in terms of the estimated Weibull parameters, versus taking in raw wind speed data. For our sample data we used a MATLAB function, wblfit , to estimate \(k\) and \(\lambda\) at the wind speed reference height (height at which wind speeds were observed or estimated), which returns the maximum likelihood estimates of the parameters of the Weibull distribution given the values in the wind time series data. This was done for each wind speed observation point. The model For more details of wblfit function, please consult https://kr.mathworks.com/help/stats/wblfit.html. This can also be accomplished in R (see here for tutorial: https://stats.stackexchange.com/questions/60511/weibull-distribution-parameters-k-and-c-for-wind-speed-data).
Fig. 10 Example of Weibull probability density function with various shape factors (\(k\)), where mean wind velocity = \(6 ms^{-1}\) (Manwell et al., 2009).¶
Wind power density is calculated at the hub height \(Z\) (m) of a wind turbine (Fig. 10), which means all variables in (47) and (48) need to be converted into the appropriate value at hub height. Mean air density \(\rho\) was estimated as \(\rho=1.225-(1.194\cdot 10^{-4})Z\), which approximates the U.S. Standard Atmosphere profile for air density (National Oceanic and Atmospheric Administration, 1976). We applied the wind profile power law to estimate wind speed (\(V\)) at hub height \(Z\) (Elliott et al., 1986).
\[\frac{V}{V_r} = \left(\frac{Z}{Z_r}\right)^\alpha\]
where \(V\) is wind speed (\(ms^{-1}\)) at the hub height \(Z\) (m) of a wind turbine, and \(V_{r}\) is wind speed (\(ms^{-1}\)) at the reference height \(Z_r\) (m) where wind data are obtained. \(\alpha\) is power law exponent, which is an empirically derived coefficient and varies with the stability of the atmosphere. For neutral stability condition, α is approximately 1/7 (0.143) for land surfaces, which is widely applicable to adjust wind speed on land (Elliott et al., 1986). The power law exponent has different value on ocean surfaces. Hsu et al. (1994) found that \(\alpha = 0.11\pm0.03\) for ocean surface under near-neutral atmospheric stability conditions. The wind energy model uses \(\alpha = 0.11\) as a default value to adjust wind speed on the ocean surface. The wind profile of the atmospheric boundary layer can be approximated more accurately using the log wind profile equation that accounts for surface roughness and atmospheric stability (Manwell et al. 2009).
Fig. 11 A schematic diagram of a wind turbine (https://www.daviddarling.info/encyclopedia/H/AE_hub_height.html)¶
Wind power density (PD) outputs provide suitability information for a wind energy development project in terms of wind resource. Pacific Northwest Laboratories categorized wind power density and wind speed into seven classes based on United States wind atlas (Fig. 12) (Manwell et al. 2009). Areas designated as class 4 or greater are considered to be suitable for most wind energy development. Class 3 areas are suitable for wind energy development if large turbines are used. Class 1 and 2 are rarely considered as suitable areas for wind energy development in terms of energy potential. Wind resources vary considerably over space and a more detailed categorization of wind power density for five topographical conditions was developed in Europe, which includes sheltered terrain, open plain, sea coast, open sea, hills and ridges (Fig. 12) (Manwell et al. 2009). The wind resource classification for sea coast and open sea may provide better information on the suitability of offshore wind energy projects.
Fig. 12 Wind power density (PD) and wind speed classes based on European wind atlas (Modified from Table 2.6 in Manwell et al. 2009).¶
Energy Generation¶
The amount of energy harvestable from a wind turbine in a particular location depends on the characteristics of the wind turbine as well as wind conditions (Pallabazzer 2003; Jafarian & Ranjbar 2010). The wind energy model quantifies the harvestable energy based on the output power curve of a wind turbine and wind speed statistics. Fig. 13 shows an output power curve of a wind turbine (pitch control type). The wind turbine starts to generate power at the cut-in wind speed (\(V_cin\)). The output power increases up to the rated power (Prate) as wind speed increases to the rated wind speed (\(V_rate\)). The wind turbine keeps producing the maximum power (i.e., Prate) until wind speed reaches the cut-out wind speed (\(V_cout\)). If wind speed increases beyond the cut-out wind speed, the wind turbine stops generating power for safety purposes. Currently, more than 74 offshore wind farms are operating globally and technology specific information of the wind turbine at each wind farm are available at LORC Knowledge (2012).
Fig. 13 Output power (P) curve of a wind turbine (pitch control type) as a function of wind speed (V) (Modified from Fig.1 in Pallabazzer 2003)¶
To provide flexibility for a variety of different turbine types without requiring the user to manually enter a power curve, we estimate the output power \(P\) (kW) of a wind turbine using a polynomial modeling approach (Jafarian & Ranjbar 2010):
\[\begin{split}P(V) = \left\{\begin{array}{ll} 0 & V < V_{cin} \mathrm{\ or\ } V>V_{cout}\\ P_{rate} & V_{rate} < V < V_{cout}\\ (V^m - V^m_{in})/(V^m_{rate} - V^m_{in}) & V_{cin} \leq V \leq V_{rate}\\ \end{array}\right.\end{split}\]
where, \(m\) is an exponent of the output power curve (usually 1 or 2). Using this approach, the energy output, O (MWh), generated by a wind turbine can be calculated using
\[O = nday\cdot \frac{\rho}{\rho_0} P_{rate}\left(\int^{V_rate}_{V_{cin}} \frac{V^m - V^m_{cin}}{V^m_r-V^m_{cin}} f(V)dV + \int^{V_{cout}}_{V_{rate}} f(V) dV\right)(1- lossrate)\]
where, \(nday\) is the number of days for energy output (e.g. \(nday = 365\) days for annual energy output), \(\rho_0\) is air density of standard atmosphere (e.g. \(1.225 kg m^{-3}\) for U.S. standard atmosphere air density at sea level), and \(lossrate\) is a decimal value which represents energy losses due to a combination of downtime, power conversion efficiency, and electrical grid losses (default value is .05). All of these parameters are included in the global parameters .csv file and may be changed by the user from their defaults. Total farm energy output is equal to the individual turbine output multiplied by the number of turbines, \(n\),
\[E = nO\]
The InVEST software comes with default technical and financial information about two common turbine sizes, the 3.6 MW and 5.0 MW turbines. The information for each turbine is given in .csv files in the Input directory and is a required input into the model. The user can use the default data, edit a file, or create a new file to assess different turbine sizes or update specific characteristics. The files must retain the same format - only parameter values may safely be modified. It is recommended to save edits as new .csv files rather than overwriting the default data.
Offset Carbon¶
Since wind turbines create no greenhouse gasses when generating energy, the user may be interested in assessing the amount of carbon dioxide emissions avoided by building a wind farm versus a conventional energy generation plant. To translate carbon-free wind power to a representative amount of annual avoided \(\mathrm{CO}_2\) emissions, we use the following default conversion factor: \(6.8956\cdot 10 ^{-4} \mathrm{metric\ tons\ CO}_2/kWh\)
This is obtained from the EPA (https://www.epa.gov/energy/greenhouse-gases-equivalencies-calculator-calculations-and-references) and is based on 2007 data. See their website for limitations of this approach. The parameter is representative of the carbon emitted by the energy portfolio of the United States and may not be appropriate for your context. This value is changeable in the global parameters .csv file.
Value of Power¶
The value of wind power is measured as the discounted pre-tax net revenue from power generation that would accrue to a wind power developer/operator over the expected lifetime of a wind farm. The Net Present Value (https://en.wikipedia.org/wiki/Net_present_value) (NPV) of energy for a given wind farm is:
\[NPV = \sum^T_{t=1}(R_t-C_t)(1+i)^{-t}\]
Where \(R_t\) is the gross revenue collected in year \(t\), and \(C_t\) are the aggregate costs in year \(t\). \(T\) represents the expected lifetime of the facility, and \(i\) represents the discount rate (https://en.wikipedia.org/wiki/Discount_rate) or weighted average cost of capital (WACC, https://en.wikipedia.org/wiki/Weighted_average_cost_of_capital). Both \(T\) and \(i\) can be changed by the user; \(T\) can be found in the global parameters .csv file and \(i\) is entered in the valuation section of the user interface. For projects that are financed by both debt and equity and where there is a significant amount of risk associated with establishing and maintaining the projected stream of revenues, WACC is a more appropriate method for establishing the time value of money. As this parameter enters into the calculation in the same way as a discount rate would, if you prefer you can input an appropriate discount rate and interpret the results accordingly. We do not supply a default value, but Levitt et al. (2011) suggest a WACC value of .116 based on a comprehensive analysis of industry specific discount rates and different debt/equity structures in Europe and the U.S. This is higher than discount rates typically used elsewhere, such as in standard cost benefit analysis, so you may find your application justifies a different rate.
Annual gross revenue is calculated by multiplying the price per kWh, \(s\), by the annual amount of kWh supplied to the grid by a wind farm, \(E_t\), thus \(R_t=sE_t\). It is assumed that energy is not collected in the first year during the construction phase.
Costs can be separated into one-time capital costs and ongoing operations and management costs. During the construction phase, expenditures are made on turbines, foundations, electrical transmission equipment, and other miscellaneous costs associated with development, procurement, and engineering. At the end of the farms usable lifetime, the firm must remove their equipment. The default information supplied is based on an extensive review of peer-reviewed publications, industry reports, and press releases. This information is summarized below.
Turbines¶
Turbines and foundations are modeled with unit costs. We have supplied cost data on 3.6 MW and 5.0 MW class turbines as well as monopile and jacketed foundations, though you may enter your own turbine- or foundation-specific information. Note all default costs below are given in 2012 US dollars. Assuming one foundation per turbine, the total cost of turbines and foundations is simply the number of wind turbines multiplied by the unit cost. Table 1 gives a summary of existing turbine costs.
# of Turbines
Total MW
Unit Cost ($mil)
Riffgat
Sheringham Shoal
Greater Gabbard
Butendiek
London Array
Amrumbank
Global Tech 1
Borkum 2
Table 1: Turbine costs.
Foundations¶
This model can flexibly include valuation for both foundation-based and floating turbine designs. This is accomplished by letting the user enter the appropriate unit cost information for their farm design. Outputs are constrained by user-editable depth and distance parameters, so it is important to adjust these to reflect the appropriate technological constraints of your design choice. Foundation-based turbines have conventionally been limited to a depth of around 60 meters.
Foundation cost information is relatively difficult to come by. Monopile foundations are the most common foundation type and are typically mated to 3.6 MW turbines. Ramboll, a major foundation manufacturer, estimates that monopile foundations with a 3.6 MW turbine are $2 million per foundation. Monopile costs at Burbo and Rhyl Flats in the UK were given in press releases as $1.9 million $2.2 million respectively. Jacketed foundations are more robust than monopile foundations and are typically used with 5.0 MW turbines and/or in deep water. Two press releases for Nordsee Ost (Germany) and Ormonde (UK) put the unit costs for this type of foundation at $2.74 million and $2.43 million respectively. A 2012 release by the European Energy Programme for Recovery put the cost of deepwater (40 meters) gravity foundations at Global Tech 1 (Germany) as $6.65 million per foundation.
All foundations should feature an increasing cost with depth as material costs will necessarily be higher; however, this is not captured in this model currently due to the paucity of project cost data to estimate such a relationship. Jacquemin et al. (2011) used field data to estimate foundation weight as a function of water depth; however the data and functions are not given making it impossible to replicate their work. Nonetheless, this source does provide a means to approximate different foundation technology costs including floating foundation technology. Samoteskul et al. (2014) demonstrate how the data from Jacquemin et al. (2011) can be used in this way.
Electricity Transmission¶
Electricity transmission equipment is much harder to model at the component level because the optimal transmission system design varies considerably with local conditions and wind farm design. Depending on the size of the farm and its distance from shore, offshore platforms with voltage transformers, converters, and switchgear may be needed. Additionally, there is a critical point where a wind farm's distance from the grid requires a switch from alternating current (AC) power to direct current (DC) power to overcome line losses which reduce the amount of energy delivered. Given design variation across different contexts, we utilized a top-down modeling approach for transmission costs to allow the model to be used broadly without the need for exhaustive system modeling and unit cost information. We collected information about electricity transmission costs (including installation) from 20 wind farms and used it to estimate a relationship between total costs and farm characteristics. This data was collected from the U.K. Ofgem tender process (https://www.ofgem.gov.uk/electricity/transmission-networks/offshore-transmission) and is shown in Table 2.
Cost (2012 $Million)
Depth (m)
Land Cable (km)
Sea Cable (km)
Tot Cable (km)
Robin Rigg
Gunfleet Sands 1 & 2
Ormonde
Thanet
Walney 1
Gwynt y Mor
Lincs
London Array Phase 1
Nordergrunde
Dolwin 1
Helwin 2
Sylwin 1
Borwin 2
Table 2: Offshore energy transmission infrastructure.
Using an ordinary least squares regression, we estimated the following equation that relates total transmission costs to farm capacity and total transmission cable distance:
\[TransCost = \beta_0 MW + \beta_1 TotCable + \epsilon\]
To capture the effect of transmission losses due to resistance, we estimated this separately for each current type (AC and DC). Since our data suggest a critical threshold of greater than 54.8km for DC transmission, we adopt 60km as the transition point. This is also consistent with published figures regarding the cost effectiveness of transitioning from AC to DC transmission (Carbon Trust, 2008; UMaine, 2011); see Table 3
Costs if \(\leq\) 60km (AC)
Costs if > 60km (DC)
.81***
1.09**
standard error
Cables (km)
Adj \(R^2\)
Table 3, AC DC transmission costs. *p<.10, **p<.05, ***p<.01
These results provide a predictive model of transmission costs as a function of current type, total farm capacity in MW, and the total length of transmission cable in km. To calculate the total length of transmission cable from any given offshore location, the model requires some information about the onshore grid. The provided options are meant to provide the user flexibility based on data availability and common analysis questions. The user has two options:
Create a .csv table that includes latitude and longitude details for all grid connection points in the area of interest
Use a fixed parameter to model grid location
The table option gives the user the ability to indicate both landing points on the coastline and grid connection points. For each potential wind farm site (each ocean pixel that fits the other constraints of the model and is in the AOI), the model identifies the closest specified land point and calculates the straight-line distance to that point. It then finds the closest grid connection point and calculates the straight-line distance to that point. Summing these two distances yields the total length of the transmission cables used in the calculation for transmission costs in Table 3. The user can optionally omit landing points from the table and only include grid points: in this case the model simply calculates total length of the transmission cable as the straightline distance from each potential wind farm location to the nearest grid point.
The fixed parameter option specifies a mean distance inland along the entire coast that represents the expected distance that overland cables may have to travel to reach a grid connection. Since grid connection points for large farms are very opportunistic and represent a relatively small portion of capital costs, it is not unrealistic to model grid connection this way in the absence of a detailed grid connection scheme. The default parameter included, 5.5 km, is the mean overland cable distance from the UK from the transmission infrastructure table above.
Above and beyond the cost of sending the energy to shore, wind farms also require cables which connect turbines to each other, called array cables. We estimated a simple linear relationship between array cables and the number of turbines based on the data given below:
km of cable
Total Cost ($mil)
Nordsee Ost
Table 4. Array cabling
The data above suggest that .91km of cable is required per turbine at a cost of $260,000 per km. This establishes a relationship of array cable to wind turbines which can retrieve the total cost of array cable based only on the number of turbines in the farm.
Other Costs¶
There are a variety of additional costs associated with the construction phase, such as those for development, engineering, procurement, and royalties. AWS Truewind (2010) estimate these costs to amount to 2% of total capital expenditures; Blanco (2009) indicates it could be as high as 8%. We adopt their method of using a ratio of capital costs for calculating these costs and use the mean value of 5% as the default .
Installation of foundations, turbines, and transmission gear (cables and substations) comprises its own cost category. Kaiser and Snyder (2012) take a comprehensive view of installation costs and find that installation costs make up approximately 20% of capital expenditures in European offshore wind farms. Accordingly, this model treats installation costs as a fixed percentage of total capital costs and uses the default value suggested by Kaiser and Snyder (2012).
Decommissioning the facility at the end of its useful life (\(t=T\)) enters into the model in a similar way as installation costs, in that it is a fixed fraction of capital expenditures. Kaiser and Snyder (2012) put this one-time cost at 2.6% to 3.7% of initial expenditures (net of scrap value) for the Cape Wind farm using a sophisticated decommissioning model. The default value used in this model is 3.7%.
Most of the costs of an offshore wind energy farm are related to the initial capital costs; however, there are ongoing costs related to maintenance and operations (O&M) as well. Boccard (2010) uses a methodology consistent with the rest of our modeling by calculating annual O&M cost as a % of original capital costs, and puts the costs somewhere between 3 and 3.5. The default value used in this model is 3.5%, and can be changed along with all the other costs in this section by editing the global parameters .csv file.
Energy Prices¶
This model is designed to accept a fixed unit price for a kilowatt hour (kWh) of energy over the lifetime of the wind farm, OR a .csv table where the price/kWh can be specified for each year over the lifetime of the wind farm. In some locations, wind farm operators receive a subsidized rate known as a feed-in tariff which guarantees them a set price for their energy over some time horizon. In other locations, wind farm operators must negotiate with energy providers and public utility commissions to secure a power purchase agreement. These are contracts that specify a unit price for energy delivered and may feature variable rates over time, which makes the flexibility of the price table essential.
Levelized Cost of Energy¶
The levelized cost of energy (https://en.wikipedia.org/wiki/Cost_of_electricity_by_source) (LCOE) is the unit price that would need to be received for energy that would set the present value of the project equal to zero. As such, it gives the lowest price/kWh that a wind farm developer could receive before they considered a project not worthwhile. The output given by the model is in terms of currency/kWh and is calculated as:
\[LCOE = \frac{\sum^T_{t=1}\frac{O\&M\cdot CAPEX}{(1+i)^t}+\frac{D\cdot CAPEX}{(1+i)^T}+CAPEX}{\sum^T_{t=1}\frac{E_t}{(1+i)^t}}\]
Where \(CAPEX\) is the initial capital expenditures, \(O\&M\) is the operations and management parameter, \(D\) is the decommissioning parameter, \(E_t\) is the annual energy produced in kWh, \(i\) is the discount or WACC rate, and \(t\) is the annual time step, where \(t=\{1\ldots T\}\).
Validation¶
Capital Cost Model¶
Since capital expenditures represent the largest proportion of costs, and much of the ancillary costs are fixed fractions of capital costs, it is critically important to validate our model against stated offshore wind farm costs worldwide. To do so, we collected data from https://www.4coffshore.com/ and https://www.lorc.dk/offshore-wind-farms-map/statistics on stated capital costs and designs for wind farms that are in construction or currently operational. We constrained the data collection to only those employing 3.6 MW and 5.0 MW turbines, for which we have provided default data with the InVEST model. Stated capital costs gathered from 4Coffshore were inflated to 2012 $US using their supplied financial close information as the basis for when the cost estimate was collected. To generate predictions, the design of each farm was input into the InVEST model using appropriate default cost parameters for all components. Most farms have their own electrical transmission equipment, though some deepwater farms are beginning to used centralized offshore substations that aggregate energy for transport from multiple farms. To predict electrical transmission costs for these farms, it was first necessary to estimate the cost of the entire offshore substation and then attribute a prorated capital cost to each farm based on their relative contribution to exported energy capacity. For example, an offshore substation with a 800 MW export capacity that is connected to Farm A (200 MW) and Farm B (600 MW) would contribute 25% of capital costs to Farm A and 75% to Farm B. The results of our validation show a very strong correlation between predictions and stated capital costs for 3.6 MW and 5.0 MW turbines using the default data (see Figure 5.6).
Fig. 14 Predicted capital costs versus stated capital costs.¶
Since this model was released in early 2013, it has been tested against other modeling approaches. They are noted below for reference:
The InVEST model was compared alongside model estimates from the National Renewable Energy Laboratory (NREL) and a consulting firm in a report out of the University of California, Santa Barbara, that measured the levelized cost of wind energy in Bermuda. InVEST was within 3% of the NREL estimate and 12% of the estimate made by the consulting firm. http://trapdoor.bren.ucsb.edu/research/2014Group_Projects/documents/BermudaWind_Final_Report_2014-05-07.pdf
Energy Production¶
The quality of wind input data determines the accuracy of model results. So, users need to understand the quality of wind input data for proper interpretation of the model results. The default wind input data are more appropriate for global and regional scale applications at 4 or 60 minutes spatial resolution.
Harvested wind energy indicates the averaged energy output for a given period based on the output power curve of a wind turbine. Users may want to consider additional technology-specific information, such as device availability, power conversion efficiency, and directional factors by applying adjustment factors to the harvested energy output.
Energy Valuation¶
As the validation section demonstrates, the model and the default data reliably predict capital costs using the supplied inputs. Revenues are linked to energy production and a user-entered price. More reliable cost projections over space could likely be attained by:
Creating a foundation cost function that accounts for higher costs in deeper waters
Having installation costs vary as a function of bottom geology
These are features that are being explored for subsequent model updates conditional on data availability.
The model is amenable to producing valuation outputs for floating turbines, but was not designed specifically for this task. To produce outputs, the user needs to input reasonable values for depth and distance constraints as well as "foundation" costs equal to the unit cost of the aggregate equipment needed to float a turbine. The electrical transmission cost model was derived using technologies that are suitable to roughly 60 meters depth and 200 kilometers distance from shore and will likely produce less accurate cost projections outside of those bounds.
Data Needs¶
Workspace Select a folder to be used as your workspace. If the folder you select does not exist, a new one will be created. This folder will contain the rasters produced by this model. If datasets already exist in this folder, they will be overwritten. The output will be contained in an folder named output inside the workspace directory.
Results Suffix (Optional) A string that will be added to the end of the output file paths.
Wind Data Points A .csv file that represents the wind input data (Weibull parameters). The column headers are: LONG , LATI , LAM , K , REF . LAM is the Weibull scale factor at the reference hub height. K is the Weibull shape factor. REF is the reference height at which wind speed data was collected and LAM was estimated at. Sample data files are found in the WindEnergyinput direction inside the InVEST installation directory.
Global Data: GLobal_EEZ_WEBPAR_90pct_100ms.csv
East Coast of the US: ECNA_EEZ_WEBPAR_Aug27_2012.csv
Area Of Interest (Optional) An optional polygon shapefile that defines the area of interest. The AOI must be projected with linear units equal to meters. If the AOI is provided it will clip and project the outputs to that of the AOI. The distance inputs are dependent on the AOI and will only be accessible if the AOI is selected. If the AOI is selected and the distance parameters are selected, then the AOI should also cover a portion of the land polygon to calculate distances correctly. An AOI is required for valuation.
Bathymetric DEM A raster dataset for the elevation values in meters of the area of interest. The DEM should cover at least the entire span of the area of interest and if no AOI is provided then the default global DEM should be used.
Land Polygon for Distance Calculation A polygon shapefile that represents the land and coastline that is of interest. For this input to be selectable the AOI must be selected. The AOI should also cover a portion of this land polygon to properly calculate distances. This coastal polygon, and the area covered by the AOI, form the basis for distance calculations for wind farm electrical transmission. This input is required for masking by distance values and for valuation.
Global Wind Energy Parameters A .csv file that holds wind energy model parameters for both the biophysical and valuation modules. These parameters are defaulted to values that are reviewed in the The Model section of this guide. We recommend careful consideration before changing these values. Note: The default monetary values for these parameters (see Table 3) are specified in U.S. dollars. If you are using a different currency for the other valuation parameters to this model (Cost of the Foundation Type etc), you must also modify the Global Wind Energy Parameters using an appropriate conversion rate.
Turbine Properties¶
Turbine Type A .csv file that contains parameters corresponding to a specific turbine type. The InVEST package comes with two turbine model options, 3.6 MW and 5.0 MW. You may create a new turbine class (or modifying existing classes) by using the existing file format conventions and filling in your own parameters. It is recommended that you do not overwrite the existing default .csv files. These files are found in the WindEnergyinput direction inside the InVEST installation directory and named
3.6 MW: 3_6_turbine.csv
Number Of Turbines An integer value indicating the number of wind turbines per wind farm.
Minimum Depth for Offshore Wind Farm Installation (m) A floating point value in meters for the minimum depth of the offshore wind farm installation.
Maximum Depth for Offshore Wind Farm Installation (m) A floating point value in meters for the maximum depth of the offshore wind farm installation.
Minimum Distance for Offshore Wind Farm Installation (m) A floating point value in meters that represents the minimum distance from shore for offshore wind farm installation. Required for valuation.
Maximum Distance for Offshore Wind Farm Installation (m) A floating point value in meters that represents the maximum distance from shore for offshore wind farm installation. Required for valuation.
Valuation¶
Cost of the Foundation Type (millions of currency) A floating point number for the unit cost of the foundation type (in millions of your chosen currency). The cost of a foundation will depend on the type of foundation selected, which itself depends on a variety of factors including depth and turbine choice. Any currency may be used, as long as it is consistent across the different valuation inputs.
Discount Rate The discount rate reflects preferences for immediate benefits over future benefits. Enter in decimal form (Ex: 1% as 0.01, 100% as 1.0).
Grid Connection Points An optional .csv file with grid and land points to determine energy transmission cable distances from. Each point location is represented as a single row with columns being ID , TYPE , LATI , and LONG . The LATI and LONG columns indicate the coordinates for the point. The TYPE column relates to whether it is a land or grid point. The ID column is a simple unique integer. The shortest distance between respective points is used for calculations. An example:
Average Shore to Grid Distance (km) A number in kilometers that is only used if grid points are NOT used in valuation. When running valuation using the land polygon to compute distances, the model uses an average distance to the onshore grid from coastal cable landing points instead of specific grid connection points.
Use Price Table If selected, then the model uses a price table to value energy produced over the lifetime of the farm. If not, the model uses a constant price/kWh (with potential inflation).
Wind Energy Price Table A .csv file that indicates the price received for each annual time period over the life of the wind farm. See sample price table "price_table_example.csv" for proper formatting. Any currency may be used, as long as it is consistent across the different valuation inputs.
Price of Energy per Kilowatt Hour (currency/kWh) The price of energy per kilowatt hour. This is only available if "Use Price Table" is unchecked. Any currency may be used, as long as it is consistent across the different valuation inputs.
Annual Rate of Change in the Price of Wind Energy This represents the inflation rate for the price of wind energy and refers to the price entered directly above. Enter in decimal form (Ex: 1% as 0.01, 100% as 1.0). This is only available if "Use Price Table" is unchecked.
Interpreting Results¶
All output resolutions are based on the resolution of the supplied digital elevation model raster. When the resolution of the DEM exceeds the resolution of the wind data layers, pixel values are determined by using bilinear interpolation.
carbon_emissions_tons.tif : a GeoTIFF raster file that represents tons of offset carbon emissions for a farm built centered on a pixel per year.
density_W_per_m2.tif : a GeoTIFF raster file that represents power density (W/m^2) centered on a pixel.
example_size_and_orientation_of_a_possible_wind_farm.shp : an ESRI shapefile that represents the outer boundary of a sample windfarm. The position of this polygon is random and is meant to give the user a sense of scale of the potential wind farm.
harvested_energy_MWhr_per_yr.tif : a GeoTIFF raster file that represents the annual harvested energy from a farm centered on that pixel.
levelized_cost_price_per_kWh.tif : a GeoTIFF raster file that represents the unit price of energy that would be required to set the present value of the farm centered at that pixel equal to zero. Values are given in the unit of currency used as model input.
npv_US_millions.tif : a GeoTIFF raster file that represents the net present value of a farm centered on that pixel. Values are millions of the unit of currency used as model input.
wind_energy_points.shp : an ESRI Shapefile that summarizes the above outputs for each point…
Data Sources¶
Wind Time Series data¶
NOAA's National Weather Service provides hindcast reanalysis results for wind time series; https://polar.ncep.noaa.gov/. The spatial resolution of the model results ranges from 4 to 60 minutes depending on the global and regional grid systems. The model outputs have been saved at 3-hour interval from 1999 to the present. The model results have been validated with ocean buoy data at many locations and provide good quality wind information.
Turbine Parameters¶
LORC provides the parameter information of offshore wind turbines that are currently operating in the world. https://www.lorc.dk/offshore-wind-farms-map/list?sortby=InstalledCapacity&sortby2=&sortorder=desc
Data sources are largely cited above, except for figures that were derived from press releases. Press releases were found by an exhaustive Google keyword search on "offshore wind energy" contract and several variants of that theme. All costs were recorded and inflated in their original currency and exchanged to $US at the spot rate on March 30th, 2012.
This file (https://www.dropbox.com/s/p4l36pbanl334c2/Wind_Sources.zip?dl=0) contains an archive of the sources sited for costs and a spreadsheet that links each cost figure to the relevant press release, conference proceeding, etc.
AWS Truewind. 2010. New York's Offshore Wind Energy Development Potential in the Great Lakes. Feasibility Study for New York State Energy Research and Development Authority.
Blanco, M. 2009. The Economics of Wind Energy. Renewable and Sustainable Energy Reviews, 13, 1372-82. http://dx.doi.org/10.1016/j.rser.2008.09.004
Boccard, N. 2010. Economic Properties of Wind Power: A European Assessment. Energy Policy, 38, 3232-3244. http://dx.doi.org/10.1016/j.enpol.2009.07.033
Carbon Trust. 2008. Offshore Wind Power: Big Challenge, Big Opportunity. Report on behalf of the Government of the United Kingdom.
Celik, A. N. 2003. A statistical analysis of wind power density based on the Weibull and Rayleigh models at the southern of Turkey. Renewable Energy 29:509-604. http://dx.doi.org/10.1016/j.renene.2003.07.002
Elliott, D. L., C. G. Holladay, W. R. Barchet, H. P. Foote, and W. F. Sandusky. 1986. Wind energy resource atlas of the United States. DOE/CH 10093-4. Solar Technical Information Program, Richland, Washington.
Global Wind Energy Council (GWEC). 2013. Global Wind Statistics, 2012. Accessed at: http://www.gwec.net/wp-content/uploads/2013/02/GWEC-PRstats-2012_english.pdf
Griffin, R., Buck, B., and Krause, G. 2015a. Private incentives for the emergence of co-production of offshore wind energy and mussel aquaculture. Aquaculture, 346, 80-89. http://dx.doi.org/10.1016/j.aquaculture.2014.10.035
Griffin, R., Chaumont, N., Denu, D., Guerry, A., Kim, C., and Ruckelshaus, M. 2015b. Incorporating the visibility of coastal energy infrastructure into multi-criteria siting decisions. Marine Policy, 62, 218-223. http://dx.doi.org/10.1016/j.marpol.2015.09.024
Hsu, S. A., E. A. Meindl, and D. B. Gilhousen. 1994. Determining the power-law wind-profile exponent under near-neutral stability conditions at sea. Journal of applied meteorology 33:757-765. http://dx.doi.org/10.1175/1520-0450(1994)033%3C0757:DTPLWP%3E2.0.CO;2
Jacquemin, J., Butterworth, D., Garret, C., Baldock, N., and A. Henderson. 2011. Inventory of location specific wind energy cost. WP2 Report D2.2. Spatial deployment of offshore wind energy in Europe (Wind-Speed). Garrad Hassan & Partners Ltd. Supported by Intelligent Energy Europe.
Jafarian, M., and A. M. Ranjbar. 2010. Fuzzy modeling techniques and artificial neural networks to estimate annual energy output of a wind turbine. Renewable Energy 35:2008-2014. http://dx.doi.org/10.1016/j.renene.2010.02.001
Kaiser, M. and B. Snyder. 2012. Offshore wind capital cost estimation in the U.S. Outer Continental Shelf: A reference class approach. Marine Policy, 36, 1112-1122. http://dx.doi.org/10.1016/j.marpol.2012.02.001
Levitt, A., Kempton, W., Smith, A., Musial, W., and J. Firestone. 2011. Pricing offshore wind energy. Energy Policy, 39, 6408-6421. http://dx.doi.org/10.1016/j.enpol.2011.07.044
Lorc Knowledge. 2012. List of offshore wind farms. https://www.lorc.dk/offshore-wind-farms-map/list Accessed at December 31, 2012.
Manwell, J. F., J. G. Mcgowan, and A. L. Rogers. 2009. Wind energy explained: Theory, design and application. John Wiley & Sons Ltd., West Sussex, United Kingdom.
National Oceanic and Atmospheric Administration. 1976. U. S. Standard Atmosphere. NOAA- S/T76-1562, Washington, DC.
Pallabazzer, R. 2003. Provisional estimation of the energy output of wind generators. Renewable Energy, 29, 413-420. http://dx.doi.org/10.1016/S0960-1481(03)00197-6
Samoteskul, K., Firestone, J., Corbett, J., and J. Callahan. 2014. Changing vessel routes could significantly reduce the cost of future offshore wind projects. Journal of Environmental Management, 141, 146-154. http://dx.doi.org/10.1016/j.jenvman.2014.03.026
UMaine. 2011. Maine deepwater offshore wind report. https://composites.umaine.edu/research/offshore-wind-report/ | CommonCrawl |
ISSN 1088-6834(online) ISSN 0894-0347(print)
Journals Home Search My Subscriptions Subscribe
Your device is paired with
for another days.
Previous issue | This issue | Most recent issue | All issues (1988–Present) | Next issue | Previous article | Articles in press | Recently published articles | Next article
Tight closure, invariant theory, and the Briançon-Skoda theorem
Authors: Melvin Hochster and Craig Huneke
Journal: J. Amer. Math. Soc. 3 (1990), 31-116
MSC: Primary 13C05; Secondary 13A15, 13A50, 13B99, 13D02
DOI: https://doi.org/10.1090/S0894-0347-1990-1017784-6
MathSciNet review: 1017784
Full-text PDF Free Access
References | Similar Articles | Additional Information
References [Enhancements On Off] (What's this?)
M. Artin, Algebraic approximation of structures over complete local rings, Inst. Hautes Études Sci. Publ. Math. 36 (1969), 23–58. MR 268188
Maurice Auslander and Mark Bridger, Stable module theory, Memoirs of the American Mathematical Society, No. 94, American Mathematical Society, Providence, R.I., 1969. MR 0269685
Jean-François Boutot, Singularités rationnelles et quotients par les groupes réductifs, Invent. Math. 88 (1987), no. 1, 65–68 (French). MR 877006, DOI https://doi.org/10.1007/BF01405091
David A. Buchsbaum and David Eisenbud, What makes a complex exact?, J. Algebra 25 (1973), 259–268. MR 314819, DOI https://doi.org/10.1016/0021-8693%2873%2990044-6
Armand Borel, Linear algebraic groups, W. A. Benjamin, Inc., New York-Amsterdam, 1969. Notes taken by Hyman Bass. MR 0251042
Henri Skoda and Joël Briançon, Sur la clôture intégrale d'un idéal de germes de fonctions holomorphes en un point de ${\bf C}^{n}$, C. R. Acad. Sci. Paris Sér. A 278 (1974), 949–951 (French). MR 340642
Winfried Bruns and Udo Vetter, Determinantal rings, Lecture Notes in Mathematics, vol. 1327, Springer-Verlag, Berlin, 1988. MR 953963
Corrado De Concini, David Eisenbud, and Claudio Procesi, Hodge algebras, Astérisque, vol. 91, Société Mathématique de France, Paris, 1982. With a French summary. MR 680936
Sankar P. Dutta, Frobenius and multiplicities, J. Algebra 85 (1983), no. 2, 424–448. MR 725094, DOI https://doi.org/10.1016/0021-8693%2883%2990106-0
Sankar P. Dutta, On the canonical element conjecture, Trans. Amer. Math. Soc. 299 (1987), no. 2, 803–811. MR 869233, DOI https://doi.org/10.1090/S0002-9947-1987-0869233-2
S. P. Dutta, Ext and Frobenius, J. Algebra 127 (1989), no. 1, 163–177. MR 1029410, DOI https://doi.org/10.1016/0021-8693%2889%2990281-0
John A. Eagon and M. Hochster, $R$-sequences and indeterminates, Quart. J. Math. Oxford Ser. (2) 25 (1974), 61–71. MR 337934, DOI https://doi.org/10.1093/qmath/25.1.61
David Eisenbud, Homological algebra on a complete intersection, with an application to group representations, Trans. Amer. Math. Soc. 260 (1980), no. 1, 35–64. MR 570778, DOI https://doi.org/10.1090/S0002-9947-1980-0570778-7
E. Graham Evans and Phillip Griffith, The syzygy problem, Ann. of Math. (2) 114 (1981), no. 2, 323–333. MR 632842, DOI https://doi.org/10.2307/1971296
---, Syzygies, London Math. Soc. Lecture Note Ser., no. 106, Cambridge Univ. Press, Cambridge, 1985.
E. Graham Evans Jr. and Phillip A. Griffith, Order ideals, Commutative algebra (Berkeley, CA, 1987) Math. Sci. Res. Inst. Publ., vol. 15, Springer, New York, 1989, pp. 213–225. MR 1015519, DOI https://doi.org/10.1007/978-1-4612-3660-3_10
Richard Fedder and Keiichi Watanabe, A characterization of $F$-regularity in terms of $F$-purity, Commutative algebra (Berkeley, CA, 1987) Math. Sci. Res. Inst. Publ., vol. 15, Springer, New York, 1989, pp. 227–245. MR 1015520, DOI https://doi.org/10.1007/978-1-4612-3660-3_11
William Fulton, Intersection theory, Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)], vol. 2, Springer-Verlag, Berlin, 1984. MR 732620
Hans Grauert and Oswald Riemenschneider, Verschwindungssätze für analytische Kohomologiegruppen auf komplexen Räumen, Invent. Math. 11 (1970), 263–292 (German). MR 302938, DOI https://doi.org/10.1007/BF01403182
Robin Hartshorne, Local cohomology, Lecture Notes in Mathematics, No. 41, Springer-Verlag, Berlin-New York, 1967. A seminar given by A. Grothendieck, Harvard University, Fall, 1961. MR 0224620
Jürgen Herzog, Ringe der Charakteristik $p$ und Frobeniusfunktoren, Math. Z. 140 (1974), 67–78 (German). MR 352081, DOI https://doi.org/10.1007/BF01218647
Melvin Hochster and Craig Huneke, Tightly closed ideals, Bull. Amer. Math. Soc. (N.S.) 18 (1988), no. 1, 45–48. MR 919658, DOI https://doi.org/10.1090/S0273-0979-1988-15592-9
Melvin Hochster and Craig Huneke, Tight closure, Commutative algebra (Berkeley, CA, 1987) Math. Sci. Res. Inst. Publ., vol. 15, Springer, New York, 1989, pp. 305–324. MR 1015524, DOI https://doi.org/10.1007/978-1-4612-3660-3_15
Melvin Hochster and Craig Huneke, Tight closure and strong $F$-regularity, Mém. Soc. Math. France (N.S.) 38 (1989), 119–133. Colloque en l'honneur de Pierre Samuel (Orsay, 1987). MR 1044348
---, Phantom homology, preprint, 1989. ---, Tight closure, $F$-regularity, test elements and smooth base change (in preparation). ---, Tight closures of parameter ideals and splitting in module-finite extensions (in preparation). ---, Tight closure in characteristic zero (in preparation). ---, Tight closure and elments of small order in integral extensions, preprint, 1989. ---, Infinite integral extensions and big Cohen-Macaulay algebras, preprint, 1989.
M. Hochster, Rings of invariants of tori, Cohen-Macaulay rings generated by monomials, and polytopes, Ann. of Math. (2) 96 (1972), 318–337. MR 304376, DOI https://doi.org/10.2307/1970791
M. Hochster, Contracted ideals from integral extensions of regular rings, Nagoya Math. J. 51 (1973), 25–43. MR 349656
Melvin Hochster, Topics in the homological theory of modules over commutative rings, Published for the Conference Board of the Mathematical Sciences by the American Mathematical Society, Providence, R.I., 1975. Expository lectures from the CBMS Regional Conference held at the University of Nebraska, Lincoln, Neb., June 24–28, 1974; Conference Board of the Mathematical Sciences Regional Conference Series in Mathematics, No. 24. MR 0371879
Melvin Hochster, Big Cohen-Macaulay modules and algebras and embeddability in rings of Witt vectors, Conference on Commutative Algebra–1975 (Queen's Univ., Kingston, Ont., 1975), Queen's Univ., Kingston, Ont., 1975, pp. 106–195. Queen's Papers on Pure and Applied Math., No. 42. MR 0396544
Melvin Hochster, Cyclic purity versus purity in excellent Noetherian rings, Trans. Amer. Math. Soc. 231 (1977), no. 2, 463–488. MR 463152, DOI https://doi.org/10.1090/S0002-9947-1977-0463152-5
Melvin Hochster, Some applications of the Frobenius in characteristic $0$, Bull. Amer. Math. Soc. 84 (1978), no. 5, 886–912. MR 485848, DOI https://doi.org/10.1090/S0002-9904-1978-14531-5
Melvin Hochster, Cohen-Macaulay rings and modules, Proceedings of the International Congress of Mathematicians (Helsinki, 1978), Acad. Sci. Fennica, Helsinki, 1980, pp. 291–298. MR 562618
---, Associated graded rings derived from integrally closed ideals, Proc. Conf. Commutative Algebra (Rennes, France, May 1981), pp. 1-27.
Melvin Hochster, Canonical elements in local cohomology modules and the direct summand conjecture, J. Algebra 84 (1983), no. 2, 503–553. MR 723406, DOI https://doi.org/10.1016/0021-8693%2883%2990092-3
M. Hochster and John A. Eagon, Cohen-Macaulay rings, invariant theory, and the generic perfection of determinantal loci, Amer. J. Math. 93 (1971), 1020–1058. MR 302643, DOI https://doi.org/10.2307/2373744
Melvin Hochster and Joel L. Roberts, Rings of invariants of reductive groups acting on regular rings are Cohen-Macaulay, Advances in Math. 13 (1974), 115–175. MR 347810, DOI https://doi.org/10.1016/0001-8708%2874%2990067-X
Melvin Hochster and Joel L. Roberts, The purity of the Frobenius and local cohomology, Advances in Math. 21 (1976), no. 2, 117–172. MR 417172, DOI https://doi.org/10.1016/0001-8708%2876%2990073-6
Craig Huneke, Hilbert functions and symbolic powers, Michigan Math. J. 34 (1987), no. 2, 293–318. MR 894879, DOI https://doi.org/10.1307/mmj/1029003560
Craig Huneke, An algebraist commuting in Berkeley, Math. Intelligencer 11 (1989), no. 1, 40–52. MR 979023, DOI https://doi.org/10.1007/BF03023775
Shiroh Itoh, Integral closures of ideals generated by regular sequences, J. Algebra 117 (1988), no. 2, 390–401. MR 957448, DOI https://doi.org/10.1016/0021-8693%2888%2990114-7
Shiroh Itoh, Integral closures of ideals of the principal class, Hiroshima Math. J. 17 (1987), no. 2, 373–375. MR 909622
Irving Kaplansky, Commutative rings, Revised edition, The University of Chicago Press, Chicago, Ill.-London, 1974. MR 0345945
George Kempf, The Hochster-Roberts theorem of invariant theory, Michigan Math. J. 26 (1979), no. 1, 19–32. MR 514958
Ernst Kunz, Characterizations of regular local rings of characteristic $p$, Amer. J. Math. 91 (1969), 772–784. MR 252389, DOI https://doi.org/10.2307/2373351
Ernst Kunz, On Noetherian rings of characteristic $p$, Amer. J. Math. 98 (1976), no. 4, 999–1013. MR 432625, DOI https://doi.org/10.2307/2374038
Joseph Lipman, Relative Lipschitz-saturation, Amer. J. Math. 97 (1975), no. 3, 791–813. MR 417169, DOI https://doi.org/10.2307/2373777
Joseph Lipman and Avinash Sathaye, Jacobian ideals and a theorem of Briançon-Skoda, Michigan Math. J. 28 (1981), no. 2, 199–222. MR 616270
Joseph Lipman and Bernard Teissier, Pseudorational local rings and a theorem of Briançon-Skoda about integral closures of ideals, Michigan Math. J. 28 (1981), no. 1, 97–116. MR 600418
Frank Ma, Splitting in integral extensions, Cohen-Macaulay modules and algebras, J. Algebra 116 (1988), no. 1, 176–195. MR 944154, DOI https://doi.org/10.1016/0021-8693%2888%2990200-1
Saunders Mac Lane, Homology, Die Grundlehren der mathematischen Wissenschaften, Bd. 114, Academic Press, Inc., Publishers, New York; Springer-Verlag, Berlin-Göttingen-Heidelberg, 1963. MR 0156879
Hideyuki Matsumura, Commutative algebra, W. A. Benjamin, Inc., New York, 1970. MR 0266911
V. B. Mehta and V. Srinivas, Normal $F$-pure surface singularities, J. Algebra 143 (1991), no. 1, 130–143. MR 1128650, DOI https://doi.org/10.1016/0021-8693%2891%2990255-7
P. Monsky, The Hilbert-Kunz function, Math. Ann. 263 (1983), no. 1, 43–49. MR 697329, DOI https://doi.org/10.1007/BF01457082
Masayoshi Nagata, Local rings, Interscience Tracts in Pure and Applied Mathematics, No. 13, Interscience Publishers a division of John Wiley & Sons New York-London, 1962. MR 0155856
D. G. Northcott and D. Rees, Reductions of ideals in local rings, Proc. Cambridge Philos. Soc. 50 (1954), 145–158. MR 59889, DOI https://doi.org/10.1017/s0305004100029194
D. G. Northcott and D. Rees, A note on reductions of ideals with an application to the generalized Hilbert function, Proc. Cambridge Philos. Soc. 50 (1954), 353–359. MR 62115, DOI https://doi.org/10.1017/s0305004100029455
C. Peskine and L. Szpiro, Dimension projective finie et cohomologie locale. Applications à la démonstration de conjectures de M. Auslander, H. Bass et A. Grothendieck, Inst. Hautes Études Sci. Publ. Math. 42 (1973), 47–119 (French). MR 374130
Christian Peskine and Lucien Szpiro, Syzygies et multiplicités, C. R. Acad. Sci. Paris Sér. A 278 (1974), 1421–1424 (French). MR 349659
Louis J. Ratliff Jr., Chain conjectures in ring theory, Lecture Notes in Mathematics, vol. 647, Springer, Berlin, 1978. An exposition of conjectures on catenary chains. MR 496884
D. Rees, A note on asymptotically unmixed ideals, Math. Proc. Cambridge Philos. Soc. 98 (1985), no. 1, 33–35. MR 789716, DOI https://doi.org/10.1017/S0305004100063210
D. Rees, Reduction of modules, Math. Proc. Cambridge Philos. Soc. 101 (1987), no. 3, 431–449. MR 878892, DOI https://doi.org/10.1017/S0305004100066810
Paul Roberts, Two applications of dualizing complexes over local rings, Ann. Sci. École Norm. Sup. (4) 9 (1976), no. 1, 103–106. MR 399075
Paul Roberts, Cohen-Macaulay complexes and an analytic proof of the new intersection conjecture, J. Algebra 66 (1980), no. 1, 220–225. MR 591254, DOI https://doi.org/10.1016/0021-8693%2880%2990121-0
Paul Roberts, Homological invariants of modules over commutative rings, Séminaire de Mathématiques Supérieures [Seminar on Higher Mathematics], vol. 72, Presses de l'Université de Montréal, Montreal, Que., 1980. MR 569936
Paul Roberts, The vanishing of intersection multiplicities of perfect complexes, Bull. Amer. Math. Soc. (N.S.) 13 (1985), no. 2, 127–130. MR 799793, DOI https://doi.org/10.1090/S0273-0979-1985-15394-7
Paul Roberts, Le théorème d'intersection, C. R. Acad. Sci. Paris Sér. I Math. 304 (1987), no. 7, 177–180 (French, with English summary). MR 880574
Paul Roberts, Intersection theorems, Commutative algebra (Berkeley, CA, 1987) Math. Sci. Res. Inst. Publ., vol. 15, Springer, New York, 1989, pp. 417–436. MR 1015532, DOI https://doi.org/10.1007/978-1-4612-3660-3_23
J.-P. Serre, Algèbre locale. Multiplicités, Lecture Notes in Math., no. 11, Springer-Verlag, Berlin, Heidelberg, and New York, 1965.
Gerhard Seibert, Complexes with homology of finite length and Frobenius functors, J. Algebra 125 (1989), no. 2, 278–287. MR 1018945, DOI https://doi.org/10.1016/0021-8693%2889%2990164-6
Henri Skoda, Application des techniques $L^{2}$ à la théorie des idéaux d'une algèbre de fonctions holomorphes avec poids, Ann. Sci. École Norm. Sup. (4) 5 (1972), 545–579 (French). MR 333246
V. Srinivas, Normal surface singularities of $F$-pure type, J. Algebra 142 (1991), no. 2, 348–359. MR 1127067, DOI https://doi.org/10.1016/0021-8693%2891%2990311-U
L. Szpiro, Sur la théorie des complexes parfaits, Commutative algebra: Durham 1981 (Durham, 1981) London Math. Soc. Lecture Note Ser., vol. 72, Cambridge Univ. Press, Cambridge-New York, 1982, pp. 83–90. MR 693628
D. Taylor, Thesis, Ideals generated by monomials in an $R$-sequence, Univ. of Chicago, 1966.
C. T. C. Wall, Lectures on $C^{\infty }$-stability and classification, Proceedings of Liverpool Singularities–Symposium, I (1969/70), Lecture Notes in Mathematics, Vol. 192, Springer, Berlin, 1971, pp. 178–206. MR 0285020
Keiichi Watanabe, Study of $F$-purity in dimension two, Algebraic geometry and commutative algebra, Vol. II, Kinokuniya, Tokyo, 1988, pp. 791–800. MR 977783
O. Zariski and P. Samuel, Commutative algebra, Vols. I and II, Van Nostrand, Princeton, 1958 and 1960.
M. Artin, Algebraic approximation of structures over complete local rings, Inst. Hautes Étude Sci. Publ. Math. 36 (1969), 23-56. M. Auslander and M. Bridger, Stable module theory, Mem. Amer. Math. Soc., no. 94, Amer. Math. Soc., Providence, RI, 1969. J.-F. Boutot, Singularités rationelles et quotients par les groupes réductifs, Invent. Math. 88 (1987), 65-68. D. Buchsbaum and D. Eisenbud, What makes a complex exact, J. Algebra 25 (1973), 259-268. A. Borel, Linear algebraic groups, Benjamin, New York, 1969. J. Briançon and H. Skoda, Sur la clôture intégrale d'un idéal de germes de fonctions holomorphes en un point de ${C^n}$, C. R. Acad. Sci. Paris Sér. A 278 (1974), 949-951. W. Bruns and U. Vetter, Determinantal rings, Lecture Notes in Math., vol. 1327, Springer-Verlag, Berlin, 1988. C. De Concini, D. Eisenbud, and C. Procesi, Hodge algebras, Astérisque 91 (1982), 1-87. S. P. Dutta, Frobenius and multiplicities, J. Algebra 85 (1983), 424-448. ---, On the canonical element conjecture, Trans. Amer. Math. Soc. 299 (1987), 803-811. ---, Ext and Frobenius, J. Algebra (to appear). J. A. Eagon and M. Hochster, $R$-sequences and indeterminates, Quart. J. Math. Oxford Ser. (2) 25 (1974), 61-71. D. Eisenbud, Homological algebra on a complete intersection with an application to group representations, Trans. Amer. Math. Soc. 260 (1980), 35-64. E. G. Evans and P. Griffith, The syzygy problem, Ann. of Math. (2) 114 (1981), 323-333. ---, Syzygies, London Math. Soc. Lecture Note Ser., no. 106, Cambridge Univ. Press, Cambridge, 1985. ---, Order ideals, Commutative Algebra, Proc. Microprogram, June 15-July 12, 1987, Math. Sci. Res. Inst. Publ., no. 15, Springer-Verlag, New York, Berlin, Heidelberg, London, Paris, Tokyo, 1989, pp. 213-225. R. Fedder and K. Watanabe, A characterization of $F$-regularity in terms of $F$-purity, Commutative Algebra, Proc. Microprogram, June 15-July 12, 1987, Math. Sci. Res. Inst. Publ., no. 15, Springer-Verlag, New York, Berlin, Heidelberg, London, Paris, Tokyo, 1989, pp. 227-245. W. Fulton, Intersection theory, Springer-Verlag, Berlin, 1984. H. Grauert and O. Riemenschneider, Verschwindungsätze für analytische kohomologiegruppen auf komplexen Räumen, Invent. Math. 11 (1970), 263-290. A. Grothendieck (notes by R. Hartshorne), Local cohomology, Lecture Notes in Math., vol. 41, Springer-Verlag, Heidelberg, 1967. J. Herzog, Ringe der Charakteristik $p$ und Frobenius-funktoren, Math. Z. 140 (1974), 67-78. M. Hochster and C. Huneke, Tightly closed ideals, Bull. Amer. Math. Soc. 18 (1988), 45-48. ---, Tight closure, Commutative Algebra, Proc. Microprogram, June 15-July 12, 1987, Math. Sci. Res. Inst. Publ., no. 15, Springer-Verlag, New York, Berlin, Heidelberg, London, Paris, Tokyo, 1989, pp. 305-324. ---, Tight closure and strong $F$-regularity, Mem. Soc. Math. France (N.S.), numéro consacré au colloque en l'honneur de P. Samuel (to appear). ---, Phantom homology, preprint, 1989. ---, Tight closure, $F$-regularity, test elements and smooth base change (in preparation). ---, Tight closures of parameter ideals and splitting in module-finite extensions (in preparation). ---, Tight closure in characteristic zero (in preparation). ---, Tight closure and elments of small order in integral extensions, preprint, 1989. ---, Infinite integral extensions and big Cohen-Macaulay algebras, preprint, 1989. M. Hochster, Rings of invariants of tori, Cohen-Macaulay rings generated by monomials, and polytopes, Ann. of Math. (2) 96 (1972), 318-337. 
---, Contracted ideals from integral extensions of regular rings, Nagoya Math. J. 51 (1973), 25-43. ---, Topics in the homological theory of modules over commutative rings, C.B.M.S. Regional Conf. Ser. in Math., no. 24, Amer. Math. Soc., Providence, RI, 1975. ---, Big Cohen-Macaulay modules and algebras and embeddability in rings of Witt vectors, Proc. Queen's University Commutative Algebra Conference, Queen's Papers in Pure and Appl. Math. 42 (1975), 106-195. ---, Cyclic purity versus purity in excellent Noetherian rings, Trans. Amer. Math. Soc. 231 (1977), 463-488. ---, Some applications of the Frobenius in characteristic 0, Bull. Amer. Math. Soc. 84 (1978), 886-912. ---, Cohen-Macaulay rings and modules, Proc. Internat. Congr. Math., Helsinki, Finland, Vol. I, Academia Scientarium Fennica, 1980, pp. 291-298. ---, Associated graded rings derived from integrally closed ideals, Proc. Conf. Commutative Algebra (Rennes, France, May 1981), pp. 1-27. ---, Canonical elements in local cohomology modules and the direct summand conjecture, J. Algebra 84 (1983), 503-553. M. Hochster and J. A. Eagon, Cohen-Macaulay rings, invariant theory, and the generic perfection of determinantal loci, Amer. J. Math. 93 (1971), 1020-1058. M. Hochster and J. L. Roberts, Rings of invariants of reductive groups acting on regular rings are Cohen-Macaulay, Adv. in Math. 13 (1974), 115-175. ---, The purity of the Frobenius and local cohomology, Adv. in Math. 21 (1976), 117-172. C. Huneke, Hilbert functions and symbolic powers, Michigan Math. J. 34 (1987), 293-318. ---, An algebraist commuting in Berkeley, Math. Intelligencer 11 (1989), 40-52. S. Itoh, Integral closures of ideals generated by regular sequences, J. Algebra 117 (1988), 390-401. ---, Integral closures of ideals of the principal class, Hiroshima Math. J. 17 (1987), 373-375. I. Kaplansky, Commutative algebra, Allyn and Bacon, 1970; revised edition, Univ. Chicago Press, Chicago, IL, 1974. G. Kempf, The Hochster-Roberts theorem of invariant theory, Michigan Math. J. 26 (1979), 19-32. E. Kunz, Characterizations of regular local rings of characteristic $p$, Amer. J. Math. 91 (1969), 772-784. ---, On Noetherian rings of characteristic $p$, Amer. J. Math. 98 (1976), 999-1013. J. Lipman, Relative Lipschitz saturation, Amer. J. Math. 97 (1975), 791-813. J. Lipman and A. Sathaye, Jacobian ideals and a theorem of Briançon-Skoda, Michigan Math. J. 28 (1981), 199-222. J. Lipman and B. Teissier, Pseudo-rational local rings and a theorem of Briançon-Skoda about integral closures of ideals, Michigan Math. J. 28 (1981), 97-116. F. Ma, Splitting in integral extensions, Cohen-Macaulay modules and algebras, J. Algebra 116 (1988), 176-195. S. Mac Lane, Homology, Springer-Verlag, Berlin, Gottingen, and Heidelberg, 1963. H. Matsumura, Commutative algebra, Benjamin, New York, 1970. V. B. Mehta and V. Srinivas, Normal $F$-pure surface singularities, Tata Institute, Bombay, preprint. P. Monsky, The Hilbert-Kunz function, Math. Ann. 263 (1983), 43-49. M. Nagata, Local rings, Interscience, New York, 1972. D. G. Northcott and D. Rees, Reductions of ideals in local rings, Proc. Cambridge Philos. Soc. 50 (1954), 145-158. ---, A note on reductions of ideals with an application to the generalized Hilbert function, Proc. Cambridge Philos. Soc. 50 (1954), 353-359. C. Peskine and L. Szpiro, Dimension projective finie et cohomologie locale, Inst. Hautes Études Sci. Publ. Math. 42 (1973), 323-395. ---, Syzygies et multiplicités, C. R. Acad. Sci. Paris Sér. A 278 (1974), 1421-1424. L. J. 
Ratliff, Chain conjectures in ring theory, Springer-Verlag, Berlin and New York, 1978. D. Rees, A note on asymptotically equidimensional ideals, Math. Proc. Cambridge Philos. Soc. 98 (1985), 33-35. ---, Reductions of modules, Math. Proc. Cambridge Philos. Soc. 101 (1987), 431-449. P. Roberts, Two applications of dualizing complexes over local rings, Ann. Sci. École Norm. Sup. (4) 9 (1976), 103-106. ---, Cohen-Macaulay complexes and an analytic proof of the new intersection conjecture, J. Algebra 66 (1980), 225-230. ---, Homological invariants of modules over commutative rings, Sém. Math. Sup., Presses Univ. Montréal, 1980. ---, The vanishing of intersection multiplicities of perfect complexes, Bull. Amer. Math. Soc. 13 (1985), 127-130. ---, Le théorème d'intersection, C. R. Acad. Sci. Paris Sér I Math. 304 (1987), 177-180. ---, Intersection theorems, Commutative Algebra, Proc. Microprogram, June 15-July 12, 1987, Math. Sci. Res. Inst. Publ., no. 15, Springer-Verlag, New York, Berlin, Heidelberg, London, Paris, Tokyo, 1989, pp. 417-436. J.-P. Serre, Algèbre locale. Multiplicités, Lecture Notes in Math., no. 11, Springer-Verlag, Berlin, Heidelberg, and New York, 1965. G. Seibert, Complexes with homology of finite length and Frobenius functors, preprint. H. Skoda, Applications des techniques ${L^2}$ a la théorie des idéaux d'une algèbre de fonctions holomorphes avec poids, Ann. Sci. Ecole Norm. Sup (4) 5 (1972), 545-579. V. Srinivas, Normal surface singularities of $F$-pure type, Tata Institute, Bombay, preprint. L. Szpiro, Sur la théorie des complexes parfaits, Commutative Algebra, Durham, 1981, London Math. Soc. Lecture Notes Ser., no 72, Cambridge Univ. Press, 1982, pp. 83-90. D. Taylor, Thesis, Ideals generated by monomials in an $R$-sequence, Univ. of Chicago, 1966. C. T. C. Wall, Lectures on ${C^\infty }$ stability and classification, Proc. Liverpool Singularities-Symposium I, Lecture Notes in Math., no. 192, Springer-Verlag, Heidelberg, 1971. K. Watanabe, Study of $F$-purity in dimension two, Algebraic Geometry and Commutative Algebra in honor of Masayoshi Nagata, North-Holland, Amsterdam, New York, Oxford, 1987, pp. 791-800. O. Zariski and P. Samuel, Commutative algebra, Vols. I and II, Van Nostrand, Princeton, 1958 and 1960.
Retrieve articles in Journal of the American Mathematical Society with MSC: 13C05, 13A15, 13A50, 13B99, 13D02
Retrieve articles in all journals with MSC: 13C05, 13A15, 13A50, 13B99, 13D02
Article copyright: © Copyright 1990 American Mathematical Society | CommonCrawl |
The Azimuth Project
Blog - the color of night (Rev #16)
Skip the Navigation Links | Home Page | All Pages | Recently Revised | Authors | Feeds |
Or: how big is the "greenhouse effect" really?
This page is a blog article in progress, written by Tim van Beek.
When we talked about putting the Earth in a box, we saw that there is a gap of about 33 kelvin between the temperature of a black body in Earth's orbit with an albedo of 0.3, and the estimated average surface temperature on Earth. An effect that explains this gap would need to
1) have a steady and continuous influence over thousands of years,
2) have a global impact,
3) be rather strong, because heating the planet Earth by 33 kelvin on the average needs a lot of energy.
Last time, in a quantum of warmth, we refined our zero dimensional energy balance model that treats the Earth as an ideal black body, and separated the system into a black body surface and a box containing the atmosphere.
With the help of quantum mechanics we saw that:
Earth emits mainly far infrared radiation, while the radiation from the sun is mostly in the near infrared, visible and ultraviolett range.
I claimed that only very special components of the atmosphere react to infrared radiation. Not the main components O 2O_2 and N 2N_2, but minor components with more than two atoms in a molecule, like H 2OH_2 O, O 3O_3 and CO 2CO_2. These gases absorb and re-emit a part of Earth's emission back to the surface. Today I would like to expain the reasons for this in a little bit more detail than last time.
The downward longwave radiation (DLR) emitted by infrared active gases leads to an increased incoming energy flux from the viewpoint of the surface.
This is an effect that certainly matches the points 1 and 2 above: It is both continuous and global. But how strong is it? What do we need to know in order to calculate it? And is it measurable?
Survival in a combat zone
There has been a lively - sometimes hostile - debate about the "greenhouse effect" which is the popular name for the increase of incoming energy flux caused by infrared active atmospheric components, so maybe you think that the heading above refers to that.
But I have a different point in mind: Maybe you heard about guiding systems for missiles that chase "heat"? Do not worry if you have not. Knowlegeable people working for the armed forces of the USA know about this, and know that an important aspect of the design of aircrafts is to reduce infrared emission. Let's see what they wrote about this back in 1982:
The engine hot metal and airframe surface emissions exhibit spectral IR continuum characteristics which are dependent on the temperature and emissivity-area of the radiating surface. These IR sources radiate in a relatively broad wavelength interval with a spectral shape in accordance with Planck's Law (i.e., with a blackbody spectral shape). The surface- reflected IR radiation will also appear as a continuum based on the equivalent blackbody temperature of the incident radiation (e.g., the sun has a spectral shape characteristic of a 5527°C blackbody). Both the direct (specular) as well as the diffuse (Lambertian) reflected IR radiation components, which are a function of the surface texture and the relative orientation of the surface to the source, must be included. The remaining IR source, engine plume emission, is a composite primarily of $C0_2$ and $H_20$ molecular emission spectra. The spectral strength and linewidth of these emissions are dependent on the temperature and concentration of the hot gaseous species in the plume which are a function of the aircraft altitude, flight speed, and power setting.
This is an excerpt from page 15 of
Military Handbook Survivability Enhancement, Aircraft Enhancement, Aircraft Conventional Weapon Threats, Design and Evaluation Guidlines, MIL-HDBK-268(AS), 5 August 1982
You may notice that the authors point out the difference of a continuous black body radiation and the molecular emission spectra of CO 2CO_2 and H 2OH_2 O. The reason for this, as mentioned last time in a quantum of warmth, is that according to quantum mechanics molecules can emit and absorb radiation at specific energies, i.e. wavelengths, only. For this reason it is possible to distinguish far infrared radiation that is emitted by the surface of the Earth (more or less continuous spectrum) from the radiation that is emitted by the atmosphere (more or less discrete spectrum).
Last time I told you that only certain molecules like CO 2CO_2 and H 2OH_2O are infrared active. The authors seem to agree with me. But why is that? Since last time we had some discussions about whether there is a simple explanation for this, I would like to try to provide one. When we try to understand the interaction of atoms and molecules with light, the most important concept that we need to understand it that of an electric dipole moment
Why is the dipole moment important?
Let us switch for a moment to classical electrostatic theory. If you place a negative electric point charge at the origin of our coordinate system and a positive point charge at the point x⇀\vec{x}, I can tell you the electric dipole moment is a vector p⇀\vec{p} and that:
p⇀=x⇀ \vec{p} = \vec{x}
For a more general situation, let us assume that there is a charge density ρ\rho contained in some sphere SS around the origin. Then I can tell you that the electric dipole moment p⇀\vec{p} is again a vector that can be calculated via
p⇀=∫x⇀ρ(x⇀)dx⇀ \vec{p} = \int \vec{x} \rho(\vec{x}) d \vec{x}
So that is the definition of how to calculate it, but what is its significance? Imagine that we would like to know how a test charge flying by the sphere SS is influenced by the charge density ρ\rho in SS. If we assume that ρ\rho is constant in time, then all we need to calculate is the electric potential Φ\Phi. In spherical coordinates and far from the sphere SS, this potential will fall of like 1/r1/r or faster, so we may assume that there is a series expansion of the form
Φ(r,ϕ,θ)=∑ n=1 ∞f(ϕ,θ)1r n \Phi(r, \phi, \theta) = \sum_{n = 1}^{\infty} f(\phi, \theta) \frac{1}{r^n}
When our test charge is far away from the sphere SS, only the first few terms in this expansion will be important to it.
In order to completely fix the series expansion, we need to choose an orthonormal basis for the coordinates ϕ\phi and θ\theta, that is an orthonormal basis of functions on the sphere. If we choose the spherical harmonics Y lm(ϕ,θ)Y_{l m}(\phi, \theta), with proper normalization we get what is called the multipole expansion of the electric potential:
Φ(r,ϕ,θ)=14πϵ 0∑ l=0 ∞∑ m=−l l4π2l+1q lmY lm(ϕ,θ)r l+1 \Phi(r, \phi, \theta) = \frac{1}{4 \pi \epsilon_0} \sum_{l = 0}^{\infty} \sum_{m = -l}^{l} \frac{4 \pi}{2 l +1} q_{l m} \frac{Y_{l m}(\phi, \theta)}{r^{l+1}}
ϵ 0\epsilon_0 is the electric constant.
The l=0l = 0 term is called the monopole term. It is proportional to the electric charge qq contained in the sphere SS. So the first term in the expansion tells us if a charge flying by SS will feel an overall net attractive or repulsive force, due to the presence of a net electric charge inside SS.
The terms for l=1l = 1 form the vector p⇀\vec{p}, the dipole moment. The next terms in the series Q ijQ_ij form the quadrupol tensor. So, for the expansion of the potential we get
Φ(r,ϕ,θ)=14πϵ 0(qr+p⇀⋅x⇀r 3+12∑Q ijx ix jr 5+⋅⋅⋅) \Phi(r, \phi, \theta) = \frac{1}{4 \pi \epsilon_0} (\frac{q}{r} + \frac{\vec{p} \cdot \vec{x}}{r^3} + \frac{1}{2} \sum Q_{ij} \frac{x_i x_j}{r^5} + \cdot \cdot \cdot)
For atoms and molecules the net charge qq is zero, so the next relevant term in the series expansion of their electric potential is the dipole moment. This is the reason why it is important to know if an atom or molecule has states with a nonzero dipole moment: Because this fact will in a certain sense dominate the interactions with other electromagnetic phenomena like light.
If you are interested in more information about multipole expansions in classical electrodynamics, you can find all sort of information in this classical textbook:
John David Jackson: Classical Electrodynamics (Wiley; 3 edition (August 10, 1998))
In quantum mechanics the position coordinate x⇀\vec{x} is promoted to the position operator; as a consequence the dipole moment is promoted to an operator, too.
Molecular Emission Spectra or: Only Greenhouse Gases are infrared active? Really?
A rough estimate of energy levels for molecules shows that
electron energy levels correspond to ultraviolett and visible light,
vibration corresponds to infrared light and
rotation corresponds to microwaves.
Tim van Beek: Heuristic explanation for the different energy levels?
For atoms and molecules interacting with light, there are certain selection rules. A strict selection rule in quantum mechanics rules out certain state transitions that would violate a conservation law. But for atoms and molecules there are also heuristic selection rules that rule out state transitions that are far less likely than others. For state transitions induced by the interaction with light, a heuristic transition rule is
Transitions need to change the dipole moment by one.
This selection rule is heuristic: For one, it is valid when the radiation wavelength is bigger than the molecule, which is already true for visible light.
Secondary, transitions that change the dipole moment are far more likely than transitions that change the electric quadrupole moment only, for example. But: If an atom or molecule does not have any dipole transitions, then you will maybe still see spectral lines corresponding to quadrupol transitions. But they will be very weak.
Molecules that are infrared active need to have vibrational modes that have a nonzero dipole moment. But if you look close enough you will find that molecules that are not "greenhouse gases" can indeed emit infrared radiation, but the amount of radiation is insignificant compared to that of the greenhouse gases.
If you take a look at molecules consisting of two atoms of the same species like O 2O_2 and N 2N_2, you will find that such molecules can never have any vibrational states with a dipole moment at all, which means that more than 99% of all molecules in the atmosphere have an insignificant contribution to infrared radiation.
Tim van Beek: Compare black body radiation to the emission spectrum of CO2 and H2O.
If you would like to learn more about molecules, have a look at:
Peter W. Atkins, Ronald S. Friedman (Author): "Molecular Quantum Mechanics", Oxford University Press, USA; 5 edition (December 30, 2010)
If you speak German and are interested in a very thorough and up to date treatment you could try:
Ingolf V. Hertel, C.P. Schulz: "Atome, Moleküle und optische Physik 1: Atomphysik und Grundlagen der Spektroskopie", Springer, Berlin, 1st edition (December 2007)
Ingolf V. Hertel, C.P. Schulz: "Atome, Moleküle und optische Physik 2: Moleküle und Photonen - Spektroskopie und Streuphysik", Springer, Berlin, 1st Edition (May 2011)
So, if we try to calculate the DLR effect for the atmosphere of the Earth, it is sufficient to focus on molecules with vibrational modes with a nonzero dipole moment.
Pressure, Temperature and all that
The next important point is mentioned by the air force authors in this statement:
The spectral strength and linewidth of these emissions are dependent on the temperature and concentration of the hot gaseous species...
Of course the temperature, pressure and concentration of atmospheric components are not constant throughout the whole atmosphere, so this is a point that is also important for us when we investigate the radiation properties of the atmosphere.
But why is this important? Temperature and pressure change molecular emission spectra by line broadening mechanisms. The concentration of gases is important because saturation effects lead to a non linear dependence of absorption and emission on the concentration.
The most important mechanisms of line broadening are:
Tim van Beek: List of line broadening mechanisms.
We will need to calculate thermodynamic properties of the atmosphere, at least approximately, to determine the molecular emission spectra.
Tim van Beek: Adiabatic lapse rate and why this does not explain the 33 kelvin gap.
What about the dependency of emission and absorption on the concentration of infrared active gases?
Tim van Beek: Explanation of the log dependency.
Tim van Beek: Equations of radiation transfer of the atmosphere (at least a simple approximate version of it). Why it is too complicated to solve by hand.
Calculating the DLR
Calculating the DLR on a sheet of paper, even with the use of a pocket calculator, would quickly turn out to be quite a task.
We could start by making some assumptions about the different layers of the atmosphere. We would also need to look up molecular spectra. Thankfully, for this task there is some help: The HITRAN database. HITRAN was founded by the US air force. Why? I don't know, but I guess that they needed the data for air craft design, for example. Look out for the interview with Dr. Laurence Rothman for some background information about HITRAN; there is a link to it on the home page.
But anyway: We see that this task is complicated enough to justify the effort to write computer code to handle it. But we are lucky: Not only have others done this already for us. In fact you can find a survey of some of the existing software programs on Wikipedia:
page atmospheric radiative transfer codes, Wikipedia
A kind soul has provided a web interface to one of the most prominent software programs, MODTRAN, for us to play around with:
MODTRAN: Webinterface
Now that we have some confidence in the theory behind DLR, we are ready to look into measurements.
Measuring DLR
To measure DLR and check that it is really the energy flux coming from infrared active components of the atmosphere and not some strange artifact, we have to
point some measurement device to the sky, to measure what goes down, not what goes up and
check that the spectrum we measure consists of the characteristic molecular spectra of CO 2CO_2, H 20H_20 etc.
The kind of measurement device we could use for this is called pyrgeometer, for pyr = fire and geo = earth.
For starters we should look for conditions where there is minimum radiation from other sources, no clouds and only a small amount of water vapor. What would be a good place and time on Earth to go to? A dedicated team of scientists decided to weather the grim conditions of the antarctic during polar night for this purpose:
Michael S. Town, P. Walden, Stephen G. Warren: Spectral and Broadband Longwave Downwelling Radiative Fluxes, Cloud Radiative Forcing, and Fractional Cloud Cover over the South Pole online here.
Tim van Beek:
Dan Lubin, David Cutchin, William Conant, Hartmut Grassl, Ulrich Schmid, Werner Biselli: Spectral Longwave Emission in the Tropics: FTIR Measurement at the Sea Surface and Comparison with Fast Radiation Codes, online here.
"Measurements of the radiative surface forcing of climate", online here.
Tim van Beek: I would like to add radiation measurements, maybe some can be found here:
Atmospheric Radiation Measurement (ARM) Climate Research Facility
Baseline Surface Radiation Network (BSRN) here.
Just to have a number, the flux of DLR (downwards longwave radiation) is about 300 Wm −2W m^{-2}.
category: blog, climate
Revision from November 7, 2011 14:30:00 by Andrew Stacey
Forward in time (3 more) | Back in time (15 more) | See current | See changes | History | Rollback | View: Source | Linked from: Blog articles in progress | CommonCrawl |
Molar specific heat of ideal gas
molar specific heat of ideal gas the heat capacity of the gas is the amount of heat required to rais the temperature of the gas by 1°C (1 k). Input the composition (on a mass, molar, or volume basis), specific heats (in proper units), and temperature from the keyboard. It is denoted by C V. Accordingly, a distinction is made between the specific heat capacity of the isochoric process \(c_v\) and the specific heat capacity of the isobaric process \(c_p\). Specific Heat at Const. , 'CH4+C3H8' Let C P and C V denote the molar specific heat of an ideal gas at constant pressure and at constant volume, respectively. The molar specific heat capacity is given by $\\large C_v = \\frac{R}{\\gamma -1}$ ; Where R is universal gas constant and γ is same for all monoatomic gases and … Continue reading "The molar specific heat capacity of all monoatomic gases is same . 13), so we do not have the kind of simple result we have for monatomic ideal gases. 314 J/mol. 0 ˚C to p =1. The ratio of specific heat at constant pressure to that at constant volume is Physics for Scientists and Engineers, Volume 1, Chapters 1-22 (8th Edition) Edit edition. At constant volume, no work is done and all heat that goes into a system increases its internal energy. Principal Specific Heat of Gas at Constant Volume: After reading this recent question I was interested in how to calculate the specific heat capacity of a mixture based on the specific heat capacities of its components. But, Therefore, With the volume held constant, the gas cannot expand and thus cannot do any work. This equation applies to all polyatomic gases, if the degrees of freedom are known. It predicts that the molar specific heat of an ideal gas at constant pressure is greater than the molar specific heat at constant vol-ume by an amount R, the universal gas constant (which has the value 8. 314 kJ/kmol-°C (1. The molar specific heat of the mixture at constant volume is ____ . , 'CH4+C3H8' The specific heat of a mixture is the sum of the products of mole fraction times the specific heat of that gas component. Two specific heats are defined for gases, one for constant volume (c v) and one for constant pressure (c p). 468 20. 6J For example, one mole of oxygen with an atomic mass of 16 corresponds to 16 grams. The heat capacity ratio (gamma, γ) for an ideal gas can be related to the degrees of freedom ( f ) of gas molecules by the formula: or . The experimental data shown in these pages are freely available and have been published already in the DDB Explorer Edition. It has the dimension of the energy per unit mass per unit absolute temperature. U = 3/2nRT. 8 is simply obtained by For monatomic gases γ =1. <br> (b) An amount Q of heat is added to a mono atomic ideal gas in a process in which the gas performs a work on its surrounding. Note that the specific heats are constant for monatomic gases and vary more strongly with temperature for triatomic gases than for diatomic gases. The intial pressure is 200kPa, and the initail volume is 0. The specific heat at constant pressure of an ideal gas can often be represented through the following form : Cp = a + bT + cT2 + dT3. We should expect a temperature rise. We can plug this into the Ideal Gas The molar heat capacity C, at constant pressure, is represented by C P. The gas constant (also known as the molar gas constant, universal gas constant, or ideal gas constant) is denoted by the symbol R or R. Show that molar specific heat capacity for such a process is given by . 00 K. 0 x 105 atm, V =1. 
Calculating Molar Mass using the Ideal Gas Equation. Therefore, (A) is ProMax reports both ideal gas and real gas specific heat ratios. The symbol c stands for specific heat and depends on the material and phase. An analysis is made of the different contributions to the heat capacity of As2Se3, Sb2Se3, Bi2Se3, GeSe, SnSe, and PbSe at elevated temperatures with the use of experimental values of the heat Heat required to raise the temperature of 1 mole of gas through 1K when pressure is constantThis video is about: Molar Specific Heat of a Gas. Consider 'n' moles of an ideal gas contained in a cylinder fitted with a frictionless piston. Accordingly, the molar heat capacity of an ideal gas is proportional to its number of degrees of freedom, d: This result is due to the Scottish physicist James Clerk Maxwell (1831−1871), whose name will appear several more times in this book. 5 R, while the -W term contributes another R to C p. 314 J K -1 mol -1 (for all ideal gases) and heat capacity ratio γ = Cp Cv = 1. There are two types of heat capacities : 1)Heat capacity at constant volume (C v) 2)Heat capacity at constant pressure(C p) Jan 25, 2020 · Molar Specific Heat of Gas at Constant Pressure: The quantity of heat required to raise the temperature of one mole of gas through 1K (or 1 °C) when pressure is kept constant is called molar specific heat at constant pressure. It follows, in this case, that Molar Specific Heat at Constant Volume . (a) By how much did the internal energy of functions for the molar specifi c enthalpy, internal energy, entr opy, specific heat at constant volume, and the specific heat at constant pres sure for twelve chemical species of the carbon-hydrogen-oxygen-nitrogen system. When pressure is constant when heat is applied to a unit mass it is free to expand, but since expansion causes cooling the heat required to raise the temp to one degree is larger. Inputs: F$ a string constant or string variable that contains the names of 1 or more (up to 20) names of ideal gases that are contained in the EES property library. Therefore, When a confined ideal gas undergoes temperature change ΔT, the resulting change in its internal energy is Generally, we write the heat capacity as a molar heat capacity (where n is the number of moles) and find that for constant pressure Q = C P nΔT and C P = (5/2)R, and for constant volume Q = C V nΔT and C V = (3/2)R. (A) the heat absorbed by the gas (B) the internal energy change of the gas (C) the enthalpy change of the gas (D) 5p times the volume change in the gas Because the internal energy of an ideal gas depends only on the temperature, Δu = Q – W, so Q = W. 00 kg of mass by 1. 314 J/K. 987 BTU/lbmol-°F). 28 0. The purpose of this study is to determine the value of the heat capacity ratio, γ = Cp/CV for giving gases such as argon, oxygen, nitrogen and nitrous oxide using adiabatic expansion. where, α = coefficient of thermal expansion. 5: 1. 6. Thus, C p = `5/2"R"` Specific heat capacity of Diatomic gas: The molecules of a monatomic gas have 5 degrees of freedom, 3 translational, and 2 For an ideal gas, CP = CV +R, whereby the values of CP and CV represent the molar heat capacities at constant pressure and volume. Determine the change in the specific entropy of the H 2, in kJ/kg, assuming the H 2 behaves as an ideal gas. 4 L for 1 mole of any ideal gas at a temperature equal to 273. C°, molar heat capacity (molar specific heat) at constant volume for ideal gas. 
Total energy of one mole of gas (Here, the total energy is purely kinetic) For one mole Specific heat at constant volume. what is the molar specific heat of the mixture at constant volume ? Sol: The value of γ of a mixture is given by $\\large \\frac{n_1 + n_2}{\\gamma -1} = \\frac{n_1}{\\gamma_1 -1} + \\frac{n_2}{\\gamma_2 -1} $ … Continue reading "One mole of a monoatomic ideal gas is mixed with one mole of a diatomic May 31, 2015 · which = 20. If the quantity of gas present is 2. (2) To do work against external pressure. Relationship Between Specific Heats and Heat Capacities . Are the data in Table 20. Here ΔU is the change in internal energy U of the system. where C is the molar specific heat of the gas is to be considered at constant volume,n is the no. 007 moles, determine the molar specfic heat capacity of the gas that the student would find at constant pressure. 138 4. cen98128_App-A_p865-892. Therefore its internal energy, U, follows the equation U = 3/2 RT. insignificance of mass) Therefore, the specific heat is depedent only by its temperature Specific Heat of Ideal Gases at 300 K In some cases you may hear someone talking about specific heat ratios (k). The ideal gas ratio of specific heats is used in the API 520 formulae for calculating pressure relief valve required area. Jun 12, 2005 · Data from "The Chemkin Thermodynamic Data Base" were used to generate MathCAD functions for the molar specific enthalpy, internal energy, entropy, specific heat at constant volume, and the specific heat at constant pressure for twelve chemical species of the carbon- hydrogen-oxygen-nitrogen system. 1. The mixture contains 30 mole% NO, 50 mole% CO, and 20 mole% O 2. Internal energy Using the ideal gas law the total molecular kinetic energy Jul 28, 2014 · A certain ideal gas has a molar specific heat of cv=(7/2)R. 005 kJ/(kg-K) c v = 0. Determine the average molar heat capacity of an ideal gas Nov 10, 2011 · Molar specific heat of an ideal gas Thread starter fiziks09; Start date Nov 10, 2011; Nov 10, 2011 #1 fiziks09. Is this true or false ? (Assume ideal nature ) Sol: True . Internal energy Using the ideal gas law the total molecular kinetic energy May 22, 2009 · mono atomic gas. One mole of monoatomic gas is mixed with 3 moles of diatomic gas . 314 J. The ratio of specific heat at constant pressure to that at constant volume is asked Feb 26, 2019 in Thermodynamics by Luckyraj ( 15 points) For a mole of an ideal gas at constant pressure, P dV = R dT, and therefore, for an ideal gas, CP = CV + R, 8. Material Properties - Material properties for gases, fluids and solids - densities, specific heats, viscosities and more ; Related Documents . Calculate the ΔU, in J/mole, of the mixture for the heating process. In equation form, the first law of thermodynamics is ΔU = Q − W. 18) where a, b, K, K 1 The Specific Heats of Gases It is useful to define two different versions of the specific heat of gases, one for constant-volume (isochoric) processes and one for constant-pressure (isobaric) processes. 134 J/mol K. Note that you can write the change in internal energy or enthalpy for an ideal gas is the integral over the appropriate specific heat dT between the reference temperature and and the desired temperature. unit is J K -1 mol -1. 
Isothermal and Adiabatic Expansion Up: Classical Thermodynamics Previous: Specific Heat Calculation of Specific Heats Now that we know the relationship between the molar specific heats at constant volume and constant pressure for an ideal gas, it would be interesting if we could calculate either one of these quantities from first principles. Jan 25, 2020 · Molar Specific Heat of Gas at Constant Volume: The quantity of heat required to raise the temperature of one mole of gas through 1K (or 1 °C) when the volume is kept constant is called molar specific heat at constant volume. The specific heats of gases are given as Cp and Cv at constant pressure and constant volume respectively while solids and liquids are having only single value for specific heat. The ideal gas law calculation internally converts all user inputs to SI units, performs the calculation, then converts calculated values to user-desired units. I noticed in one which the rounded head spark dissipation that is organizing circuits to storeretrieve information in maybe Brendan. The work obtained from reversible isothermal expansion of one mole of this gas from an initial molar volume v i to a final molar volume v f is ; May 21, 2015 · The specific heats of air at constant pressure and at constant volume are 1. The data represent a small sub list of all available data in the Dortmund Data Bank. org Table 3. Specific heat of ideal gases and the equipartition theorem Specific heats revisited The specific heat of a material will be different depending on whether the measurement is made at constant volume or constant pressure. Why ?" Specific heat of an ideal gas depends upon its a) Molecular weight b) Pressure c) Temperature d) Volume May 20, 2010 · A vertical cylinder with a heavy piston contains air at 300K. According to this website the specific heat capacity of an ideal mixture is given by Three moles of an ideal gas with a molar heat capacity at constant volume of 4. The starting point is form (a) of the combined first and second law, In summary, the molar heat capacity (mole-specific heat capacity) of an ideal gas with f degrees of freedom is given by. Monatomic Diatomic f 3 5 Cv3R/2 5R/2 Cp5R/2 7R/2 For a gas we can define a molar heat capacity C - the heat required to increase the temperature of 1 mole of the gas by 1 K. 9 cal/(mol∙K) and a molar heat capacity at constant pressure of 6. Molar specific heat Previously we have defined specific heat as the energy required per unit mass As matter is made up of atoms and molecules, it is instructive to also define specific heats in terms of the number of atoms or molecules We will define Ideal Gas Heat Capacity of Carbon dioxide. The molar specific heat at constant volume of an ideal gas is equal to 2. 0167 moles of gas contained in 2,199. To obtain a more realistic EOS, van der Waals introduced corrections that account for the finite volumes of the molecules and for the I was wondering if the ideal gas constant (R=8. 2-47 holds approximately for dia- and polyatomic gasses Heat capacity ratio of some important gases at 0. It is equivalent to the Boltzmann constant, but expressed in units of energy per temperature increment per mole, i. where C p is molar specific heat at constant pressure. Types of heat capacity or molar heat capacity . For each of the following presses, determine (a) the final pressure, (b) the final volume, (c) the final temperature, (d) the change in internal energy of the gas, (e) the energy added to the gas by heat, and (f) the work done on the gas. 
Homework Statement A sample of a diatomic ideal For an ideal gas, C p − C v = R , where C v and C p denote the molar heat capacities of an ideal gas at constant volume and constant pressure, respectively and R is the gas constant whos value is 8. 029 12. g. Then [2009] a)C p – C v is larger for a diatomic ideal gas than for a mono atomic ideal gas b)C p + C v is larger for a diatomic ideal gas than for a mono atomic ideal gas Van der Waals equation calculator uses Van der Waals equation=([R]*Temperature/(Molar Volume-Gas constant b))-(Gas constant a/Molar Volume^2) to calculate the Van der Waals equation, The Van der Waals equation is a thermodynamic equation of state based on the theory that fluids are composed of particles with non-zero volumes, and subject to a (not necessarily pairwise) inter-particle Recall: since n = mass/ molar mass and density = mass/ volume, the ideal gas law can be used re-written as P = (d/MM)*RT, where d is density and MM is molar mass. The table below gives the principal specific heat capacities for some well-known gases. a. Molar specific heat capacity of a gas is defined as the quantity of heat required to raise the temperature of 1 mole of the gas through 1K. wikipedia. Gas: Constant Volume Heat Capacity: cV(J/K) cV/R: Ar: 12. , all the thermal input to the gas goes into internal energy of the gas. 31447 kJ/kmol·K is the universal gas constant and Mis the molar mass. One such free, recognized and reliable resource can be found as below: The molar specific heat at constant volume C v is. Molar heat capacity is the amount of heat needed to raise the temperature of 1 mole of a substance by 1 Kelvin. \frac{7}{5} \\ B. C°, molar heat capacity (molar specific heat) at constant pressure for ideal gas. In this Physics video lecture in Hindi for class 11 molar specific heat capacity of an ideal gas at constant pressure and volume are explained. For an ideal gas, you can connect pressure and volume at any two points along an adiabatic curve this way: Dec 08, 2017 · A monoatomic ideal gas undergoes a process in which the ratio of P to V at any instant is constant and equals to 1, what is the molar heat capacity of the gas. The gas constant is calculated from R R U/M, where R U 8. And for all gases Etrans = 3(RT/2) (8c) For example, for the ammonia molecule, NH3, we have U = 6RT + 3RT/2 + 3RT/2 = 9RT (9) The molar heat capacity Cv would be 65. The specific heat of gas at constant volume in terms of degree of freedom 'f' is given as: C v = (f/2) R. 1 MPa pressure Specific heat (kJ kg-1 K-1) Molar heat capacity (Jmol-1 K-1) Gas Cv Cp C v C p Cp-Cv (Jmol-1 K-1) γ Monatomic He 3. 8 bar and 320K to 15. Thus, the work done by the gas is equal to the heat absorbed by the gas. Thermodynamics - Thermodynamics - Heat capacity and internal energy: The goal in defining heat capacity is to relate changes in the internal energy to measured changes in the variables that characterize the states of the system. Accordingly, the molar heat capacity of an ideal gas is proportional to its number of degrees of freedom, d: C V = d 2 R. 10. 5 times the universal gas constant (8. It can be derived that the molar specific heat at See full list on en. Specific heat and heat transfer Video transcript I told you that the two most important things you should know in thermodynamics that will get you most of your way through most exams is that the pressure times the volume is equal to a constant, and that the pressure times the volume divided by the temperatures is equal to a constant. 
Read : For all properties, the value of the specific property can be obtained from the value of the molar property by dividing by the molecular weight (molar mass)M of the gas. ) The Shomate Heat Capacity Equation If V = const. 4 : The Polytropic Process For an ideal gas, the specific molar heat capacity at constant pressure is always greater than the corresponding isochoric characteristic by R = 8. Specific Heat Two specific heats are defined for gases, one for constant volume (c v) and one for constant pressure (c p). (a) By how much did the internal energy of The molar specific heat at constant pressure of an ideal gas is [72]R. The names of the gases are separated with a + sign, e. Calculate the molar heat capacity at constant pressure C_p,m and the molar heat capacity at constant volume C_v,m for the gas. (a) Use the ideal gas law and initial conditions to calculate the number of moles of gas in the vessel. 667 γ = C p C v = 1. The most common example is the molar volume of a gas at STP (Standard Temperature and Pressure), which is equal to 22. 3 shows the molar heat capacities of some dilute ideal gases at room temperature. It is defined as the ratio of the ideal gas constant to the molar gas of the gas. viscosity, thermal conductivity, specific heat capacity and Prandtl number for a mixture of ideal gases. 0 atm. The ratio of C P to C V (C P /C V) for a gas is known as the specific heat ratio or adiabatic index and usually denoted by the Greek letter gamma The symbol for the Universal Gas Constant is Ru= 8. 718 kJ/(kg-K) k = 1. Determine the mole fractions and mass fractions of each component. (28 The molar volume of a gas expresses the volume occupied by 1 mole of that respective gas under certain temperature and pressure conditions. The constant, further, is the same for all gases, provided that the mass of gas being compared is one mole, or one molecular weight in grams. If the piston is fixed and the gas is heated, its volume remains constant and all the heat supplied goes to increase the internal energy of the molecules due to which the temperature of the gas increases. For an ideal gas, the molar capacity at constant pressure is given by, where d is the number of degrees of freedom of each molecule/entity in the system. ( i. kT = RT. 0x106 L, T = 0. 0 cm3 to 100 cm; while the pressure remained constant at 1. ⓘ Molar internal energy of an ideal gas [U] This physics video tutorial explains how to calculate the internal energy of an ideal gas - this includes monatomic gases and diatomic gases. Solution: Concepts: Specific heat, internal energy, energy conservation, the ideal gas law; Reasoning: We note that the internal energy of an ideal gas is proportional to its temperature. Problem 45AP from Chapter 21: A certain ideal gas has a molar specific heat of . (1) Here, P is the gas pressure, V is the molar volume, T is the temperature, and R is the gas constant. Molar gas volume is one mole of any gas at a specific temperature and pressure has a fixed volume. 00-mo The specific heat capacity of gases must also be differentiated between an isochoric and an isobaric heat supply. Air/Water Vapor Mixtures. Air - Molecular Weight and Composition - Dry air is a mixture of gases where the average molecular weight (or molar mass) can be calculated by adding the weight of each component heat supplied at constant pressure is consumed in two purposes: (1) To raise the temperature of gas. The degree of freedom of molecules and heat capacity. 0x103 L. C . We shall see in Chapter 10, Section 10. 
17) c v = b + KT + K 1 T2 + K 2 T 3(8. 00x10-3 mol, find the molar specific heat at (b) constant pressure and (c) constant volume. Table A–1E Molar mass, gas constant, and critical-point properties Table A–2E Ideal-gas specific heats of various common gases Table A–3E Properties of common liquids, solids, and foods Table A–4E Saturated water—Temperature table Table A–5E Saturated water—Pressure table Table A–6E Superheated water Table A–7E Compressed where P is the pressure in Pa, V is the gas volume in m 3, m is mass of gas in kg, T is gas temperature in K, R is known as the gas constant and is given in J/kgK, v is mass specific volume in m 3 /kg, υ is molar specific volume in m 3 /kmol, and ℛ is the universal gas constant of 8. The general relation between molar heat capacities for any fluid is given by the following equation. 1. Refer to the equation below. temp diff. K (0. qxd 1/8/10 3:29 PM Page 866 0 for monatomic gases (8a) Erot = 3(RT/2) for nonlinear molecules 2(RT/2) for linear molecules 0 for monatomic gases (8b) where N is the number of atoms in the molecule. Water has highest specific heat of capacity because of which it is used as a coolant in automobile radiators and in hot water bags. (e) Explain how specific heat data can be used to determine whether a triatomic molecule is linear or nonlinear. Internal Energy changed by 19. An ideal gas with specific heats independent of temperature, and , is referred to as a perfect gas. For an ideal gas, C p − C v = R , where C v and C p denote the molar heat capacities of an ideal gas at constant volume and constant pressure, respectively and R is the gas constant whos value is 8. Kamagra Oral Jelly Wann Einnehmen - Worldwide Shipping, No Prescription Required, FDA Approved Drugs, Fast Delivery Kamagra kopen. Q = nCΔT The value of the heat capacity depends on whether the heat is added at constant volume, constant pressure, etc. the pressure–volume product, rather than energy per temperature increment per particle. Critical Point Data of Various Substances. Cp = ˆ @E @T! p +p ˆ @V @T! p So, Cp = 3 2 Nk+p @ @T (NkT=p)p = 3 2 Thermodynamics of ideal gases An ideal gas is a nice laboratory for understanding the thermodynamics of a uid with a non-trivial equation of state. Use… a. Show that the process is polytropic and find the molar heat capacity of the gas in the process. Its unit is J mol?1 K?1. Process simulators typically report the specific heat at constant pressure (Cp) in the stream summary and this is often used to calculate Cp/Cv using the relationship Cp - Cv = R. 3144126 N-m/mole-K . Molar Specific Heats of Gases The molar specific heats of ideal monoatomic gases are: For diatomic molecules, two rotational degrees of freedom are added, corresponding to the rotation about two perpendicular axes through the center of the molecule. This expression applies to any ideal gas. 00 atm. We began this discussion by noting that for an ideal monatomic gas, the average internal energy is (3/2)T. 314 J/mol K). In this section we shall recapitulate the conventional thermodynamics of an ideal gas with constant heat capacity. Physics for Scientists and Engineers, Volume 1, Chapters 1-22 (8th Edition) Edit edition. dU = dQ - PdV, where U is the internal energy of the system, P is the pressure, V is the molar volume, and Q is the heat transferred to the gas by the surroundings. Nov 24, 2018 · An ideal gas has a molar heat capacity C v at constant volume. 
Show that an ideal gas consisting of such molecules has the following properties: (a) its total internal energy is fnRT /2, (b) its molar specific heat at constant volume is fR /2, (c) its molar specific heat at constant pressure is ( f + 2) R /2, and (d) its specific heat ratio is γ = C P /C V = ( f + 2)/ f . ( where H is the enthalpy ) Cv = dU/dt ( where U is the total internal energy) Now,( consider to be the change or delta) H = U + P V H = U + nRT Cp T This is a special relationship between c v and c P for an ideal gas. Cv = ˆ @E @T! v = 3 2 Nk To calculate Cp, we make use of the ideal gas law in the form pV = NkT. 4 bar and 1300K. P- pressure of the gas, V- Volume of the gas, T- Temperature of the gas, n- number of moles of the substance present on the gas and R- Gas constant. The molar heat capacity at constant pressure (C P) is the quantity of heat required to raise the temperature of 1 mole of the gas by 1 K if the pressure of the gas remains constant. 5- 6. The Specific-Heat Capacity, C, is defined as the amount of heat required to raise the temperature by 1K per mole or per kg. 3144626 J/(mol·K). C8, molar heat capacity (molar specific heat) along a saturation curve. Energy of a diatomic molecule at high temperature is equal to 7/2RT This means that for a gas each degree of freedom contributes ½ RT to the internal energy on a molar basis (R is the ideal gas constant) An atom of a monoatomic gas can move in three independent directions so the gas has three degrees of freedom due to its translational motion. of moles of the gas. Molar heat capacity is specific heat capacity per unit mass. 50: He: 12. functions for the molar specifi c enthalpy, internal energy, entr opy, specific heat at constant volume, and the specific heat at constant pres sure for twelve chemical species of the carbon-hydrogen-oxygen-nitrogen system. To test the impact of using real gas specific heat ratio instead of ideal gas specific heat ratio on PRV sizing, the critical mass flux based on the real gas specific heat ratio can be written as; 𝐺=√ ∗𝑃1𝜌1 √(2 ∗+1) ∗+1 ∗−1 (8) Eq. di atomic gas. The EOS for 1mole of an ideal gas is, PV= RT. Molar Specific Heat of an Ideal Gas Molar specific heat is defined in this article which is the amount of heat required to raise the temperature of one mole of any material by 1K 1 K (or 1∘C 1 ∘ C). One mole of an ideal gas at standard conditions occupies 22. Q must be=2. We will define these as molar specific heats because we usually do gas calculations using moles instead of mass. Its value for monatomic ideal gas is 3R/2 and the value for diatomic ideal gas is 5R/2. Press. It is used in many fundamental equations, such as the ideal gas law. This results is known as the Dulong-Petit law, which can be understood by applying The molar specific heat of gases • Processes A and B have the same Δ T and the same Δ E th, but they require different amounts of heat. Now you have the molar specific heat capacities of an ideal gas. 4 : The Polytropic Process Ignoring the vibrational degrees of freedom, the ratio of molar specific heat of a diatomic ideal gas to that of a monatomic ideal gas at constant pressure is: {eq}A. Q: The molar specific heat capacity of all monoatomic gases is same . 2 J of heat be added to a particular ideal gas. In addition, the amount of substance \(n_{gas}\) can be expressed by the ratio of the Dec 23, 2009 · Specific Heat capacity is the heat required to raise the temperature of a unit mass to one degree. e. 
The goal of this problem is to find the temperature and pressure of the gas after 16. Molar Heat Capacity of Solid Elements. If we take 1 mole of gas in the barrel, the corresponding specific heat capacity is called Gram molar specific heat capacity at constant volume. Saturation Temperature / Pressure Table & Psychrometric Chart $\begingroup$ @KyleKanos It is specific heat of the given process $\endgroup$ – evil999man Apr 18 '14 at 14:40 | show 2 more comments 2 Answers 2 For an ideal gas, why is the specific heat capacity at constant volume lower than the specific heat capacity at constant pressure? Stack Exchange Network Stack Exchange network consists of 176 Q&A communities including Stack Overflow , the largest, most trusted online community for developers to learn, share their knowledge, and build their The Molar Specific Heat for an Ideal Gas at Constant Pressure As an ideal gas expands its pressure will tend to drop along the green line shown in the diagram. 00 x 10^5 Pa and temperature 300 K. Want to read all 39 pages? So the heat capacity at constant volume for any monatomic ideal gas is just three halves nR, and if you wanted the molar heat capacity remember that's just divide by an extra mole here so everything gets divided by moles everywhere divided by moles, that just cancels this out, and the molar heat capacity at constant volume is just three halves R. The Dec 29, 2018 · The molar specific heat at constant pressure of an ideal gas is (7/2)R. 9J/mol·K . Generally, we write the heat capacity as a molar heat capacity (where n is the number of moles) and find that for constant pressure Q = C P nΔT and C P = (5/2)R, and for constant volume Q = C V nΔT and C V = (3/2)R. Let us re-write the formula for the specific molar isochoric heat capacity: C V n = z / 2 * R. b) Calculate the mass of the air in the cylinder. if the weight of the gas is one gram, then it is called specific heat. The ratio of specific heat at constant pressure to that at constant volume is:- - 8257326 The product of mass-specific heat capacity and molar mass equals the so-called molar heat capacity \(C_{m,v}\), whereby the molar heat capacity is only dependent on the degrees of freedom \(f\) and the molar gas constant \(R_m\) (\(C_{m,v}=\frac{f}{2}R_m\)). A. The heat capacity specifies the heat needed to raise a certain amount of a substance by 1 K. 19-11 (a) The temperature of an ideal gas is raised from Tto T!" Tin a constant-pressure process. The molar mass of an ideal gas can be determined using yet another derivation of the Ideal Gas Law: [latex]PV=nRT[/latex]. temperature difference =1. Apr 25, 2007 · When 117 J of energy is supplied as heat to 2. For air at T = 300 K, c P = 1. They are usually expressed in a form : c p = a + KT + K 1 T2 + K 2 T3(8. For example, monatomic gases and diatomic gases at See full list on scienceabc. 00-mo Properties of Various Ideal Gases (at 300 K) Gas: Formula: Molar Mass: Gas constant: Specific Heat at Const. Q=1. We can write n, number of moles, as follows: [latex]n=\frac{m}{M}[/latex] where m is the mass of the gas, and M is the molar mass. are molar specific heat capacities. 3144598 joules per kelvin per mole. (a) Is the gas monatomic, diatomic, or polyatomic? Specific Heat Capacities of an Ideal Gas. It is denoted as R sp. 619 1. Heat Capacities of an Ideal Gas For an ideal gas, we can write the average kinetic energy per particle as 1 2 m<v2 >= 3 2 kT: From this, we calculate Cv and Cp for N particles. Take the molar mass of air as 28. 5 J/mol K. 
According to the first law of thermodynamics, Heat Capacities of Solids The metals listed in Table 18-1 of Tipler-Mosca have approximately equal molar specific heats of about c0 = 3R = 24. The gas constant (symbol R) is also called the molar or universal constant. Vol. , air) and we thus examine the entropy relations for ideal gas behavior. 3R/2 May 03, 2020 · Specific heat capacity of water is 1 cal g-1 K-1 or 4. The Adiabatic Process of an Ideal Gas. Cr. 3/2R temp diff. The SI unit is J kg −1 K −1. Therefore, the ratio between Cp and Cv is the specific heat ratio Show that molar specific heat capacity for such a process is given by . (c) What is the work done by the gas during this process? Specific heat capacity at constant volume is defined as the amount of heat required to raise the temperature of 1 g of the gas through 1º C keeping volume of the gas constant. The coefficients can be found in tables. 00-mo Let C P and C V denote the molar specific heat of an ideal gas at constant pressure and at constant volume, respectively. what is the molar specific heat of the mixture at constant volume ? Sol: The value of γ of a mixture is given by $\\large \\frac{n_1 + n_2}{\\gamma -1} = \\frac{n_1}{\\gamma_1 -1} + \\frac{n_2}{\\gamma_2 -1} $ … Continue reading "One mole of a monoatomic ideal gas is mixed with one mole of a diatomic The molar specific heat at constant pressure of an ideal gas is (7/2)R. Neglect potential energy. 2 show. 10) C V = d 2 R, where d is the number of degrees of freedom of a molecule in the system. Nov 05, 2018 · Definition of Specific and Molar Heat Capacity. 326 1. This gives C p – C v = R = 8. 005 kJ/kg K and 0. This result is due to the Scottish physicist James Clerk Maxwell (1831−1871), whose name will appear several more times in this book. for an ideal gas. 3 mL at 133. V • If we heat up a gas by 1°C at constant pressure, it will expand and do work, so we must supply more heat (to do this work) than if it is heated by 1°C when kept at constant volume. 3 where, in this equation, CP and CV are the molar heat capacities of an ideal gas. Identify the high-temperature molar specific heat at constant volume for a triatomic ideal gas of (c) linear molecules and (d) nonlinear molecules. Cp - heat capacity at constant pressure Cv - heat capacity at constant volume Cp = dH/dt. Show that C P - C V = R. They both have the same units, and whenever we use Cp or Cv in an equation (for example when calculating entropy change with changing temperature) we always refer to it in terms of R (Cp=5/2R, Cv=3/2R). A real gas has a specific heat close to but a little bit higher than that of the corresponding ideal gas with %Using the ideal gas law, we can write & * Substituting this and into the expression for the first law gives # * * # This expression applies to any ideal gas and shows that the molar specific heat at constant pressure is greater than the molar specific heat at constant volume by the amount * 4 In the preceding chapter, we found the molar heat capacity of an ideal gas under constant volume to be (3. The specific heat ratio, (or ), is a function of only and is greater than unity. R is the gas constant also called the ideal, molar, or universal gas constant is a physical constant of proportionality of the ideal gas equation. 18) where a, b, K, K 1 Molar Specific Heat at Constant Volume . 67 3 /2 P V C R CR (a) Find for the gas molecules. Heat Capacity of Ideal Gases. cribe the behavior of real gases only at very low pressures. 
The specific gas constant is a version of the ideal gas constant in mass form instead of molar form. The functions for oxygen and nitrogen were then used to generate ideal gas functions for air, including func tions for Jun 14, 2014 · Specific heat at constant pressure (Cp) of pure gases is readily available from a wide source available in physics, chemistry or thermodynamics textbooks and even on the internet as a free resource. The value of R is 8. See full list on tec-science. Its value for monatomic ideal gas is 5R/2 and the value for diatomic ideal gas is 7R/2. It is denoted by C P. Important: The heat capacity depends on whether the heat is added at constant The first law of thermodynamics states that the change in internal energy of a system equals the net heat transfer into the system minus the net work done by the system. c, c, velocity of light; also a constant in an equa-tion for a PVT isotherm. . If the gas has a specific heat at constant volume of C V (j/(o K mole)), then we may set dq = C V dT. Ideal Gas Heat Capacity of Methane. 5 kPa; and N 2,50 kPa. The specific heat is the amount of heat necessary to change the temperature of 1. As a result, its volume changes from 46. c) Suppose the piston is held fixed. 19. unit is J K-1 mol-1. Eq. The speed of sound method for determining heat capacity uses the translational and rotational vibrational potential and kinetic energy of the gases on their speed. That means we can use either of the two equations for ΔS wiggle . Heat is added and work is done in lifting the loaded pis-ton. The mixture is then heated to 735 o C. E. (b) Find the molar mass of the gas and identity the gas, (Him,- Il's one of H2, He, H:O, N2, C02, SO:) 5, When 20. where T 0, p 0, and α are constants. Skip to Content. Thus, C p = `5/2"R"` Specific heat capacity of Diatomic gas: The molecules of a monatomic gas have 5 degrees of freedom, 3 translational, and 2 2 50 Moles Of An Ideal Gas With Cv M 3r 2 Under G Work And Heat Ncert Xi Physics Chap 13 4 Molar Specific Heat Of Water Cp Cv Kinetic Theory Steam Tables 4-9 Mean Molar Heat Capacity CV, m of Gases at a Constant Volume V in the Temperature Range Between 0 °C and t 4-10 Specific Enhalpy h of Gases 4-11 Molar Enhalpy Hm of Gases Van der Waals equation calculator uses Van der Waals equation=([R]*Temperature/(Molar Volume-Gas constant b))-(Gas constant a/Molar Volume^2) to calculate the Van der Waals equation, The Van der Waals equation is a thermodynamic equation of state based on the theory that fluids are composed of particles with non-zero volumes, and subject to a (not necessarily pairwise) inter-particle For an Ideal Gas. The specific enthalpy is referenced to the elements having zero enthalpy at 25°C. 5. Calculate the apparent molar mass, the apparent gas constant, the constant-volume specific heat, and the specific heat ratio at 300 K for the mixture. In thermodyna Physics for Scientists and Engineers, Volume 1, Chapters 1-22 (8th Edition) Edit edition. Because heat capacity scales with the amount of substance, it is often more appropriate to use the specific heat capacity (takes into account mass) or the molar heat capacity (takes into account number of moles). Molar Heat Capacities, Gases Data at 15°C and 1 atmosphere. The idea of equipartition leads to an estimate of the molar heat capacity of solid elements at ordinary temperatures. 
Molar specific heat capacity (C v):-If the volume of the gas is maintained during the heat transfer, then the corresponding molar specific heat capacity is called molar specific heat capacity at constant volume (C v). Ratio of Molar Specific Heats • We can also define the ratio of molar specific heats • Theoretical values of C V , C P , and γ are in excellent agreement for monatomic gases • But they are in serious disagreement with the values for more complex molecules – Not surprising since the analysis was for monatomic gases 5 /2 1. This expression is applicable to real gases as the data in Table 21. Table 3. 6, 7. Develop a simple computer version of the gas tables (Table C. At constant pressure, heat going into a system can both do work and increase internal energy and typically does both. 718 kJ/kg K respectively. Constant Pressure Specific Heat The molar specific heat at constant pressure is defined by Using the first law of thermodynamics for a constant pressure process this can be put in the form From the ideal gas law (PV=nRT) under constant pressure conditions it can be seen that Since the constant volume specific heat is it follows that . If the ratio of specific heat of a gas at constant pressure to that at constant volume is γ, the change in internal energy of a mass of gas, when the volume changes from V to 2V constant pressure p, is (1) R / (γ − 1) (2) pV (3) p V / (γ − 1) (4) γ p V / (γ − 1) In a constant pressure process, the work done on the gas is: W = -P V 19-8 THE MOLAR SPECIFIC HEATS OF AN IDEAL GAS 521 PART 2 HALLIDAY REVISED Fig. 50: CO: 20. According to the first law of thermodynamics, for constant volume process with a monatomic ideal gas the molar specific heat will be: C v = 3/2R = 12. 2 sufficient to make this determination? Figure P20. Heat capacity (Specific) of gases is defined as the amount of heat required to raise the temperature of one gram gases by unit degree but per mole of gas is called molar heat capacity or simply heat capacity. 8 JK−1mol−1 J K − 1 mol − 1 for monatomic ideal gas The ΔE int term contributes 1. 7 psia (1 atm)). Let us now consider the molar specific heat at constant pressure of an ideal gas. T is the absolute temperature. Find the molar heat capacity of this gas as a function of its volume 'V', if the gas undergoes the process T = T o e a v (where T o and α are constants). mol-1) is just the molar heat capacity for ideal gases. The specific heat ratio is also a temperature dependent property. For a system consisting of a single pure substance, the only kind of work it can do is atmospheric work, and so the first law reduces to dU = d′Q − P dV. Its S. 1 shows the molar heat capacities of some dilute ideal gases at room temperature. Molar Volume Formula. May 01, 2013 · For a pure compound, the heat capacity ratio (k) is defined as the ratio of molar heat capacity at constant pressure (C p) to molar heat capacity at constant volume (C y): For an ideal gas,; therefore, Equation 3 can be written as: Where R is the universal gas constant and is equal to 8. The specific heat c is a property of the substance; its SI unit is J/(kg⋅K) or J/(kg⋅C). The functions for oxygen and nitrogen were then used to generate ideal gas functions for air, including func tions for The symbol for the Universal Gas Constant is Ru= 8. Molar heat capacity for an ideal, monatomic gas is given by: C v = 3/2 R (C v for a diatomic gas is 5/2 R) May 28, 2019 · Specific Gas Constant. com Related Topics . 2 cm3 while the pressure remains constant at 1. 
Therefore, the amount of heat required to raise the temperature of one mole of an Thermodynamics of ideal gases An ideal gas is a nice laboratory for understanding the thermodynamics of a uid with a non-trivial equation of state. 00 atrn. The constant-pressure heat capacity for any gas would exceed this by an extra factor of R (see Mayer's relation, above). Q: One mole of a monoatomic ideal gas is mixed with one mole of a diatomic ideal gas . if the quantity of the substance is one gm, then it is called specific heat. If the molar specific heat is measured at constant volume, it is called molar specific heat at constant volume denoted by C v C v. com Molar Specific Heat at Constant Pressure If Q amount of heat is added to n mole of ideal gas to increase the temperature by keeping the pressure constant then the molar specific heat, ∆? =? ?? ∆? The change in internal energy of the system, The Work done at constant pressure, Oct 10, 2018 · Molar Heat Capacity Key Takeaways . 7 torr? The molar specific heat at constant volume C v is. Specific heat capacity of a gas may have any value between ? ∞ and + ∞ depending upon the way in which heat energy is given. 40 Heat Capacity of an Ideal Gas. Also, the ratio of c P and c v is called the specific heat ratio, k = c P /c v. β = isothermal compressibility. SHOW THAT C P – C V = R. . We define: Ratio of Molar Specific Heats • We can also define the ratio of molar specific heats • Theoretical values of C V , C P , and γ are in excellent agreement for monatomic gases • But they are in serious disagreement with the values for more complex molecules – Not surprising since the analysis was for monatomic gases 5 /2 1. May 22, 2009 · mono atomic gas. So, for an ideal gas, if you know any one of the 4 forms of the heat capacity, molar or specific C P or C V, you can always calculate the other three using the equations on this page. 812 12. 7: 2. For specific heat: Cs = q/m x change in temperature Where: Cs = specific heat capacity (JK^-1kg^-1) m = mass (kg) For molar heat: It is the ratio of two specific heat capacities, Cp and Cv is given by: The Heat Capacity at Constant Pressure (Cp)/ Heat capacity at Constant Volume(Cv) The isentropic expansion factor is another name for heat capacity ratio that is also denoted for an ideal gas by γ (gamma). i) If the quantity of the gas present is 0. K-1. \frac{3}{5 Sep 22, 2011 · As a result the student finds that the volume of the gas changes from 50 cm3 to 150 cm3 while the pressure remains constant at 101. Dec 29,2020 - One mole of a monoatomic ideal gas is mixed with one mole of a diatomic ideal gas. 67 It turns out that the enthalpies of formation are zero for elements in there naturally occurring state at the reference state conditions. The specific heat (= specific heat capacity) at constant pressure and constant volume processes, and the ratio of specific heats and individual gas constants - R - for some commonly used " ideal gases ", are in the table below (approximate values at 68oF (20oC) and 14. For a gas, the molar heat capacity C is the heat required to increase the temperature of 1 mole of gas by 1 K. The gas specific gravity calculation does not check for unreasonable inputs. 5/2R. because. Sep 30, 2017 · Measure of heat of whole substance is termed as temperature. to get the specific heat capacity at constant volume: If you repeat this for the specific heat capacity at constant pressure, you get. 
In general, if the pressure is kept constant then the volume changes, and so the gas does work on its environment. Jan 25, 2009 · Molar Specific Heat Question? Let 25. Source: Specific heat values are obtained primarily from the property routines prepared by The National Institute of Standards and Technology (NIST), Gaithersburg, MD. 67 Ne 0. 350m^3. What is the temperature (in °C ) of 0. 0 to 101. The Specific Heat Capacity is measured and reported at constant pressure (Cp) or constant volume (Cv) conditions. The specific heat ratio is a ratio of c p and c v. Assume the mixture is an ideal gas. 16 in Thermodynamic Tables to accompany Modern Engineering Thermodynamics) for an arbitrary mixture of ideal gases with constant specific heats. 0 atm, V =1. Specific heat of ideal gas. An ideal gas experiences an adiabatic compression from p =1. In the following section, we will find how C P and C V are related, for an ideal gas . You've reached the end of your free preview. The value of this constant is 8. You need to kno Molar specific heat capacity of a gas . 9 J was added as heat to a particular ideal gas, the volume of the gas changed from 50. Equations for Ideal Gas Law Calculator (CRC, 1983) R u = 8. The heat capacities of real gases are somewhat higher than those predicted by the expressions of C V C V and C p C p given in Equation 3. 00ºC. 5 kPa; O 2,37. Q,heat energy =n . Ideal Gas Tables. Molar Specific Heats. Resultant mixture. This would be expected to give C V = 5/2 R, which is borne out in examples like nitrogen and oxygen. Also, C p - C v = R Therefore, C p = (f/2) R + R =R (1 + f/2) Now, ratio of specific heats γ is given as: Consider 'n' moles of an ideal gas contained in a cylinder fitted with a frictionless piston. (b) The process on a p-V diagram. temp diff. where Cr is the specific heat of the resultant mixture A gas obeys P (V-b) = RT. int = N. Please enter positive values. The molar specific heat capacity of a gas at constant volume (C v) is the amount of heat required to raise the temperature of 1 mol of the gas by 1 °C at the constant volume. (a) Adiabatic Monatomic HRW 81P (5th ed. C v and C p denote the molar specific heat capacities of a gas at constant volume and constant pressure, respectively. This indicates that vibrational motion in polyatomic molecules is significant, even at room temperature. 31 J/mol ? K). [The molar specific heat of a mono atomic ideal gas at constant volume (cp) is 3R/2 and in accordance with Meyer's relation, cp = cv + R]. ). At constant volume, the molar heat capacity C is represented by C V . It can be derived that the molar specific heat at In an ideal gas mixture the partial pressures of the component gases are as follows: CO 2,12. Hydrogen (H 2) gas is compressed from 4. 4R/2 b. 794 8. 3 kPa. (b) Find the specific heat of the gas. I. The molar specific heat of mixture at const volume - 6830889 For one mole, the molar specific heat at constant volume . 9 cal/(mol∙K) starts at 300 K and is heated at constant pressure to 320 K, then cooled at constant volume to its original temperature. 4, if we can develop a more general expression for the This is a special relationship between c v and c P for an ideal gas. specific heat means heat conceived by unit mass. Another specific heat, at constant volume, can be determined for a substance. Only the ideal gas Cp Molecular Model of Ideal Gas leads to using the equipartition theorem. 
Find the molar heat capacity of this gas as a function of its volume V, if the gas undergoes the following process: (a) T = T 0 e α v; (b) p = p 0 e α v. The dividers between the chambers are removed and the three gases are allowed to mix. Defining statement: dQ = nC dT. For an ideal gas, U depends on the temperature only, U = 3RT/2. Calculate the mixture cp value and divide by the mixture cv value to get the mixture k value. Average kinetic energy of a diatomic molecule at low temperature = 5/2kT. The ratio of specific heat at constant pressure to that at constant volume is The molar specific heat of a mono atomic ideal gas at constant pressure (cp) is 5R/2 where R is the universal gas constant. , then dV = 0, and, from 2, dq = du; i. 18 J g-1 K-1. A 2. 0831 bar dm3 mol-1 K-1). In statistical thermodynamics [176,139], it is derived that each molecular degree of freedom contributes to the molar heat capacity (or specific heat) of an ideal gas, where is the ideal gas constant. An ideal gas has a molar heat capacity C v at constant volume. • Recall that the internal energy of a mole of gas is . 67. When the temperature increases by 100 K, the change in molar specific enthalpy is _____ J/mol. 4 Entropy Changes in an Ideal Gas [VW, S & B: 6. 314 J / (mol * K). For such gases, C V C V is a function of temperature (Figure 2. A certain molecule has f degrees of freedom. Molar gas constant (R), fundamental physical constant arising in the formulation of the general gas law. The formula of the molar volume is expressed as \(V_{m} = \frac{Molar\ mass}{Density}\) Where V m is the volume of the substance. For an ideal gas, C v (monatomic gas) = `"dE"/"dT" = 3/2"RT"` For an ideal gas, C p - C v = R. Subscribe to o where and have been used to denote the specific heats for one kmol of gas and is the universal gas constant. 00-mol sample of the gas always starts at pressure 1. 4 liters. Correct answer is '2'. Since there are many assumptions that are made in the derivation of this value, it is considered a property of ideal gasses. Specific Heats of a Mole of Ideal Gas: C. 0 $\mathrm{kJ}$ of thermal energy is supplied to the gas. Equations for specific heat capacities of ideal gases Since both u and h are functions of temperature, the equations to c p and c v must also be functions of temperature. 9g/mol and assume Cv=5R/2. The value of 3/2 R is derived from the average kinetic energy of an ideal, monatomic gas. The molar specific heat at constant pressure of an ideal gas is (7/2)R. c2, radiation constant hc/k. The SI unit of molar heat capacity is the joule, so molar heat capacity is expressed in terms of J/mol·K. 1] Many aerospace applications involve flow of gases (e. where Cr is the specific heat of the resultant mixture Ideal gas constant. 15 K and a pressure equal to 1. To keep the pressure constant, an amount of heat (ΔQ) has to be added to the system, as indicated by the temperature rise in the diagram. Cp implies that the pressure is constant. The molar specific heat of a gas at constant pressure (Cpis the amount of heat required to raise the temperature of 1 mol of the gas by 1C at the constant pressure. Lee-Kesler Compressibility Chart. Specific Heat for Ideal Monoatomic Gases . Properties of Various Ideal Gases (at 300 K) Specific Heat Capacities of Air. q is not a state function and depend upon the path followed, therefore C is also not a state function. 8: Molar Specific Heat at Constant Volume where CV is a constant called the molar specific heat at constant volume. 
667 (for all mono-atomic gases). 49 The data consist of the molar mass, specific heat, specific enthalpy, and specific entropy at standard pressure as a function of temperature. a) Find the specfic heat of air at constant volume in units of J/kgC. ii) Diatomic molecule. The Molar volume is directly proportional to molar mass and inversely proportional to density. 00 moles of an ideal gas at constant pressure, the temperature rises by 2. molar specific heat of ideal gas
ayn, rdl, ipf, sp, esyj, ywy, gfr, uud4, syo, j7q, eter, kbj, 03, q24nj, 7rb, | CommonCrawl |
Towards the restoration of the Mesoamerican Biological Corridor for large mammals in Panama: comparing multi-species occupancy to movement models
Ninon F. V. Meyer ORCID: orcid.org/0000-0002-0775-09261,2,3,
Ricardo Moreno3,4,
Rafael Reyna-Hurtado1,
Johannes Signer2 &
Niko Balkenhol2
Movement Ecology volume 8, Article number: 3 (2020) Cite this article
A Correction to this article was published on 26 May 2020
Habitat fragmentation is a primary driver of wildlife loss, and the establishment of biological corridors is a conservation strategy to mitigate this problem. Identifying areas with high potential functional connectivity typically relies on the assessment of landscape resistance to movement. Many modeling approaches exist to estimate resistance surfaces but to date only a handful of studies compared the outputs resulting from different methods. Moreover, as many species are threatened by fragmentation, effective biodiversity conservation requires that corridors simultaneously meet the needs of multiple species. While many corridor planning initiatives focus on single species, we here used a combination of data types and analytical approaches to identify and compare corridors for several large mammal species within the Panama portion of the Mesoamerican Biological Corridor.
We divided a large mammal assemblage into two groups depending on the species sensitivity to habitat disturbance. We subsequently used cost-distance methods to produce multi-species corridors which were modeled on the basis of (i) occupancy of nine species derived from camera trapping data collected across Panama, and (ii) step selection functions based on GPS telemetry data from white-lipped peccary Tayassu pecari, puma Puma concolor, and ocelot Leopardus pardalis. In addition to different data sources and species groups, we also used different transformation curves to convert occupancy and step-selection results into landscape resistance values.
Corridors modeled differed between sensitive and tolerant species, between the data sets, and between the transformation curves. There were more corridors identified for tolerant species than for sensitive species. For tolerant species, several corridors developed with occupancy data overlapped with corridors produced with step selection functions, but this was not the case for sensitive species.
Our study represents the first comparison of multispecies corridors parametrized with step selection functions versus occupancy models. Given the wide variability in output corridors, our findings underscore the need to consider the ecological requirements of several species. Our results also suggest that occupancy models can be used for estimating connectivity of generalist species. Finally, this effort allowed to identify important corridors within the MBC (i) at a country scale and (ii) for several species simultaneously to accurately inform the local authorities in conservation planning. The approach we present is reproducible in other sites and/or for other species.
To face the deleterious impacts of habitat loss and fragmentation on biodiversity worldwide, conservation efforts have increasingly focused on maintaining and/or restoring functional connectivity among habitat fragments at landscape scales, in particular through the establishment of biological corridors [1, 2]. Biological corridors can have different purposes such as connecting habitat patches within an individual home range, or connecting large habitat areas for seasonal migration. Here we focus on corridors which are specifically designed to facilitate movement and successful dispersal of individuals between populations to increase gene flow and long-term population viability [3, 4].
Many modeling approaches exist to identify areas with high potential functional connectivity, i.e., the degree to which landscapes facilitate or impede the movement of organisms [5]. It is increasingly recognized that an understanding of animal behavior rather than expert opinion alone is of paramount importance to effectively account for environmental effects on functional connectivity [3, 6]. However, to date relatively few studies compared the results obtained with different data sources and methods for assessing connectivity (but see [7, 8]), especially in tropical forests. A common approach to model biological corridors requires to first estimate a resistance surface, i.e., a spatial layer that reflects the degree to which a location in the landscape facilitates or impedes movement of a focal species (e.g., high resistance might be assigned to a road [6, 9]). Ideally, resistance should be estimated from actual dispersal data [10], but collecting a sufficiently large sample size of such data is extremely challenging [8, 11]. Genetic data can also be used to infer successful dispersal and reproduction among populations [12], but genetic data do not directly convey how animals move across the landscape, in addition to not always being available for species of conservation concern. Hence, resistance surfaces are often derived from habitat suitability (HS) values [7, 11, 13], which can be estimated empirically using, for example, occurrence information.
Occurrence data can be obtained in many different ways [9], and several recent studies used presence point data from satellite telemetry collars [7, 8, 11, 13, 14]. An important concern with this approach is defining the availability domain (i.e., what is available to the animal [9]). Moreover, presence points collected via telemetry studies likely represent locations from relatively few individuals, hence the sample size is often low. In contrast, camera trap data analyzed in an occupancy modeling framework explicitly estimate non-detection from true absence [15], and the challenge of having to define habitat availability is lessened. This is one reason why models based on camera-trapping data may be superior in estimating resistance than models derived from presence-only data. Moreover, when the survey is robustly designed, the entire population in the area sampled is assumed to be surveyed, including non-collared animals. Although camera trap data is increasingly widespread and available, because it is often easier and cheaper to obtain at a large scale than satellite collar data, their use in estimating functional connectivity has been very scarce (but see [16]).
Several studies showed the ability of occurrence data to predict dispersal habitat and hence to provide meaningful estimates of functional connectivity [7, 13, 14]. However, a major concern is that with occurrence data, the environmental characteristics of the point locations are assessed, rather than the environment connecting the points [9]. This reflects the assumption that animal choose travel routes on the basis of the same factors they use to choose habitat, although presence at a point versus movement between points are different processes that may be driven by different factors [6]. Therefore, connectivity models based on occurrence data may not always adequately reflect movement across the landscape, and have a tendency to underestimate functional connectivity [17]. As a result, it has often been argued that connectivity models and underlying landscape resistance surfaces based on observed movement data would better capture areas facilitating the dispersal of species [6, 8,9,10,11]. Yet, despite considerable advances in technological tools, acquiring sufficient and accurate GPS locations to infer movement under dense tropical forest canopy remains both costly and challenging [18]. Gaining a better understanding of how data types perform in tropical forests is crucial for ensuring that limited resources are efficiently invested in connectivity conservation [8, 19]. For example, if models derived from occupancy/camera traps data capture the movement process as well as models derived from GPS collar data, then time-consuming and costly data collection efforts may not be necessary. However, if occupancy data perform poorly, then the effort for collaring is well justified [8].
The choice of the focal species is another subject of debate in connectivity modeling, and often depends on the availability of data [20]. Many large-scale corridor initiatives focus on a single species (e.g., Yellowstone to Yukon Conservation Initiative for grizzly bear Ursus arctos, Jaguar Corridor Initiative in Latin America, Panthera onca), also referred to as a surrogate species, because it is assumed that the needs of an entire community are addressed by focusing on the requirements of a surrogate [4]. However, the conservation of a single umbrella species, typically a large-bodied carnivore species with extensive area requirements and high mobility, might not necessarily facilitate conservation of more sensitive, less mobile, or smaller species, given that they may have very different ecological and connectivity requirements [21, 22]. As many species are threatened by fragmentation, conservation corridors may more effectively protect regional biodiversity if they are developed to support the movement of multiple species simultaneously and with the same ecological requirements, rather than movement of a single species [23, 24].
In this study, we address the issues of choosing focal species and data type by comparing a set of connectivity scenarios derived from resistance surfaces that were estimated using varying: (1) species, (2) data sources, and (3) procedures to estimate landscape resistance. Our study was focused in the Mesoamerican Biological Corridor (MBC) which is a large-scale conservation corridor extending from Southeastern Mexico to Panama. In spite of substantial financial effort invested since it initiated in the 1990's [25], its effectiveness has been questioned for large terrestrial mammals [26, 27] including in Panama [28, 29]. This is problematic because the Isthmus of Panama is the last and narrowest portion of the MBC which connects Mesoamerica to South America, and has acted as an intercontinental land bridge for a large suite of taxa -including mammals- for millions of years [30]. Promoting functional connectivity by identifying important areas that would facilitate movement and gene flow in mammals across Panama will support ecosystem function and benefit biodiversity in general, because mammals have important functions within ecosystems [31].
We used a) detection-non detection data from camera trapping surveys, and b) empirical movement data from satellite telemetry to develop multi-species connectivity maps for two groups of medium to large-sized terrestrial mammal species that vary in their sensitive to habitat disturbance. Because species may respond differently to landscapes features, we expected the resistance surfaces and resulting connectivity scenarios to not overlap between the two groups of species. However, as previous studies showed that different data types produce resistance surfaces with similar variables and relationships to resistance [8, 32, 33], we predicted that both data types would produce qualitatively similar resistance surfaces within the same group of species.
The s-shaped Isthmus of Panama is approximately 750 km long and 60 km wide at its narrowest part along the Panama Canal in Central Panama (Fig. 1). The MBC portion in Panama is known as the 'Corredor Biológico Mesoamericano del Atlántico Panameño' (CBMAP) because it overlaps with the Atlantic side of the isthmus where most of the forest remains. Panama lies in the moist tropics with a dominant vegetation that is semi-deciduous or evergreen lowland forest, or sub-montane wet forest [34]. Panama has lost 40% of its forest cover since the 1950's mainly for cultivation and cattle pastures [35]. Today, of the 43% of land that remains forested, 44% are under protection corresponding to 22% of the country's land area [35]. Outside protected areas (PA) the country is a mosaic of both old-growth and secondary forest patches surrounded by agriculture, pastures, and human settlements [34].
Land cover in Panama with primary and secondary mature forest (green), disturbed forest (light green), non-forest cover (beige), urban areas (red), and protected areas within the MBC (black lines). Inset: location of Panama in Central America
Focal species
We used data from the nine largest terrestrial mammal species (i.e., > 12 kg) that we divided into two groups according to their sensitivity to habitat disturbance which was evaluated on the basis of expert opinion. All are mostly forest specialists species and are either herbivorous-frugivorous, i.e., Baird's tapir Tapirus bairdii, white-lipped peccary Tayassu pecari, collared peccary Pecari tajacu, white-tailed deer Odocoileus virginianus, Central American red brocket deer Mazama temama; or carnivorous, i.e., jaguar Panthera onca, puma Puma concolor, ocelot Leopardus pardalis; or insectivorous, i.e., giant anteater Myrmecophaga tridactyla. White-lipped peccary, tapir and giant anteater do no longer occur in as many areas in Panama as the other focal species [36,37,38], are highly threatened by habitat loss and hunting for bush meat, and are typically the first to disappear with habitat disturbance. Hence, we included them in the 'sensitive' group. We categorized the other six species in the 'tolerant' group. While they are also poached for bush meat or killed in retaliation of domestic animal depredation [39] they are less sensitive to habitat disturbance, and some of them are quite vagile in fragmented landscapes (i.e., puma, [33], and white-tailed deer [40]).
Animal locations and movement data
We used two data types in our analysis: a) Detection-non detection data were obtained from large-scale camera trapping surveys scattered across Panama (see [41] for details, Additional file 1); b) GPS telemetry data were obtained from white-lipped peccaries (two females and a male), and a puma (male) that we captured between 2016 and 2018 in the Darién forest in eastern Panama. They were fitted with an iridium GPS collar unit (TGW-4570-4) equipped with a CR-2A automatic release mechanism (Telonics, AZ, USA). The white-lipped peccary is a social species that lives in large herds. As the individuals were from different herds, they each represented the movement of an entire group [41]). We also captured and fitted an iridium GPS collar (Vectronic Aerospace GmbH, Germany) to a male ocelot in August 2017 in Soberania National Park (NP) in Central Panama. All procedures followed standard protocols approved by the Ministry of Environment of Panama (permit No. SE/A-104-15), and the Research Ethics Committee of El Colegio de la Frontera Sur, Mexico (CEI-O − 068/16). The GPS collars were programmed to get a fix every hour during 14 months. Due to the lack of signal from the collars after the release date and the rugged terrain, we could not recover the collars to extract all the data stored on-board. All individuals showed home ranging behavior when using the semi-variance approach developed by Fleming et al. [42], (see [41] for the white-lipped peccary). We used data from white-lipped peccary as a proxy for the sensitive group, and data from puma and ocelot as a proxy for the tolerant group.
Environmental variables
We tested the influence of six environmental covariates on the probability of occupancy and movement of the focal species (Table 1). Variables were chosen on the basis of literature and opinion of experts [29]. We used 30 m as the spatial grain size for all variables, and generated raster layers in ArcMap (v.10.3.1 ESRI, California). All layers were obtained from the Ministry of Environment of Panama (MiAmbiente), except for forest loss and forest cover for which we used freely available high resolution global maps [43]. Since animals may respond to different environmental features at different scales, using a single scale for all the variable may result in inaccurate estimates of landscape resistance [10, 33]. Therefore, we first determined the most appropriate scale for three variables (i.e., village, loss and forest cover) via a univariate analysis, to further combine the results in a multi-scale model of habitat suitability (Table 1; Additional file 2). We centered and scaled the covariates [44], and we performed a Spearman correlation test to avoid multicollinearity (defined here as rho > |0.6|).
Table 1 Environmental variables tested in the habitat suitability models
To design multi-species connectivity scenarios and identify wildlife corridors for each of the two groups of species, we first developed habitat suitability models by estimating the probability of occupancy using camera trapping data, and movement suitability models by calculating the probability of movement through step selection functions using GPS telemetry data. We then transformed the habitat suitability and suitability for movement values into resistance values. The resulting resistances surfaces were used as input for mapping functional connectivity across the MBC in Panama. The workflow we followed is presented in Fig. 2.
Workflow chart to estimate landscape resistance from each data type and create multi-species connectivity scenarios for two groups of species. A suite of suitability models were developped by integrating environmental variables and by using (1) occupancy modeling or (2) step selection functions (SSF). Each suitability model was then predicted across our study area which was the MBC in Panama. Three negative functions (one linear and two exponential) were used to transform the habitat suitability (from occupancy) or suitability for movement (from SSF) to landscape resistance. Each of the 18 landscape resistance surfaces was subsequently used as input for connectivity modeling. Diagram adapted from [8]
Modeling habitat suitability using occupancy and movement data
We conducted a two-step conditional logistic regression to quantify selection for each habitat attribute at the appropriate scale [7, 8, 10]. In a conditional logistic regression, used habitat is compared to available habitat, conditioned on the current position. We estimated the probability of occupancy for each of the nine focal species from detection-non detection data obtained via camera trapping, and by using the multi-species hierarchical occupancy model in a Bayesian framework that was described by [45]. This model estimates species-specific parameters as random effects of a community level distribution which is particularly advantageous for rare species such as jaguar, giant anteater, tapir and white-lipped peccary (see [29] for details). The occupancy model took the form:
$$ \mathrm{logit}\ \left({\Psi}_{\mathrm{i}\mathrm{j}}\right)={\upalpha}_i+{\upalpha}_{\mathrm{i}1}\ast \mathrm{V}1+{\upalpha}_{\mathrm{i}2}\ast \mathrm{V}2+\dots +{\upalpha}_{\mathrm{i}\mathrm{n}}\ast \mathrm{V}\mathrm{n} $$
where Ψij was the probability of occupancy of species i at camera site j, αi was the intercept of the model specific to species i, and αin was the coefficient of variable Vn specific to the species i.
We also developed step selection functions (SSF) to estimate suitability for movement from the GPS telemetry data set. A SSF compares the covariate values at the end point of observed steps (i.e., steps that the animal actually made) with covariate values at the end of control steps (steps that were deemed available to the animal but unused). A step was defined as the straight-line path between two consecutive GPS fixes, here with a sampling rate of 1 hour. Using the R Package 'amt' [46], landscape feature availability was estimated by generating 100 random steps (calculated using a gamma distribution, see [46]) which were compared with the observed ones. Observed and random steps shared the same starting point, but differed in their length and angular deviation.
As environmental variables may confer different levels of resistance to different types of behavior (i.e., traveling, stationary), failing to consider an animal's behavioral state may be insufficient in determining habitat selection during dispersal, and hence may result in misidentification of wildlife corridors [8, 11]. When no dispersal data is available, habitat selection measured during directed movement states (or traveling) may provide a reliable proxy to infer functional connectivity [8, 11]. We therefore developed SSF to quantify resource selection for a combined model which included all available data, SSF-All, and for a traveling model which included only traveling data, SSF-Travel. To focus on traveling behavior, we excluded steps < 100 m, < 150 m, and < 200 m for the ocelot, white-lipped peccary, and puma respectively. Turning angle is also sometimes used to separate movement behavior, i.e. low turning angles steps are classified as travel behavior (e.g., [33, 47]). However, when following the groups of white-lipped peccaries for several days, we noticed that even when they were moving fast (hence traveling), they sometimes took very sharp angle (> 90°). Tapirs are also known to walk in a zigzagging manner [48, 49]. Relying on turning angles to determine the movement mode could therefore be misleading for some of our focal species, so we decided to not take it into account.
Since individuals might respond to the environmental covariates differently, it is common practice to use either mixed effects models with individuals as random terms [50], or to average individual coefficients for obtaining coefficients at the population level [51]. However, with high individual-level differences and relatively small sample size, this approach could lead to overgeneralization and spatial biases [52]. Therefore, we developed a SSF for each individual [52] by testing a set of candidate models that included additive uncorrelated covariates as main effects. The best supported model was selected using AICc [53]. We used the coefficients of the best supported SSF models to create surfaces of suitability for movement along the CBMAP for each individual, and for each behavior ('All' and 'Travel'). As in Keeley et al. [7] the value of suitability for movement of each cell was calculated as:
$$ \mathrm{S}={\upbeta}_1\ast {\mathrm{V}}_1+{\upbeta}_2\ast {\mathrm{V}}_2+\dots +{\upbeta}_{\mathrm{n}}\ast {\mathrm{V}}_{\mathrm{n}} $$
where S was the suitability for movement and βi was the coefficient for the variable Vi.
We rescaled all the movement suitability maps from SSF to a range of 0–1 with the equation:
$$ f(x)=\frac{\left(x-\min \right)}{\max -\min } $$
where x was the value of suitability for movement of a grid cell, and min and max were the minimum and maximum values of suitability for movement of the suitability for movement surface. Values near 1 indicated the most suitable conditions for movement, while values near 0 indicated the least suitable habitat for movement.
Estimating the resistance
It is generally accepted that resistance is the negative inverse of habitat suitability (Fig. 2 [4, 9, 54]). It is also increasingly recognized that during dispersal or prospecting movements, animals may move more readily through lower suitable habitat such that resistance increases only moderately as suitability decreases from its maximum value, and then increases dramatically at lower suitability values [7, 13, 54, 55]. Hence, we tested three transformations to translate habitat suitability into resistance: a negative linear transformation,
$$ \mathrm{R}=100-\left(100\ast \mathrm{HS}\right) $$
and two negative exponential transformations which assigned high resistance values to the lowest habitat suitability values, following the equation developed by Trainor et al. [54]:
$$ R=100-99\frac{\left(1-{e}^{\left(-c\ast HS\right)}\right)}{1-{e}^{-c}} $$
where R was the resistance, HS was the habitat suitability (i.e., the occupancy probability ψ, or the probability of movement S as derived from SSF), and the factor c (3 or 8) determined the shape of the curve (Additional file 3).
Using this transformation, we developed a) species-specific resistance maps based on the occupancy output for each of the nine focal species, and b) individual-specific resistance maps with the best supported SSF models specific to each individual and each movement mode, 'All' and 'Travel'.
From single to multi-species connectivity scenarios
In order to quantify resistance for a combination of species and identify the multi-species connectivity scenarios, we standardized the unscaled resistance surfaces generated for each species (using occupancy), and each individual (using SSF). We subsequently averaged the standardized scores into combinations of sensitive and tolerant species with the raster calculator in ArcGIS (v.10.3.1 ESRI, Redlands, California). We generated 18 resistance surfaces (two groups of species, three types of data, three transformations) that ranged from 1 (lower cost) to 1000 (higher cost).
At this stage, we assigned roads and urban areas a resistance value of 85 and 95% of the maximum resistance estimated for the tolerant and sensitive groups respectively because they present a major barrier for the movement of our focal species [56]. We used the resulting resistance surfaces as input to build functional connectivity networks among the core areas using least-cost path (LCP) and circuit theory methods. The LCP approach estimates the shortest distance between target core areas while accounting for resistance to movement [57]. Circuit-theory connectivity is based on random walk and uses the principles of an electric circuit where a current (animal) flows through nodes (habitat patches or cores) connected by resistors (landscape matrix) with voltage (probability of animal travel) and resistance (permeability of matrix). The resulting product is a prediction of 'current density' or a probability of movement across each pixel of the landscape [58]. We implemented the analysis in Linkage Mapper (v2.0.0 in ArcGIS 10.3.1; [59]). We used the PinchPoint Mapper tool and the All-to-one mode to estimate resistance values within least-cost corridors in order to identify and map pinch points (i.e. bottlenecks) within the resulting corridors. Given the relatively large spatial requirement of the study species, we used a cost-weighted distance cutoff of 25,000 to buffer our least-cost path so corridors had a biologically meaningful width of at least 1 km at their bottleneck.
Defining areas important to connect
Linkage Mapper requires to specify the areas between which to estimate functional connectivity and establish corridors. Intuitively, one could contemplate using the protected areas, but because not all protected areas in Panama still have populations of all study species [36], or conversely species populations could occur in non-protected areas, this approach would lead to inaccurate results. Instead, similar to Hofman et al. [56], we used the output of the occupancy analysis to determine habitat concentration area (henceforth core area) defined as areas known to harbor important populations of the focal species [60]. We plotted the probability of occupancy against the proportion of the study area (Additional file 4). We identified the occupancy threshold where the slope was the highest, and used this occupancy value as the threshold to identify areas where occupancy was at least equivalent or higher to that value. The proportion of area which was considered suitable and which we hence sought to connect was larger for the tolerant group (50% of the study area, 8 core areas; ψ = 0.2) than for the sensitive group (area = 40%; 6 core areas; ψ = 0.3). This seems intuitively correct given that sensitive species are not as widely distributed in the study area compared to the more tolerant ones. We cross-checked the output maps of core habitats for the focal species (Additional file 5) with our opinion and previous studies of species distribution model [27, 37, 38].
Animal locations and scale of analysis
We obtained 5315 unique detections of the nine focal species during 43294 camera trap nights. We also acquired 3400 GPS fixes from the sensitive group, and analyzed 3098 observed steps of which 1133 were classified as 'traveling' mode. We received 2682 GPS fixes for the tolerant group, and analyzed 2311 observed steps of which 759 were classified as 'traveling' mode (Additional file 6).
The AICc ranking of the occupancy and movement models showed that the scale of response varied between the two data sources, among individuals, and whether all the data or only the travel data were used (Additional file 7). The best scale for the forest cover varied the most with no clear pattern for its threshold, but the sensitive group tended to respond to forest cover within a larger area (up to 1 km) than by the tolerant group (150 m). Likewise, the scale of forest loss varied substantially (from 150 m to 2 km) with no clear selection pattern. The scale for density of village also tended to vary between individuals and between data type. However, it remained the same within each individual when using 'traveling' and 'all' data, except for the ocelot and a white-lipped peccary. In general, sensitive species responded to anthropogenic variables (i.e., road and village) at a smaller scale than tolerant species, whereas they responded to forest cover at a larger scale than tolerant species.
Occupancy and movement models
Our occupancy model included all variables but forest loss. The sign and intensity of the variables affecting occupancy differed by species (see [29], Additional file 8). Occupancy of all species but puma tended to increase deeper inside the protected areas, especially white-lipped peccary, white-tailed deer and collared peccary. Most species, but in particular the white-lipped peccary, responded positively to forest cover. The relatively small and non-significant coefficients of density of villages and distance to roads reflect their little effects on the occupancy of most species.
The covariates included in the highest-ranking step selection models remained relatively consistent across movement behavior and individuals (Table 2, Additional files 9 and 10). Forest loss was retained in all the best step selection models, and forest cover too with the exception of puma. The sign of the relationship, indicating preference or avoidance, changed for some variables between individuals and data source (Table 3). Although the sign changed little with movement behaviour ('All' versus 'Travel'), its strength varied but not in a consistent manner. In general, when traveling, the strength of selection for forest cover was higher, and the strength of selection for forest loss lower than when pooling all the relocation data (Table 2). The sensitive group had a tendency to roam at higher elevation, while the tolerant group selected areas at lower elevation. All the species remained deeper inside the protected areas expect for puma, and had a tendency to select forest loss. Road had little influence on the animals, as evidenced by the very small coefficient.
Table 2 Best supported step selection models developed for each individual and using two behavioral movement modes (see Additional file 10 for standard error and confidence intervals)
Table 3 Relationship between habitat suitability and six environmental variables for each individual or species. Suitability models were developed with different data types, (a) detection-non detection data analyzed in an occupancy modeling framework (ψ), and movement data analyzed with step selection functions and based on different movement behavior, (b) all data (SSF-All) and (c) traveling data only (SSF-Travel)
Multi-species connectivity scenarios
As habitat suitability models varied among data source and group of species, the resulting multi-species connectivity scenarios were also different (Fig. 3; Additional file 11). Corridor paths were always different between the two groups of species whether SSF or occupancy were used. Corridors of sensitive species usually passed through mountainous areas. When using traveling data, the corridors identified for the tolerant species were wider than corridors of sensitive species, which reflects a lower landscape resistance to movement of tolerant species than sensitive species. There was no notable difference of corridors widths when using the other data.
Multi-species connectivity scenarios developed to connect core areas for large mammals in Panama. Corridors were developed in the Western part of Panama between the Amistad International Park and the Santa Fé NP-Donoso block (left maps), and in Central Panama (right maps), for two distinct groups of species that were considered tolerant (represented by ocelot and puma) or sensitive to habitat disturbance (represented by white-lipped peccary). These connectivity models were derived from resistance surfaces estimated through step selection functions using all the relocation data (green), step selection functions using relocation data during travel movement (blue), and occupancy modeled at the community level for nine mammal species (red), and using the negative exponential transformation curve (c8). Urban areas are black, and main roads are the black and white lines. See Additional file 11 for maps comparing corridors modeled with varying transformation curves, data type and species
In Western Panama, output corridors of tolerant species were more numerous when using SSF-Travel than when using SSF-All and occupancy. Many of the corridors for tolerant species that were developed on the basis of occupancy overlapped with corridors identified with SSF. However, some corridors based on SSF were identified to pass along the Atlantic coast whereas this was not the case when using occupancy. When using SSF-All, corridors were passing through more forested areas while it was not necessarily the case when using the two other types of data. There was no such difference in Central Panama as most corridors overlapped. Corridors developed with SSF-Travel were much larger than the other corridors.
The corridor paths delineated for sensitive species with different data sets differed widely. Corridor based on SSF-Travel passed through forested areas while corridors modeled on the basis of occupancy and SSF-All were more directional. There was a corridor identified for sensitive species when using occupancy data and SSF-Travel data in the northern part of Central Panama and passing through a heavy urban area, which was not identified when using all the GPS data (i.e. SSF-All). In contrast to the tolerant species, corridors developed with SSF-All were wider than the other corridors.
The type of negative transformation (linear and exponential) had little effect on the output corridor paths for tolerant species (regardless of the data type), and on corridor paths that were modeled with occupancy data for sensitive species (Additional file 11). The only notable difference was the larger width of corridors delineated with a c8 transformation because resistance values to movement of species was lower. In contrast, output corridor paths modeled with SSF of sensitive species varied with the different transformation curves, especially when in traveling mode.
We compared multi-species connectivity scenarios across Panama for two groups of mammal species by using large-scale camera trapping data and GPS telemetry movement data, and a set of analytical procedures and transformation curves to estimate resistance surfaces.
Multi-species scenarios
As expected, our results showed that connectivity scenarios differed depending on the focal species used to parameterize the resistance surface, and this regardless of the analytical approach. In the western part of Panama, the tolerant group was predicted to move with higher intensity along the Atlantic coast. In contrast, the path that would better facilitate movement of the sensitive species was predicted to pass through the Cordillera Central of Panama, most likely because areas at higher elevation are more remote and less disturbed by human activities compared to lowland areas near the coast. These findings corroborate our assumption that the Baird's tapir, giant anteater and white-lipped peccary, which are among the most sensitive species, show a different habitat selection pattern, often less riskier than other more generalist species such as wildcats. Specifically, sensitive species were more strongly associated with larger forest cover habitat in mountainous areas, most likely to avoid riskier areas with higher deforestation and human encroachment. Although the core areas we sought to connect differed slightly between the two groups of species, the corridors identified for tolerant species were more numerous, whichever type of data we used to model them. These results indicate a larger suite of possible paths when moving between core areas, and a greater flexibility and adaptability in the matrix. Moreover, the larger width of corridors parametrized with movement data of tolerant species compared with tolerant species', reflects a lower resistance of the matrix to movement of tolerant species.
The different connectivity scenarios are the results of habitat suitability models or models of suitability for movement, and thus reflect a habitat selection and impact of anthropogenic factors which varied among species. The species sometimes displayed contrasting patterns in the selection of habitat characteristics. For instance, the puma selected lowland areas with less forest cover, while it was the opposite for the sensitive species whose selection for forest cover was stronger and had a tendency to remain further inside protected areas. Hence, our results highlight the importance of considering multiple species with different ecological requirements to effectively estimate functional connectivity, and raise the issue of numerous past connectivity studies which focused on a single, generalist species. For example, the MBC was originally called 'Paseo Pantera', (Path of the Panther in English), because it was designed for jaguar [61]. Nowadays, jaguar is still often the main focal species in habitat protection and connectivity studies (e.g., [62]), given their large area requirements, high mobility, and funding potential as charismatic species. Nevertheless, our study highlights that the effectiveness of carnivores as connectivity umbrellas in tropical forests may fail to conserve community connectivity for threatened species such as the Baird's tapir and white-lipped peccary, similarly to what previous studies found in other ecosystems [21, 22, 63]. Our results support the conclusion that highly sensitive species should be prioritized as the most important focal species for design of multi-species corridors, because less sensitive species which are often habitat generalists can more easily move through landscapes conserved for habitat specialists, whereas the opposite may not be true [4, 7, 22].
Effect of data source
Second, our prediction that both data types would produce qualitatively similar resistance surfaces with many of the models having the same variables influencing the resistance was for the most part supported. Most models included forest cover, forest loss and elevation. Despite qualitative similar models, the choice of data type had an influence on the resulting predictions of connectivity because the sign of the relationship, and/or the strength of selection or avoidance to these variables was different. This outcome was especially striking with the corridors modeled for sensitive species, as none of the analytical approach resulted in the same corridors in western Panama. In contrast, several corridors for tolerant species that were modeled with occupancy data and step selection functions overlapped. These findings suggest that non-invasive sampling with camera traps can provide useful data for estimating functional connectivity at landscape scale, and be as informative as movement data from GPS collars to detect corridor paths for generalist species. This may especially be true when camera-trapping sampling design are spatially widespread and cover habitats with a gradient of disturbance like was our case. We did not test our corridors against dispersal data, but studies showed that models based on point data, e.g., resource selection function, are able to predict species habitat use during dispersal for leopard Panthera pardus, a wide-ranging carnivore [14] and for kinkajou Potos flavus, an arboreal mammal species [13]. Nevertheless, other studies found that resistance estimates from empirical movement data (e.g., SSF) were more similar to resistance estimates from dispersal movements, compared to resistance estimates from point data [8, 11].
A notable outcome from our analysis using the GPS telemetry data is the differences in habitat preference displayed by most individuals when traveling compared to when behavioral state was not considered. This was especially the case for sensitive species for which, and in contrast to our expectations, the SSF models revealed a smaller tolerance of animals to human-modified landscapes when traveling. In particular, when traveling, the strength of selection for forest cover was higher, while it was lower for forest loss. An opposite pattern, i.e., greater tolerance to human disturbance when traveling, was reported for carnivore species in other ecosystems (e.g., African wild dog Lycaon pictus [64]; lion Panthera leo [10]).
Limitations and suggestions
A limitation of our study relies in the relatively restricted number of species and individuals used to parametrize the movement models in spite of considerable effort to collect data over a 2-year period. A further limitation is the lack of observed dispersal paths to validate our models. These limitations highlight the challenges associated with capturing animals and collecting long-distance movement data to evaluate functional landscape connectivity. Testing our connectivity scenarios against genetic data would provide valuable insights on landscape permeability and accuracy of the corridors, because gene flow reflects both successful movement and reproduction [8, 12, 17]. Landscape genetics is also particularly useful for large-scale assessment [65] such as was our study, but genetic data are not yet available in our study area. This said, a shortcoming when using this approach is that connectivity estimates derived from genetic usually reflect past landscape permeability and may not capture current movement and gene flow in a rapidly evolving environment such as Panama [11].
Implications for long-term conservation of mammals in Panama
Panama is a biodiversity hotspot and has long served as a vital habitat corridor between Mesoamerica and South America for broad-ranging neotropical forest species [30]. However, this important linkage between continents is increasingly put in jeopardy by deforestation, human disruption and urban development which impede movement and most likely gene flow of several species [28, 66]. Thus, it is critical to identify areas that can facilitate the movement of multiple species within the Isthmus. While our findings show that an accurate understanding of how animals move through their environment is important for the success of corridor design, it is sociopolitical and economic considerations that will allow the protection of these corridors. For example, one of the corridor that was identified with occupancy data and SSF-Travel for the sensitive group is not realistic given that it borders a large city (Colón), where poaching pressure is very high (pers. obs.). The likelihood that tapirs and white-lipped peccaries use this path and survive is very low. Another corridor that was identified for tolerant species and which effectiveness may be uncertain, is along the Atlantic coast in western Panama. Current construction of a road stretching from the northern end of the Panama Canal all the way to the west near Costa Rica, and which is associated with real estate development, willmost likely hamper the success of the corridor.
Moreover, our modeling exercises sought to connect suitable patches, thereby implying that all the core areas were equally good in harboring healthy populations of the focal species. However, several development projects such as mines, dams, and more roads threaten the biodiversity in these core areas, especially in Santa Fé NP and Donoso. We therefore stress the importance of assessing the impacts of such projects on wildlife connectivity and take adequate measures to mitigate them. It is important to keep in mind that the lack or very small population of some sensitive species in several protected forests, i.e., Damani Guariviara or San Lorenzo NP, does not make these areas unimportant for the long-term conservation of species. They serve as stepping stones between core areas that harbor functional populations as evidenced by least-cost corridors that traverse them.
Finally, poaching remains a significant threat for wildlife in our study region (pers. obs.), especially for dispersing individuals which are key in maintaining gene flow between core populations [12]. Successfully translating this connectivity research into habitat conservation and/or restoration actions will require partnering with the competent authority for land management and planning, but also engaging other partners such as private landowners, corporates, and local indigenous communities to promote active protection of the forests and its biodiversity in general [67].
Our study provides a framework to model wildlife corridors by combining different types of empirical data for multiple species simultaneously. It represents the first effort to estimate functional connectivity and identify optimal corridor locations to facilitate the movement of a suite of mammal species across an entire country in Latin America. Our findings highlight that the focal species, the data source, the analytical approach, and sometimes the transformation curve all influence the resulting connectivity scenarios. Therefore, and given the wide variety of methods employed in connectivity studies, efforts to test corridors designed are crucial (e.g., [68, 69]). Although we were not yet able to test the performance of the corridors modeled, all our multi-species connectivity scenarios show that it is critical to focus on the protection of forest at the landscape level in order to support the long-term movement of large mammals across the Isthmus of Panama. Finally, camera trapping data analyzed in an occupancy framework seems promising for estimating functional connectivity for generalist species, providing a cheaper and logistically less challenging method to telemetry.
Movement data were uploaded on Movebank (www.movebank.org).
An amendment to this paper has been published and can be accessed via the original article.
CBMAP:
Corredor Biológico del Atlántico Panameño
MBC:
Mesoamerican Biological Corridor
NP:
SSF:
Step selection functions
Damschen EI, Brudvig LA, Burt MA Jr, Fletcher RJ, Haddad NM, Levey DJ, et al. Ongoing accumulation of plant diversity through habitat connectivity in an 18-year experiment. Science. 2019;365:1478–80.
Beier P, Noss RF. Do habitat corridors provide connectivity? Conserv Biol. 1998;12(6):1241–52.
Chetkiewicz C-LB, St. Clair CC, Boyce MS. Corridors for conservation: integrating pattern and process. Annu Rev Ecol Evol Syst. 2006;37(1):317–42. https://doi.org/10.1146/annurev.ecolsys.37.091305.110050.
Beier P, Majka DR, Spencer WD. Forks in the road: choices in procedures for designing wildland linkages. Conserv Biol. 2008;22(4):836–51.
Baguette M, Van Dyck H. Landscape connectivity and animal behavior: functional grain as a key determinant for dispersal. Landsc Ecol. 2007;22(8):1117–29.
Cushman SA, McRae B, Adriaensen F, Beier P, Shirley M, Zeller K. Biological corridors and connectivity. In: Key topics in conservation biology 2. Oxford: Wiley; 2013. p. 384–404. https://doi.org/10.1002/9781118520178.ch21.
Keeley ATH, Beier P, Gagnon JW. Estimating landscape resistance from habitat suitability: effects of data source and nonlinearities. Landsc Ecol. 2016;31(9):2151–62 Springer Netherlands.
Zeller KA, Jennings MK, Vickers TW, Ernest HB, Cushman SA, Boyce WM. Are all data types and connectivity models created equal? Validating common connectivity approaches with dispersal data. Divers Distrib. 2018. https://doi.org/10.1111/ddi.12742.
Zeller KA, McGarigal K, Whiteley AR. Estimating landscape resistance to movement: a review. Landsc Ecol. 2012;27(6):777–97.
Elliot NB, Cushman SA, Macdonald DW, Loveridge AJ. The devil is in the dispersers: predictions of landscape connectivity change with demography. J Appl Ecol. 2014;51(5):1169–78.
Abrahms B, Sawyer SC, Jordan NR, McNutt JW, Wilson AM, Brashares JS. Does wildlife resource selection accurately inform corridor conservation? J Appl Ecol. 2016;54(2):412–22.
Robertson EP, Fletcher RJ, Cattau CE, Udell BJ, Reichert BE, Austin JD, et al. Isolating the roles of movement and reproduction on effective connectivity alters conservation priorities for an endangered bird. Proc Natl Acad Sci. 2018;115(34):8591–6. https://doi.org/10.1073/pnas.1800183115.
Keeley ATH, Beier P, Keeley BW, Fagan ME. Habitat suitability is a poor proxy for landscape connectivity during dispersal and mating movements. Landsc Urban Plan. 2017;161:90–102. Elsevier B.V. https://doi.org/10.1016/j.landurbplan.2017.01.007.
Fattebert J, Robinson HS, Balme G, Slotow R, Hunter L. Structural habitat predicts functional dispersal habitat of a large carnivore: how leopards change spots. Ecol Appl. 2015;25(7):1911–21.
MacKenzie DI, Nichols JD, Lachman GB, Droege S, Royle AA, Langtimm CA. Estimating site occupancy rates when detection probabilities are less than one. Ecology. 2002;83(8):2248–55.
Wang F, McShea WJ, Li S, Wang D. Does one size fit all? A multispecies approach to regional landscape corridor planning. Divers Distrib. 2018;24(3):415–25.
Mateo-Sánchez MC, Balkenhol N, Cushman S, Pérez T, Domínguez A, Saura S. A comparative framework to infer landscape effects on population genetic structure: are habitat suitability models effective in explaining gene flow? Landsc Ecol. 2015;30(8):1405–20.
Hofman MPG, Hayward MW, Heim M, Marchand P, Rolandsen CM, Balkenhol N. Right on track ? Performance of satellite telemetry in terrestrial wildlife research. Plos ONE. 2019;14(5):1–26.
McClure ML, Hansen AJ, Inman RM. Connecting models to movements: testing connectivity model predictions against empirical migration and dispersal data. Landsc Ecol. 2016;31(7):1419–32 Springer Netherlands.
Meurant M, Gonzalez A, Doxa A, Albert CH. Selecting surrogate species for connectivity conservation. Biol Conserv. 2018;227:326–34. Elsevier. https://doi.org/10.1016/j.biocon.2018.09.028.
Cushman SA, Landguth EL. Multi-taxa population connectivity in the Northern Rocky Mountains. Ecol Modell. 2012;231:101–12. Elsevier B.V. https://doi.org/10.1016/j.ecolmodel.2012.02.011.
Beier P, Majka DR, Newell SL. Uncertainty analysis of least-cost wildlife modeling for designing linkages. Ecol Appl. 2009;19(8):2067–77.
Brodie JF, Giordano AJ, Dickson B, Hebblewhite M, Bernard H, Mohd-Azlan J, et al. Evaluating multispecies landscape connectivity in a threatened tropical mammal community. Conserv Biol. 2015;29(1):122–32.
Liu C, Newell G, White M, Bennett AF. Identifying wildlife corridors for the restoration of regional habitat connectivity : a multispecies approach and comparison of resistance surfaces. PLoS One. 2018;13:1–14.
Grandia L. Between bolivar and bureaucracy: the Mesoamerican biological corridor. Conserv Soc. 2007;5(4):478–503.
Wultsch C, Caragiulo A, Dias-freedman I, Quigley H, Rabinowitz S, Amato G. Genetic diversity and population structure of Mesoamerican jaguars (Panthera onca): implications for conservation and Management. PloS one. 2016;11:1–25.
Schank CJ, Cove MV, Kelly MJ, Mendoza E, O'Farrill G, Reyna-Hurtado R, et al. Using a novel model approach to assess the distribution and conservation status of the endangered Baird's tapir. Divers Distrib. 2017;23(12):1459–71.
Norton JE, Ashley MV. Genetic variability and population structure among wild Baird's tapirs; 2004. p. 211–20.
Meyer NFV, Moreno R, Sutherland C, la Torre JA, Esser HJ, Jordan CA, et al. Effectiveness of Panama as an intercontinental land bridge for large mammals. Conserv Biol. 2019;0(0):1–13.
Leigh EG, O'Dea A, Vermeij GJ. Historical biogeography of the isthmus of Panama. Biol Rev. 2013;89(1):148–72.
Ripple WJ, Newsome TM, Wolf C, Dirzo R, Everatt KT, Galetti M, et al. Collapse of the world ' s largest herbivores Collapse of the world ' s largest herbivores. 2015.
Mateo Sánchez MC, Cushman SA, Saura S. Scale dependence in habitat selection: the case of the endangered brown bear (Ursus arctos) in the Cantabrian range (NW Spain). Int J Geogr Inf Sci. 2014;28(8):1531–46.
Zeller KA, McGarigal K, Beier P, Cushman SA, Vickers TW, Boyce WM. Sensitivity of landscape resistance estimates based on point selection functions to scale and behavioral state: pumas as a case study. Landsc Ecol. 2014;29(3):541–57.
Condit R, Robinson WD, Ibáñez R, Aguilar S, Sanjur A, Martinez R, et al. The status of the Panama Canal watershed and its biodiversity at the beginning of the 21st century. Bioscience. 2001;51(5):389–98.
FAO (UN Food and Agriculture Organisation). Global forest resources assessment. Rome: FAO; 2010. ISBN 978-92-5-106654-6.
Meyer N, Moreno R, Sanches E, Ortega J, Brown E, Jansen PA. An inventory of the ungulates assemblage in the protected areas of Panama. Therya. 2016;7(1):65–76 Available from: http://132.248.10.25/therya/index.php/THERYA/article/view/341.
Meyer N, Moreno R, Jansen P. Distribution and conservation status of Baird's tapir in Panama. Newsl IUCN/SSC. 2013;22(30):2011–4 Available from: http://www.stri.si.edu/sites/publications/PDFs/2013_Meyer_et_al_TapirConservation.pdf.
Moreno R, Meyer N. Distribution and conservation status of the white-lipped peccary (Tayassu pecari) in Panama. Suiform Sound. 2014;13(1):32–7.
Moreno R, Meyer NFV, Olmos M, Hoogesteijn R, Hoogesteijn AL. Cat News Causes of jaguar killing in Panama – a long term survey using interviews. CAT News. 2015;62:40–2.
Reyna-Hurtado R, Tanner GW. Ungulate relative abundance in hunted and non-hunted sites in Calakmul Forest (southern Mexico). Biodivers Conserv. 2007;16(3):743–56.
Meyer N, Moreno R, Martínez-Ruiz MA, Reyna-Hurtado R. Spatial ecology of a large and endangered tropical mammal: the White-lipped peccary in Darién, Panama. In: Reyna-Hurtado R, Chapman C (editors). Movement Ecology of Neotropical Forest Mammals – Focus on Social Animals. Springer International Publishing; 2019. https://doi.org/10.1007/978-3-030-03463-4_6. ISBN: 978-3-030-03463-4.
Fleming CH, Calabrese JM, Mueller T, Olson KA, Leimgruber P, Fagan WF. From fine-scale foraging to home ranges: a Semivariance approach to identifying movement modes across spatiotemporal scales. Am Nat. 2014;183(5):E154–67. https://doi.org/10.1086/675504.
Hansen MCC, Potapov PV, Moore R, Hancher M, Turubanova SA, Tyukavina A, et al. High-resolution global maps of 21st-century Forest cover change. Science. 2013;342:850–4 http://www.ncbi.nlm.nih.gov/pubmed/24233722.
Schielzeth H. Simple means to improve the interpretability of regression coefficients. Methods Ecol Evol. 2010;1:103–13.
Dorazio RM, Royle JA. Estimating size and composition of biological communities by modeling the occurrence of species. J Am Stat Assoc. 2005;100(470):389–98.
Signer J, Fieberg J, Avgar T. Animal movement tools ( amt ): R package for managing tracking data and conducting habitat selection analyses; 2019. p. 880–90.
Roever CL, Beyer HL, Chase MJ, Van Aarde RJ. The pitfalls of ignoring behaviour when quantifying habitat selection. Divers Distrib. 2013;20(3):322–33.
Terwilliger VJ. Natural history of Baird ' s tapir on Barro Colorado Island. Panama Canal Zone Biotropica. 1978;10(3):211–20.
Jordan CA, Hoover B, Dans AJ, Schank C, Miller JA. The impact of hurricane Otto on Baird's tapir movement in Nicaragua's Indio Maíz biological reserve. In: Reyna-Hurtado RA, Chapman CA, editors. Movement Ecology of Neotropical Forest Mammals – Focus on Social Animals: Springer; 2019.
Muff S, Signer J, Fieberg J. Accounting for individual-specific variation in habitat-selection studies: efficient estimation of mixed-effects models using Bayesian or frequentist computation. J Anim Ecol. 2019;1:411801.
Fieberg J, Matthiopoulos J, Hebblewhite M, Boyce MS, Frair JL. Correlation and studies of habitat selection : problem , red herring or opportunity? Proc R Soc B. 2010;365:2233–44.
Osipova L, Okello MM, Njumbi SJ, Ngene S, Western D, Hayward MW, et al. Using step-selection functions to model landscape connectivity for African elephants: accounting for variability across individuals and seasons. Anim Conserv. 2018;22:35–48.
Burnham KP, Anderson DR. Model selection and multimodal inference: a practical information-theoretic approach. New York: Springer; 2002.
Trainor AM, Walters JR, Morris WF, Sexton J, Moody A. Empirical estimation of dispersal resistance surfaces: a case study with red-cockaded woodpeckers. Landsc Ecol. 2013;28(4):755–67.
Mateo-Sánchez MC, Balkenhol N, Cushman S, Pérez T, Domínguez A, Saura S. Estimating effective landscape distances and movement corridors: Comparison of habitat and genetic data. Ecosphere. 2015;6(4):1–6.
Hofman MPG, Hayward MW, Kelly MJ, Balkenhol N. Landscape and Urban Planning Enhancing conservation network design with graph-theory and a measure of protected area e ff ectiveness : Re fi ning wildlife corridors in Belize , Central America. Landsc Urban Plan. 2018;178:51–9. Elsevier. https://doi.org/10.1016/j.landurbplan.2018.05.013.
Adriensen F, Chardon JP, De Blust G, Swinnen E, Villalba S, Gulinck H, et al. The application of least-cost modelling as a functional landscape model. Landsc Urban Plan. 2003;64(4):233–47.
Mcrae BH, Dickson BG, Keitt TH, Shah VB. Using circuit theory to model connectivity in ecology, evolution, and conservation. Ecology. 2008;89(10):2712–24.
McRae BH, Kavanagh DM. Linkage Mapper Connectivity Analysis Software. 2011;Nat Conserv (Seattle WA). Available from: http://www.circuitscape.org/linkagemapper.
WHCWG. Normalized least-cost corridors, statewide analysis. Washingt Wildl Habitat Connect Work Gr. 2010.
Holland B, Hilty JA, Chester CC, Cross MS. Climate and conservation: landscape and seascape science, planning, and action; 2012. p. 1–373.
Rabinowitz A, Zeller KA. A range-wide model of landscape connectivity and conservation for the jaguar, Panthera onca, Biol Conserv. 2010;143(4):939–45. Elsevier Ltd. https://doi.org/10.1016/j.biocon.2010.01.002.
Abrahms B, Jordan NR, Golabek KA, McNutt JW, Wilson AM, Brashares JS. Lessons from integrating behaviour and resource selection: activity-specific responses of African wild dogs to roads. Anim Conserv. 2016;19(3):247–55.
Wultsch C, Waits LP, Kelly MJ. A Comparative Analysis of Genetic Diversity and Structure in Jaguars ( Panthera onca ), Pumas ( Puma concolor ), and Ocelots ( Leopardus pardalis ) in Fragmented Landscapes of a Critical Mesoamerican Linkage Zone; 2016. p. 1–30.
Eizirik E, Kim J-H, Menotti-Raymond M, Crawshaw PG, O'Brien SJ, Johnson WE. Phylogeography , population history and conservation genetics of jaguars ( Panthera onca , Mammalia , Felidae ). Mol Ecol. 2001;10:65–79.
Keeley ATH, Basson G, Cameron DR, Heller NE, Huber PR, Schloss CA, et al. Making habitat connectivity a reality. Conserv Biol. 2018;0(0):1–12.
Osipova L, Okello MM, Njumbi SJ, Ngene S, Western D, Hayward MW, et al. Validating movement corridors for African elephants predicted from resistance-based landscape connectivity models. Landsc Ecol. 2019;34(4):865–78. Springer Netherlands. https://doi.org/10.1007/s10980-019-00811-0.
Sharma S, Dutta T, Maldonado E, Wood C, Panwar HS, Seidensticker J. Forest corridors maintain historical gene flow in a tiger metapopulation in the highlands of Central India. Proc R Soc B. 2013;280(20131506):1–9.
We are grateful to A. Chami, T. Contreras, U. Contreras, C. Contreras, M. Manyoma, L. Pretelt, A. Angulo, A. Artavia, J. Ortega, J. Padilla, M. Parks and E. Sanches for assistance during captures; E. Sempris, A. Puertes and N. Young for logistics; MiAmbiente for permits and housing; C. Jordan and W. Martinez for training in trapping. NM thanks R. Kays and J. Fieberg from the 2018 Movebank Course at the North Carolina Museum of Natural Sciences, and T. Dutta, L. Osipova, L. Richter and J. Gallo for valuable insights. We thank two anonymous reviewers for valuable comments to improve a previous version of this manuscript.
This work was part of Ninon Meyer's PhD thesis for which she received a scholarship from the National Council of Science and Technology of Mexico (CONACYT, scholar # 576309), and a short-term research grant from the German Academic Exchange Service (DAAD) to conduct part of the analysis at the University of Göttingen, Germany. NM and RM received funding and equipment from the National Secretary of Science, Technology and Innovation of Panama (SENACYT, Proyecto FID 14–145), Gemas/Fondo Darién, Fundación Natura, the Ministry of Environment of Panama, IdeaWild, and the Asociación Panamericana para la Conservación. NM also received two grants from The Rufford Foundation. We acknowledge support by the German Research Foundation and the Open Access Publication Funds of the Göttingen University.
Departamento de Conservación de la Biodiversidad, El Colegio de la Frontera Sur, Lerma, Campeche, Mexico
Ninon F. V. Meyer & Rafael Reyna-Hurtado
Wildlife Sciences, Faculty of Forest Sciences, University of Göttingen, Göttingen, Germany
Ninon F. V. Meyer, Johannes Signer & Niko Balkenhol
Fundación Yaguará Panamá, Ciudad del Saber, Panama
Ninon F. V. Meyer & Ricardo Moreno
Smithsonian Tropical Research Institute, Balboa, Ancón, Panama
Ricardo Moreno
Ninon F. V. Meyer
Rafael Reyna-Hurtado
Johannes Signer
Niko Balkenhol
NM, RM, RRH and NB conceived the study. NM and RM collected the field data. NM processed the data, ran the analysis, produced the corridors, and drafted the first version of the manuscript. JS contributed to the step selection functions analysis. All authors contributed to the writing, read and approved the final manuscript.
Correspondence to Ninon F. V. Meyer.
All procedures followed standard protocols approved by the Ministry of Environment of Panama (permit No. SE/A-104-15), and the Research Ethics Committee of El Colegio de la Frontera Sur, Mexico (CEI-O-068/16).
The original version of this article was revised: an error in the second equation in the 'Estimating the resistance' section has been corrected.
Methods - Locations of camera traps used to conduct occupancy modelling across Panama.
Methods - Environmental variables and selection of characteristic scale.
Methods - Transformation curves used to translate habitat suitability values into landscape resistance values.
Methods- Probability of occupancy and proportion of the study area.
Methods - Core areas of two groups of species.
Results - GPS relocations and steps for each individual.
Results - Determination of the optimal scale for forest cover, forest loss, and density of villages.
Results - Species-specific coefficients for environmental variables estimated with occupancy modeling.
Results - AICc of SSF models for each individual.
Additional file 10.
Results - Estimates of the best supported SSF model for each individual.
Results - Maps with multi-species connectivity scenarios.
Meyer, N.F.V., Moreno, R., Reyna-Hurtado, R. et al. Towards the restoration of the Mesoamerican Biological Corridor for large mammals in Panama: comparing multi-species occupancy to movement models. Mov Ecol 8, 3 (2020). https://doi.org/10.1186/s40462-019-0186-0
Habitat suitability
Least-cost path
Movement behavior
White-lipped peccary
Integrating Movement Ecology with Biodiversity Research | CommonCrawl |
SN Applied Sciences
May 2019 , 1:491 | Cite as
Monitoring and modelling water quality of Loktak Lake catchment
Eliza Khwairakpam
Rakesh Khosa
Ashvani Gosain
Arvind Nema
Part of the following topical collections:
2. Earth and Environmental Sciences (general)
Loktak Lake is an internationally important, Ramsar designated, fresh water wetland system in the state of Manipur, India. The lake has also been listed under Montreux Record on account of the ecological modifications that the lake system has witnessed over time. Discharges from nine rivers namely Khuga, Thongjaorok, Awang Khujairok, Nambol, Nambul, Imphal, Kongba, Iril, and Thoubal have a great impact on the habitats and the overall ecological status of the lake. Monitoring of water quality at the catchment scale can be considered as an essential step towards the eventual goal to design effective conservation and management practices for the entire Loktak Lake ecosystem. This article presents the status of nine rivers draining into the Loktak Lake and correlation with land use patterns which can be used as support for making sound decisions regarding the management of the lake ecosystems. Flows were modelled using a combination of soil and water assessment tool (SWAT) and MIKE SHE, abbreviated as hybrid SHE-SWAT. Water quality models were established using MIKE 11 ECO Lab. Water quality parameters such as biological oxygen demand, dissolved oxygen and water temperature were simulated. Water quality models were calibrated using available measured water quality data procured from State Pollution Control Board and validated using observed water quality collected during the field study.
Loktak Lake MIKE 11 ECO Lab Water quality Land use
Watershed management to improve the habitat of freshwater aquatic life forms is increasingly recognized as a necessary step towards sustainability [1]. Water quality can be assessed by various parameters such as biochemical oxygen demand (BOD), dissolved oxygen (DO), temperature, etc. [2]. DO is a significant parameter for determining water quality and a key point affecting aquatic habitats [3, 4, 5, 6]. High levels of BOD can lead to severe oxygen depletion and affect aquatic habitat [7, 8]. Water temperature also plays an important role in reproduction and metabolic rates of aquatic life forms [9, 10].
Water quality study can be done using field sampling followed by laboratory analysis. However, this method has some disadvantages such as time taking, expensive and mostly confined to few point locations, thus limiting to planning and management. Water quality information in watershed level or a river system can be of paramount use to environmental policies maker. It can help them in targeting area which needs to be emphasized, saving time and resources. It can also help them in stratifying the river into various sections with different water quality which can be treated with different management practices. With advancing computational ability and increasing understanding of the hydrological system, many computer models have been evolved and still evolving for modelling catchment water quality. These models vary from simplified conceptual to empirical and to physically based models [11]. Data availability and objective of the study greatly influence the type of model to be employed for the study of water quality [12]. The semi-distributed soil and water analysis tool (SWAT) [13] has been used extensively for hydrologic modelling and water resources management. Arnold et al. [14] simulated daily water balance in upper Mississipi River basin using this model. Narsimlu et al. [15] simulated stream flow for upper Sind River basin, India using this model. Saha et al. [16] also modelled stream flow for Yass River catchment in south-eastern Australia using this model [17]. Another widely used model for studying hydrological processes is the fully distributed Europeen Hydrological System (MIKE SHE) [18]. The literature has reported various applications of MIKE SHE, such as investigating the hydrological responses to land-use/land-cover changes and climate, irrigation planning, forest fire impact assessment and forestry management, sustainable groundwater management, and hydrological manipulations of grass wetland [19, 20, 21]. ECO Lab module of MIKE 11 hydrodynamic model is used for studying water quality [12, 22]. Butts et al. [23] simulated flows and temperatures in the Lower Wood River Valley, Idaho, US using MIKE ECO Lab. Loinaz et al. [1] employed MIKE ECO Lab for modelling streams temperature in Wood river valley and Silver Creek basin. Results from models can be used to support decision and policy making, e.g., Popescu et al. [24] and Forio et al. [25].
Loktak Lake is a RAMSAR designated, fresh water wetland system situated in the state of Manipur, India. The characteristic feature of this lake is the presence of floating islands covered with vegetation, locally known as "Phumdis" [26, 27]. A contiguous 40 km2 area of Phumdis, on the southern part of the lake is protected as Keibul Lamjao National Park (KLNP) for the conservation of a small and isolated population of Manipur's brow-antlered deer popularly known as Sangai (Rucervus eldii). KNLP is the only floating wildlife sanctuary in the world and the only natural home to the endemic and endangered Sangai deer [28, 29, 30, 31]. The lake is considered to be the lifeline of Manipur due to its importance in the socio-economic and cultural life of the people [32]. The lake sustains rich biological diversity with 428 species of animals and 132 plants species [33, 34]. This lake supports hydro-power generation, provide fisheries as the livelihood to about 8700 fishermen, provides water for irrigation to about 32,400 ha agricultural area [35, 36]. The annual benefits from Loktak Lake are about Rs 600 million, amounting to nearly 2% of the state's gross domestic product [37]. Loktak Lake catchment covers about 22% of the total Manipur state area. Manipur is a rural area (93%) in which occupation is dominated by agriculture [38, 39].
Despite the importance of the lake, research done on water quality of the lake and its catchment are very limited. Deteriorating water quality in its catchment along with reduced fish species can be considered as emerging concerns in Loktak Lake. Some fish species which have been disappeared from the lake and its catchment include Puntius saranacaudi marginatus, Bagarius bagarius, Rotia berdmorei, Labeo boga, Labeo pangusia, Lepidocephalus berdmorei, Lepidocephalus thermalis, Mystus aor, Mystus. tengara Puntius burmanicus, Puntius hexastichus and Tor tor [40, 41, 42, 43, 44, 45]. Laishram and Dey, [46] analyzed water quality of Loktak Lake and found that biological oxygen demand (BOD) was higher than the World Health Organization (WHO) guideline. Tuboi et al. [47] found that the Loktak Lake to be hypertrophic, leading to decrease in water quality causing adverse impacts on the ecosystem. State Pollution Control Board (SPCB), Government of Manipur monitor water quality of Loktak catchment three times annually.
Loktak Lake has been listed under Montreux Record on account of the ecological modifications that the lake system has witnessed over time [48]. Discharges from nine rivers namely Khuga, Thongjaorok, Awang Khujairok, Nambol, Nambul, Imphal, Kongba, Iril and Thoubal have a great impact on the habitats and the overall ecological status of the lake. Figure 1 shows the location of nine rivers draining into the Loktak Lake. Monitoring of water quality at the catchment scale can be considered as an essential step towards the eventual goal to design effective conservation and management practices for the entire Loktak Lake ecosystem.
Loktak Lake, its rivers, and sub-catchments in Northeast India
Acknowledging the importance of the Loktak Lake in terms of socio-economic and biodiversity conservation, this paper presents the study of the status of nine rivers draining into the Loktak Lake and correlation with land use patterns which can be used as support for making sound decisions regarding the management of the lake ecosystem. Water quality models were established for the nine rivers using MIKE 11 ECO Lab. Water quality parameters such as dissolved oxygen (DO), biological oxygen demand (BOD) and water temperature were simulated. Water quality models were calibrated using available measured water quality data procured from SPCB and validated using observed water quality collected during a field study. Further, the obtained spatial water quality was analysed and correlated to land use pattern to gain an understanding of human induced land cover change effect to the catchment water quality, which in turn affects the Loktak Lake ecosystem.
2 Study area
Loktak Lake, the largest fresh water lake in Northeast region is situated in the state of Manipur, India (Fig. 1). The lake covers an area of about 287 km2 with a catchment area of approximately 5040 km2 [27, 49, 50]. Elevation of the catchment ranges from 744 m above mean sea level (amsl) in the valley to 2559 m amsl in hilly regions. The soil of the catchment area consists of mostly clay and silt [51]. Field study indicated that top layer soil ranges from moderate to slightly acidic (pH 4.5 to 5.2). The mean annual temperature is about 24 °C during summer (May–July) and 14 °C during winter (November–January) [31]. The catchment area experiences relative humidity ranging from 51 to 81%, and wind speed ranging between 2 and 5 km/h in average annually [35]. On an average, the catchment receives an annual rainfall of about 1392 mm, within 150 rainy days in a year and pan-evaporation ranges between 19 and 130 mm [35, 52].
Hydrologically, Loktak Lake is dependent on nine major rivers namely Khuga, Thongjaorok, Awang Khujairok, Nambol, Nambul, Imphal, Kongba, Iril, and Thoubal. The lake catchment can be divided into nine sub-catchments namely, Khuga, Western, Nambul, Imphal, Kongba, Iril, Thoubal, Heirok and Sekmai. However, Heirok and Sekmai Rivers no longer contributes due to diversion scheme [33]. Thongjaorok, Awang Khujairok and Nambol Rivers lie in Western sub-catchments as shown in Fig. 1.
The land use of Loktak catchment can be broadly divided into agricultural areas, forest, settlement, water, and phumdis as shown in Table 1. Digital land use map of the year 2003 was procured from Forest Department, Government of Manipur. Agricultural areas consist of about 1406 km2 (27%) of the total Loktak catchment including Heirok and Sekmai sub-catchments. The dominant crop grown in the catchment area is rice. The paddy cultivation practiced in the valley area of the catchment accounts for about 65% of the overall production of the entire state [28]. The forest area can further be sub-divided into dense forest, degraded forest, and jhum. Jhum are initially forested areas which are burned down and cleared by the local people to be able to use for agricultural purposes [28]. The practice of jhum is also known as shifting cultivation. Forest area constitute the largest amount covering about 59% of the total catchment area. Of these forested areas, dense forest constitutes about 27%, degraded forest (16%) and jhum (16%). Tropical semi-evergreen, subtropical pine and montane wet temperate forests are major types of forest found in the catchment area [53, 54]. The practice of jhum can be considered as a major factor for degradation of the dense forest. Settlement constitutes about 222 km2 (4%) of the total catchment area including Heirok and Sekmai sub-catchments. The population of Manipur state is concentrated mainly in the central valley area. Water bodies constitute about 5% of the total catchment area, while phumdis constitute about 3%. A matured phumdis is about 1 to 2 m thick, solid and strong enough which can support the weight of thatched houses built on it [37]. The number of recorded phum huts built by fishermen on phumdis for shelter is 733 [35]. These people who are living on phumdis practice dumping of domestic waste directly on the lake.
Landuse of Loktak Lake sub-catchments
Land use (km2)
Sub-catchments
Nambul
Kongba
Khuga
Iril
Dense forest
Degraded forest
Jhum
Phumdis
3 Brief description of models
Hybrid SHE-SWAT employed for developing a hydrological model for each sub-catchments can be found from the literature [55]. Water quality model of the nine rivers was developed by using MIKE 11 ECO Lab module. ECO Lab is a flexible numerical laboratory for ecological modelling [56]. It is a tool which can be used to customize an ecosystem model to simulate water quality, ecological conditions and so on. This is an ecological modelling module which can simulate DO, BOD, temperature and other parameters. This module has several pre-developed ECO Lab templates that are suitable for various conditions. The ECO Lab model describes the biological, chemical, ecological processes and the interactions between the state variables in addition to the physical process of sedimentation of the components. The State variables in ECO Lab can be transported based on the hydrodynamics through the advection–dispersion process. The differential equation describing the oxygen concentration is given in Eq. (1).
$$\frac{{d\left( {DO} \right)}}{dt} = K_{2} \left( {DO_{sat} - DO} \right) - R + P - B - K_{3} *C\left( {BOD} \right) - Y*K_{4} *C\left( {NH_{4} } \right)^{nl}$$
where K2 is the reaeration constant; DO is dissolved oxygen; DOsat is the oxygen saturation constant; K3 is the degradation constant; K4 is the nitrification rate; Y is the oxygen consumption per nitrification unit; nl is the reaction order of nitrification; R is respiration; P is photosynthesis; B is sediment oxygen demand.
The BOD concentration is described by the following differential Eq. (2).
$$\frac{{d\left( {C\left( {BOD} \right)} \right)}}{dt} = K_{3} C\left( {BOD} \right) + \, resuspension \, - \, sedimentation$$
Resuspension occurs when the flow velocity exceeds a critical value. Resuspension is assumed to be constant in time, and at flow velocities smaller than the critical value, sedimentation will occur.
4 Model development
4.1 Water quality model
Hydrological models for each Loktak Lake sub-catchments, namely Khuga, Awang Khujairok, Nambol, Nambul, Imphal, Kongba, Iril, and Thoubal were developed using hybrid SHE-SWAT. Based on differences in hydrological response, each sub-catchments were delineated into hilly and valley regions except Kongba sub-catchment. Kongba sub-catchment, located in central part of the catchment, consist of only valley region. Hilly regions were modelled using SWAT while valley regions were modeled using MIKE SHE. Further details of hybrid SHE-SWAT models for Loktak Lake sub-catchments can be found from the literature [55]. The flowchart for the flow and water quality models development is shown in Fig. 2. Considering lack of concurrence of river discharges and water quality data, flows from each rivers were simulated for the time period January 2011 to December 2016. Surface water quality of nine major rivers was modelled using ECO Lab module of MIKE 11. The model set up includes river network, river cross sections, boundary condition, hydrodynamic parameters, advection dispersion and ECO Lab parameters. Among the three built-in integration routines of the module, EULER (Euler or Linear solution) was specified during the initial calibration stage considering lesser computing time. Later on, RKQC (Fifth order Runge–Kutta with Quality Control) was used for all nine models. Observed DO, BOD and temperature were obtained from SPCB, Government of Manipur (GoM) for years 2011 to 2016 observed three times annually. In addition, a field survey was also conducted for the two different time periods: 25 January 2016 to 31 January 2016 and 5 August 2016 to 15 August 2016. DO, BOD and temperature were collected for every 1 km in all the rivers. However, out of the nine rivers, upstream of Khuga, Imphal, Iril and Thoubal Rivers were not collected due to inaccessibility as a result of hilly terrain and security problem. Dissolved oxygen probe was used for measuring DO, and five-day test for biochemical oxygen demand test was used for measuring BOD. Water temperature was measured using a thermometer.
Flowchart of the flow and water quality models development
4.2 Model calibration and validation
Calibration and validation of hydrological models of sub-catchments can be found from the literature [55]. DO, BOD and temperature for the nine rivers were simulated from MIKE 11 ECO Lab models at every 1 km for the time period 2011 to 2016. The nine models were calibrated during 2011 to 2015 and validated for dry (25 January 2016 to 31 January 2016) and wet (5 August 2016 to 15 August 2016) seasons. The final calibrated values for dispersion coefficient (D) and reaeration constant (K) for the nine rivers are shown in Table 2. The values for D and K ranges from 3 to 7 m2 s−1 and 0.3 to 0.8 respectively. Each model was validated using observed data collected during the field survey.
Calibrated parameter values of MIKE 11 ECO Lab models
AD* (D, m2 s−1)
WQ** (reaeration, K)
Thongjaorok
Awang Khujairok
Nambol
*Advection dispersion; ** water quality
Figures 3 and 4 show observed versus simulated DO and BOD for nine rivers during two different time periods: 25 January 2016 to 31 January 2016 and 5 August 2016 to 15 August 2016 respectively. The comparison of observed and simulated temperature for nine rivers during two time periods are shown in Figs. 5 and 6. Table 3 also shows the statistical evaluation of the nine rivers during the validation period: dry season (25 January 2016 to 31 January 2016) and wet season (5 August 2016 to 15 August 2016). For the dry season, the Pearson coefficient (R) of DO for the nine models ranges from 0.071 to 0.985 with median 0.845. Higher R represents the higher accuracy of models which may be due to less modelling errors and the ability of the model to capture the processes affecting water quality parameters. In this case, higher R is shown by Khuga, Thongjaorok, Nambol, Nambul, Imphal, Kongba, Iril and Thoubal models. However, Awang Khujairok shows low R which may be due to a different time interval during water quality data collection for calibration and validation period. The models also show less modelling errors with root mean square error (RMSE) ranging from 0.107 to 1.82. mean absolute error (MAE) also varies from 0.082 to 0.954. For wet season also, R varies from to 0.228 to 0.979 with median 0.808. In addition, RMSE varies from 0.137 to 2.575.
Observed versus simulated DO and BOD for nine rivers draining into the Loktak Lake during 25 January 2016 to 31 January 2016
Observed versus simulated DO and BOD for nine rivers draining into the Loktak Lake during 5 August 2016 to 15 August 2016
Observed versus simulated temperature for nine rivers draining into the Loktak Lake 25 January 2016 to 31 January 2016
Observed versus simulated temperature for nine rivers draining into the Loktak Lake during 5 August 2016 to 15 August 2016
Statistical analysis of water quality models for nine rivers during the validation period
Sl. no.
DO (mg/L)
BOD (mg/L)
RMSE root mean square error, MAE mean absolute error, R correlation coefficient
Similarly, evaluation of BOD for both dry and wet seasons are shown in Table 3. For the dry season, R varies from 0.785 to 0.967 with median 0.91. Higher R is shown by Khuga, Nambol, Nambul, Imphal, Iril and Thoubal models indicating higher accuracy of models. However, Thongjaorok, Awang Khujairok and Kongba models show lesser R probably due to a different time interval during data collection for calibration and validation period. RMSE and MAE also vary from 0.142 to 1.833 and 0.12 to 0.926 respectively. During the wet season, the nine models show R ranging from 0.21 to 0.979 with median 0.947. The nine models show low RMSE ranging from 0.104 to 2.689. MAE also varies from 0.10 to 1.39.
Evaluation of water temperature for the validation period is also shown in Table 3. For the dry season, R varies from 0.192 to 0.945 with median 0.839. RMSE and MAE vary from 0.059 to 1.623 and 0.043 to 0.735 respectively. For the wet season, R varies from 0.473 to 0.907 with median 0.786. In terms of errors, RMSE and MAE vary from 0.148 to 3.786 to 0.1 to 1.83 respectively.
Figures 7, 8 and 9 show simulated DO, BOD and temperature for winter, summer and rainy seasons 2016 respectively. For the nine rivers, DO and BOD are inversely proportional in general. During the winter season, downstream of Nambul River shows DO of less than 0.5 mg/L as shown in Fig. 7. DO of upstream of Nambul and Kongba Rivers ranges from 0.5 to 1 mg/L. Conclusively among all nine rivers, Iril, Thoubal and upstream of Khuga show DO of above 4 mg/L. In terms of temperature, Nambul and Kongba Rivers show comparatively higher temperature (21–26 °C) as compared to remaining seven rivers. Nambul, Kongba and downstream of Imphal Rivers show higher BOD greater than 10 mg/L.
Simulated DO, temperature and BOD for winter season 2016
Simulated DO, temperature and BOD for summer season 2016
Simulated DO, temperature and BOD for rainy season 2016
Figure 8 shows simulated DO, temperature and BOD for Khuga, Thongjaorok, Awang Khujairok, Nambol, Nambul, Imphal, Kongba, Iril and Thoubal Rivers during the summer season. This figure shows spatial water quality in Loktak Lake catchment. Khuga, Thongjaorok, Awang Khujairok, Nambol, Nambul, Imphal, Kongba Rivers show DO of less than 4 mg/L. Nambul and Kongba Rivers show high temperature ranging from 31 to 33 °C. In this season, BOD is greater than 10 mg/L in all nine rivers except upstream of Khuga, Iril and Thoubal. As the monsoon period starts from the month of June and continues till September, DO is comparatively higher during the wet season as compared to dry season (Fig. 9). In the same way, BOD is comparatively lower as compared to the dry season. However, DO is still showing less than 4 mg/L for Khuga (downstream), Thongjaorok, Awang Khujairok, Nambol, Nambul, Imphal and Kongba Rivers. In addition, BOD is also higher than 10 mg/L in Nambul, Kongba and Imphal (downstream) Rivers. Water temperature for all nine rivers ranges from 21 to 31 °C.
It is evident from the Figs. 7, 8 and 9 that there are relatively seasonal and spatial changes of surface water quality in Loktak catchment. The two rivers namely Nambul and Kongba Rivers show DO of less than 4 mg/L throughout the year. BOD is also found to be higher than 10 mg/L. High BOD in Nambul and Kongba indicates high pollution loads due to use of agricultural pesticides and fertilizers and dumping of domestic waste. Landuse map of Nambul and Kongba sub-catchments shows high coverage of agricultural area as shown in Table 1. The agricultural area covers about 91 km2, constituting about 47% of the total Nambul sub-catchment. In Kongba sub-catchment also, agricultural area covers about 50 km2 (41%). The dominant crop of agriculture in this area is paddy cultivation. Nitrogen and Phosphorus fertilizers are commonly used in paddy cultivation. This, in turn, leads to an increase of nutrients in water contributing to eutrophication of rivers. In addition, Nambul and Kongba Rivers flow through the residential areas such as Imphal, Sagolband, Singjamei which have a major contribution to pollution loads due to dumping of domestic waste. Settlement areas in Nambul and Kongba sub-catchments cover about 20 km2 (11%) and 25 km2 (21%) respectively. DO of Imphal River is found to be lesser than 4 mg/L, while BOD is found to be higher than 10 mg/L. Landuse of Imphal sub-catchment shows forest area covering about 132 km2 (38%), degraded forest of about 58 km2 (17%), settlement area of about 18 km2 (5%) and Jhum cultivation of 41 km2 (12%). Downstream of Khuga is found to show DO of less than 4 mg/L in contrast to higher occupancy by dense forest (34%). This is probably due to high contribution of jhum cultivation area in Khuga sub-catchment, covering about 215 km2 (42%). Jhum cultivation leads to soil erosion of catchment area with slightly acidic soil (pH 4.5 to 5.2). Nambol, Awang Khujairok and Thongjaorok Rivers located in western sub-catchment are also found to have DO lesser than 4 mg/L. Nambol, Awang Khujairok and Thongjaorok Rivers located in western sub-catchment are also found to have DO lesser than 4 mg/L. Western sub-catchment consist of mostly agriculture (36%) and water bodies (26%). Among all the nine rivers, Thoubal and Iril Rivers are found to have better water quality in terms of DO and BOD. These two rivers show DO of higher than 4 mg/L throughout the year. BOD is also lesser than 10 mg/L throughout the year. Landuse of Thoubal sub-catchment shows high contribution of dense forest (24%) degraded forest (33%) and agriculture (29%). Landuse of Iril sub-catchment consist of mainly dense forest (44%), degraded forest (20%) and agriculture (23%). Further, in order to understand the change of landuse in the Loktak catchment area, landuse map for two different time period (2001 and 2012) was downloaded from Moderate Resolution Imaging Spectroradiometer (MODIS). Landuse maps show that there is increase of cropland area from about 631 km2 (2001) to 637 km2 (2012) while Deciduous Broadleaf forest has decreased from about 13 km2 (2001) to 3 km2 (2012). Thus, there is an influence of land use to the water quality of Loktak Lake catchment. A proper waste management practices is needed in Khuga downstream (30 to 66 km), Thongjaorok, Awang Khujairok, Nambol, Nambul, Imphal and Kongba sub-catchments so that BOD and DO can be brought to an acceptable level where aquatic life can flourish. 
There is a need to improve the water quality of rivers mainly Nambul and Kongba for conserving the Loktak Lake ecosystem.
The study was conducted on Loktak Lake catchment located in Northeast India to assess the water quality of nine rivers draining into the Loktak Lake and to correlate to its land use to support decision and policy making. Flows were simulated using hybrid SHE-SWAT model. Water quality models were developed for nine rivers namely, Khuga, Thongjaorok, Awang Khujairok, Nambol, Nambul, Imphal, Kongba, Iril and Thoubal draining into the Loktak Lake using MIKE 11 ECO Lab during January 2011 to December 2016. The models were calibrated using observed water quality obtained from SPCB and validated using observed water quality data obtained using field survey. The statistical evaluation of the used models indicated less modelling errors and applied successfully. Water quality parameters such as DO, BOD and water temperature were simulated. Spatial distribution of simulated DO, BOD and temperature were analysed and correlated with landuse patterns of the catchment. The analysis found that Nambul and Kongba Rivers showed DO of less than 4 mg/L and BOD of greater than 10 mg/L throughout the year. Low DO and High BOD in Nambul and Kongba Rivers can be due to high pollution loads contributed by use of agricultural pesticides and fertilizers and dumping of domestic waste. In Khuga catchment, there is less settlement (1%) while dense forest covers about 34%. However, downstream of Khuga River shows DO of less than 4 mg/L mostly due to high coverage of jhum cultivation (42%). Jhum cultivation leads to soil erosion of catchment area with slightly acidic soil (pH 4.5 to 5.2). Among all the nine rivers, Thoubal and Iril Rivers are found to have higher DO (higher than 4 mg/L) and lesser BOD (higher than 10 mg/L) throughout the year. This may be due to high coverage by forest in Thoubal and Iril sub-catchments. Thus, it can be concluded that spatial distribution of water quality in Loktak Lake catchment has high influence by land use patterns. There is need of waste management practices in Khuga downstream (30 to 66 km), Thongjaorok, Awang Khujairok, Nambol, Nambul, Imphal and Kongba sub-catchments so that BOD and DO can be brought to an acceptable level. There is a need of proper waste management practices in which water quality of rivers mainly Nambul and Kongba need to be focused. The present analysis can be of great importance in making a sound decision and policy-making for the entire Loktak ecosystem.
We would like to thank State Pollution Control board, Government of Manipur (GoM); State Forest Department, GoM and Loktak Development Authority for sharing observed data and extending support. The research was financed by University Grant Commission, Government of India and India Institute of Technology Delhi.
Compliance with ethical standards
There is no conflict of interest in this study.
Loinaz MC, Davidsen HK, Butts M, Bauer-Gottwein P (2013) Integrated flow and temperature modeling at the catchment scale. J Hydrol 495:238–251CrossRefGoogle Scholar
Bhateria R, Jain D (2016) Water quality assessment of lake water: a review. Sustain Water Resour Manag 2(2):161–173CrossRefGoogle Scholar
Even S, Mouchel JM, Servais P, Flipo N, Poulin M, Blanc S, Chabanel M, Paffoni C (2007) Modelling the impacts of Combined Sewer Overflows on the river Seine water quality. Sci Total Environ 375(1–3):140–151CrossRefGoogle Scholar
Abbaspour KC et al (2012) Modelling hydrology and water quality in the pre-alpine/alpine Thur watershed using SWAT. Environ Monit Assess 40(1):17–24Google Scholar
Ay M, Özgür K (2017) Estimation of dissolved oxygen by using neural networks and neuro fuzzy computing techniques. KSCE J Civ Eng 21(5):1631–1639CrossRefGoogle Scholar
Yin Z, Yu N, Liang B, Zeng J, Xie S (2016) Experimental study of dissolved oxygen transport by regular waves through a perforated breakwater. J Ocean Univ China 15(1):78–82CrossRefGoogle Scholar
Kramer DL (1987) Dissolved oxygen and fish behavior. Environ Biol Fishes 18(2):81–92CrossRefGoogle Scholar
Penn M, Pauer (2003) Environmental and ecological chemistry 2. Encyclopedia of life support systems. United Nations Educational, Scientific and Cultural OrganizationGoogle Scholar
Allan JD, Castillo MM (2007) Stream ecology: structure and function of running waters, 2nd edn. Springer, DordrechtCrossRefGoogle Scholar
Roth TR, Westhoff MC, Huwald H, Huff JA, Rubin JF, Barrenetxea G, Vetterli M, Parriaux A, Selker JS, Parlange MB (2010) Stream temperature response to three riparian vegetation scenarios by use of a distributed temperature validated model. Environ Sci Technol 44(6):2072–2078CrossRefGoogle Scholar
Willems P (2009) A time series tool to support the multi-criteria performance evaluation of rainfall-runoff models. Environ Model Softw 24(3):311–321CrossRefGoogle Scholar
Chibole OK (2013) Modeling River Sosiani's water quality to assess human impact on water resources at the catchment scale. Ecohydrol Hydrobiol 13(4):241–245CrossRefGoogle Scholar
Arnold JG, Srinivasan R, Muttiah RS, Williams JR (1998) Large area hydrologic modeling and assessment part I: model development. JAWRA J Am Water Resour Assoc 34(1):73–89CrossRefGoogle Scholar
Arnold JG, Muttiah RS, Srinivasan R, Allen PM (2000) Regional estimation of base flow and groundwater recharge in the Upper Mississippi river basin. J Hydrol 227(1):21–40CrossRefGoogle Scholar
Narsimlu B, Gosain AK, Chahar BR (2013) Assessment of future climate change impacts on water resources of Upper Sind River Basin, India using SWAT model. Water Resour Manag 27(10):3647–3662CrossRefGoogle Scholar
Saha PP, Zeleke K, Hafeez M (2014) Streamflow modeling in a fluctuant climate using SWAT: Yass River catchment in south eastern Australia. Environ Earth Sci 71(12):5241–5254CrossRefGoogle Scholar
Abbott MB, Bathurst JC, Cunge JA, O'Connell PE, Rasmussen J (1986) An introduction to the European Hydrological System-Systeme Hyrdrologique European, 'SHE', 2: structure of a physically-based, distributed modelling system. J Hydrol 87:61–77CrossRefGoogle Scholar
Demetriou C, Punthakey JF (1998) Evaluating sustainable groundwater management options using the MIKE SHE integrated hydrogeological modelling package. Environ Model Softw 14(2–3):129–140CrossRefGoogle Scholar
Jayatilaka CJ, Storm B, Mudgway LB (1998) Simulation of water flow on irrigation bay scale with MIKE-SHE. J Hydrol 208(1–2/2):108–130CrossRefGoogle Scholar
Rahim BEEA, Yusoff I, Jafri AM, Othman Z, Abdul Ghani A (2012) Application of MIKE SHE modelling system to set up a detailed water balance computation. Water Environ J 26(4):490–503CrossRefGoogle Scholar
Girbaciu A, Girbaciu C, Rosu S (2016) Water quality modeling of Bega River using mike 11. Mater Plast 53(3):533–536Google Scholar
Butts M, Loinaz M, Gottwein PB, Unnasch R, Gross D (2012) Mike She-Ecolab—an integrated catchment-scale eco-hydrological modelling tool. In: XIX international conference on water recourses CMWR 2012 University of Illinois at Urbana-Champaign, June 17–22, 2012Google Scholar
Popescu I, Cioaca E, Pan Q, Jonoski A, Hanganu J (2015) Use of hydrodynamic models for the management of the Danube Delta wetlands: the case study of Sontea-Fortuna ecosystem. Environ Sci Policy 46:48–56CrossRefGoogle Scholar
Forio MAE, Mouton A, Lock K, Boets P, Nguyen THT, Damanik Ambarita MN, Musonge PLS, Dominguez-Granda L, Goethals PLM (2017) Fuzzy modelling to identify key drivers of ecological water quality to support decision and policy making. Environ Sci Policy 68:58–68CrossRefGoogle Scholar
WAPCOS (1988) Identification report on development of Loktak Lake, Manipur. Water and Power Consultancy Services (India) Limited, New DelhiGoogle Scholar
LDA and WISA (2003) Extension proposal. Sustainable development and water resource management of Loktak Lake. Manipur: Loktak Development Authority and New Delhi: Wetlands International—South AsiaGoogle Scholar
Trisal T, Manihar (2004) The atlas of Loktak Lake. Manipur: Loktak Development Authority and New Delhi: Wetlands International—South AsiaGoogle Scholar
Dey SC (2002) Conservation of biodiversity in the Keibul Lamjao National Park. Consultancy Report: Sustainable Development and Water Resource Management of Loktak Lake. New Delhi: Wetlands International—South AsiaGoogle Scholar
Angom D (2005) Ecological studies of vegetation in Keibul Lamjao National Park, Manipur. Thesis (PhD), School of Sciences, Manipur University, Imphal, IndiaGoogle Scholar
Yadav B, Eliza K (2017) A hybrid wavelet-support vector machine model for prediction of Lake water level fluctuations using hydro-meteorological data. Meas J Int Meas Confed 103:294–301CrossRefGoogle Scholar
LDA and WISA (1999) October 1999, vol 1Google Scholar
Singh CR, Thompson JR, Kingston DG, French JR (2011) Modelling water-level options for ecosystem services and assessment of climate change: Loktak Lake, northeast India. Hydrol Sci J 56(8):1518–1542CrossRefGoogle Scholar
WISA (Wetlands International—South Asia) (2005) Conservation and management of Loktak Lake and associated wetlands integrating Manipur River basin: detailed project report. New Delhi: Wetlands International—South AsiaGoogle Scholar
LDA and WISA (2002) Loktak Phumdis management 2. Manipur: Loktak Development Authority. Wetlands International South Asia, New DelhiGoogle Scholar
IFCD (2016) Irrigation Projects report under Irrigation and Flood Control Department, Manipur 2015–2016. Gov. ManipurGoogle Scholar
LDA and WISA (2010) Caring for wetlands: an answer to climate change. LDA vol V, no. February, 2010Google Scholar
Government of India (2011) Census 2011Google Scholar
Government of Manipur (2012) Annual plan (2012–13)Google Scholar
Hora SL (1921) Fish and fisheries of Manipur with some observations on those of the Naga Hills. Rec Indian Museum 22:165–214CrossRefGoogle Scholar
Viswas S (1978) Fishes of Manipur, M.Sc Dissertation, J.N.UGoogle Scholar
Bhatia B (1979) Ecological study of Loktak Lake,Final Technical Report, DST, Government of India, New DelhiGoogle Scholar
Singh MP (1996) Ecology of Loktak Lake with special reference to fish and fisheries of the lake. PhD thesis, Manipur UniversityGoogle Scholar
FD (1996) Fishery Department, Government of ManipurGoogle Scholar
G. of Manipur (2016) Fishery DepartmentGoogle Scholar
Laishram J, Dey M (2014) Water quality status of Loktak Lake, Manipur, Northeast India and need for conservation measures: a study on five selected villages. Int J Sci Res Publ 4(6):1–5Google Scholar
Tuboi C, Irengbam M, Hussain SA (2017) Seasonal variations in the water quality of a tropical wetland dominated by floating meadows and its implication for conservation of Ramsar wetlands. Phys Chem Earth Parts A/B/C 103:107–114CrossRefGoogle Scholar
Eliza K, Khosa R, Gosain AK, Nema AK (2018) Developing a management plan for Loktak Lake considering Keibul Lamjao National Park and hydropower demand using a data driven modeling approach. Curr Sci 115(9):1793–1798Google Scholar
Singh T, Shyamananda RK (1994) Ramsar sites of India: Loktak Lake. World Wide Fund for Nature, New DelhiGoogle Scholar
Singh CR, Thompson JR, French JR, Kingston DG, MacKay AW (2010) Modelling the impact of prescribed global warming on runoff from headwater catchments of the Irrawaddy River and their implications for the water level regime of Loktak Lake, northeast India. Hydrol Earth Syst Sci 14(9):1745–1765CrossRefGoogle Scholar
NBSS and LUP (2001) Land capability classes of catchment area of Loktak Lake, Manipur. National Bureau of Soil Survey and Land Use Planning, Regional Centre, Jorhat and KolkataGoogle Scholar
G. of M (2013) Directorate of Environment. Manipur State Action Plan on Climate ChangeGoogle Scholar
FSI (2003) The state of forest report. Forest Survey of India, Ministry of Environment and Forests, Dehradun, IndiaGoogle Scholar
FSI (2015) State Forest Department, Government of ManipurGoogle Scholar
Eliza K, Khosa R, Gosain AK, Nema AK, Mathur S, Yadav B (2018) Modeling simulation of river discharge of Loktak Lake catchment in Northeast India. J Hydrol Eng 23(8):1–13Google Scholar
DHI (2009) MIKE 11 water quality (Ecolab) reference manual, vol cGoogle Scholar
1.Department of Civil EngineeringIndian Institute of Technology DelhiNew DelhiIndia
Khwairakpam, E., Khosa, R., Gosain, A. et al. SN Appl. Sci. (2019) 1: 491. https://doi.org/10.1007/s42452-019-0517-1
Accepted 22 April 2019 | CommonCrawl |
How We Can Make Sense of Chaos
By David S. Richeson
Dynamical systems can be chaotic and impossible to predict, but mathematicians have discovered tools to help understand them.
Maggie Chiang for Quanta Magazine
chaos theorydynamical systemshistory of sciencemathematicsQuantized ColumnsAll topics
In 1885, King Oscar II of Sweden announced a public challenge consisting of four mathematical problems. The French polymath Henri Poincaré focused on one related to the motion of celestial bodies, the so-called n-body problem. Will our solar system continue its clocklike motion indefinitely, will the planets fly off into the void, or will they collapse into a fiery solar death?
Poincaré's solution — which indicated that at least some systems, like the sun, Earth and moon, were stable — won the prestigious prize, and an accompanying article was printed for distribution in 1889. Unfortunately, his solution was incorrect.
Poincaré admitted his error and paid to have the copies of his solution destroyed (which cost more than the prize money). A month later, he submitted a corrected version. He now saw that even a system with only three bodies could behave too unpredictably — too chaotically — to be modeled. So began the field of dynamical systems.
For our purposes, a dynamical system is simply a function whose possible outputs can also be inputs. This allows us to repeatedly plug the outputs of the function back in, allowing for evolving behavior. As Poincaré's work shows, this simple premise can produce examples so complex and random that they are literally called chaotic.
An elegant way to understand Poincaré's conclusion, and bring some order to chaos, came some 70 years later. Shortly after the brilliant young topologist (and future Fields medalist) Stephen Smale wrote his first article on dynamical systems, he received a letter that led him to discover a relatively simple and ubiquitous function that explains the chaos Poincaré observed in the three-body problem. Smale called it the horseshoe.
To understand it, let's start with a simple example of a dynamical system that is not chaotic. Suppose you want to calculate the square root of 2 with only a simple calculator. A process called Newton's method says you should start with any guess — let's say 3 — and plug it into the function f (x) = $latex \frac{x}{2}$ + $latex \frac{1}{x}$. The output, f (3) = 1.8333333, is closer to the true value than the input. To get even closer, plug the output back into the function: f (1.8333333) = 1.4621212. Doing this three more times yields 1.4142136, the likely limit of your calculator's accuracy.
Writing this sixth approximation as $latex f \bigg( f \Big( f \big ( f ( f (3) \big ) \Big) \bigg) $ is awkward, so instead we write f 5 (3), and we call the infinite sequence of outputs the "orbit"of x. It helps to think of each iteration as marking ticks of a clock, and to think of the orbit as hopping along the number line, approaching $latex \sqrt{2}$.
In this example, we call $latex \sqrt{2}$ an attracting fixed point: a fixed point because it yields the fixed orbit $latex \sqrt{2}$, $latex \sqrt{2}$, $latex \sqrt{2}$ …, and attracting because, like a black hole, it sucks in the orbits of nearby points.
But again, not all dynamical systems exhibit such simple and predictable behavior. A dynamical system can have orbits that cycle periodically through a finite set of points, march off to infinity, or exhibit no apparent order.
To understand these concepts, which are central to chaotic systems, consider a particularly illuminating example called the tent map, T, defined for values of x between 0 and 1. Much like a candymaker pulling taffy, it stretches that interval to twice its length, folds it in half, and sets it back on the original interval. That means 0 and 1 both map to 0, and $latex \frac{1}{2}$ maps to 1. Because values produced by the tent map are also between 0 and 1, it can be a dynamical system. Iterating the function, as with Newton's method, means repeating this process of stretching and folding.
As in the $latex \sqrt{2}$ example, the tent map has fixed points, 0 and $latex \frac{2}{3}$. But it also has an orbit that alternates between two points, $latex \frac{2}{5}$ and $latex \frac{4}{5}$ — we call this a period-2 orbit — and a period-3 orbit, which cycles through $latex \frac{2}{9}$, $latex \frac{4}{9}$ and $latex \frac{8}{9}$. And surprisingly, because the tent map has a point that produces an orbit of period 3, we can prove that it has points of every period — no matter what positive integer you pick, there will be a repeating orbit with that many stops in the path.
The first to discover this fact about functions on the real number line was the Ukrainian mathematician Alexander Sharkovsky. However, his 1964 paper on the subject remained unknown outside Eastern Europe, and the result only became known when the University of Maryland mathematicians Tien-Yien Li and James Yorke independently rediscovered it in 1975. They proved that such a dynamical system also has orbits with no discernible pattern, like the orbit of the point $latex \sqrt{2}$ – 1 for the tent map. They wrote that "period 3 implies chaos," coining the mathematical term "chaos" in the process.
More interestingly, even though the points $latex \sqrt{2}$ – 1 and $latex \sqrt{2}$ – 0.999 are close together, their orbits separate quickly: For instance, T9($latex \sqrt{2}$ – 1) = 0.07734 while T9($latex \sqrt{2}$ – 0.999) = 0.58934. This phenomenon is known as "sensitive dependence on initial conditions," or more informally as the butterfly effect. Small initial changes can lead to big outcome changes. As the mathematician and meteorologist Edward Lorenz put it, "Does the flap of a butterfly's wings in Brazil set off a tornado in Texas?" While there is no settled definition of chaos, this sensitive dependence is one of its hallmarks.
To help understand these chaotic systems — and Smale's horseshoe — let's employ what at first might seem like a crude technique. First, divide the interval of possible values into halves labeled L and R. Then, as an orbit progresses, simply note which half the next iteration lands on. This sequence is the orbit's "itinerary." For instance, the itinerary of the period-3 orbit of $latex \frac{2}{9}$ is LLRLLRLLR… since $latex \frac{2}{9}$ and $latex \frac{4}{9}$ are in L and $latex \frac{8}{9}$ is in R. The itinerary for the orbit of $latex \sqrt{2}$ – 1 begins LRLRRRRRLL.
Representing orbits by their itineraries looks like a huge loss of information, but it's not. That's because every possible sequence of L's and R's corresponds to one and only one point. The orbit of $latex \frac{2}{9}$ is the only one with itinerary LLRLLRLLR…, for example. This feature provides a convenient tool for analyzing the dynamics of the tent map. It reveals that points are periodic precisely when their itineraries are. It also allows us to determine the precise location of a point from any given itinerary.
Now let's expand the idea of the tent map into more dimensions, and finally meet Smale's horseshoe function, h. Start with a square, stretch it into a skinny rectangle, fold it into a horseshoe, and place it over the original square.
As with all dynamical systems, we can iterate this process — stretch, fold, stretch, fold, stretch, fold — yielding horseshoes inside horseshoes.
The horseshoe map is invertible — in addition to knowing where a point x is going, as described by h(x), we know where it came from, as described by h-1 (x). Applying h-1 to the original square results in a new horseshoe at right angles to the first. If you keep going, you'll get more horseshoes inside the new horseshoe.
Now place the images of these maps on top of each other:
There's a set of points, which we'll call H, that consists of the intersection of all the horizontal and vertical horseshoes. This is where the interesting action happens.
Just like the tent map, the horseshoe map can be analyzed using itineraries. Let's define L to be the left side of the vertical horseshoe, and R to be the right side.
Now if we take any point in H, we can compute the itinerary of its forward orbit. And because the horseshoe is invertible, we can determine the itinerary for the backward orbit as well.
For instance, let's say we start with a point in region L and when we run the forward orbit, we get LRRLRR…, onward to infinity. When we run the backward orbit, we get LRRLRR…. So we can write its itinerary as …LRRLRRLRRLRR…, with the underline denoting our starting point. This is a period-3 orbit.
Now do this for every point in H.
With those itineraries in hand, we have a complete description of the horseshoe map — we understand it fully — even though (as with the tent map) it possesses chaotic dynamics: points of every period, sensitive dependence on initial conditions, and so on.
Now we can see how Smale's horseshoe can describe more clearly the chaos in Poincaré's three-body problem. In his chaotic horseshoe, there must be a fixed point (let's call it p) with the itinerary …LLLLLLL…, because there exist points of every possible itinerary. That means there must also be a point — let's call this one q — with the itinerary …LLLRLLL…. The forward orbit of this point approaches p (we say "into the future"), as does its backward orbit ("into the past").
Meanwhile, Poincaré had observed that the fixed points of some functions possess an attracting and a repelling direction. This means there is a curve of points moving toward the fixed point, like a vein returning blood to the heart, and a curve of points moving away, like an artery sending blood into the body. If these curves cross, the points of intersection, called homoclinic points, have the curious property that they approach the fixed point both in the future and in the past.
Smale pointed out that q is a homoclinic point, since its orbit approaches p in the future and the past. Crucially, Smale also proved the converse: If you have a homoclinic point (as Poincaré did), then you have a horseshoe. And since we know that horseshoes are chaotic, Poincaré's system must be similarly chaotic. In other words, Poincaré's complicated system — and any system with a homoclinic point — behaves like Smale's simpler one. Understand the horseshoe, and you can get a handle on chaos itself.
Smale also proved that this chaos is robust. If we were to map the square to a slightly different horseshoe, the resulting map would possess identical chaotic behavior. Despite the local instability in the system, the global behavior is extremely stable. That is, this chaos is not fleeting, even under small perturbations. Chaos itself turns out to be stable.
Chaos theory would go on to grab the public's attention. It was presented as "a new paradigm in scientific modeling" in a 1986 article in Scientific American, and the subtitle of James Gleick's bestselling 1987 book Chaos is provocative: "Making a New Science." Chaos sprang up in pop culture such as in the 1990 novel Jurassic Park and in Tom Stoppard's 1993 play Arcadia.
While some mathematicians bristled at the hype — dynamical systems was nothing new, after all — the impact of chaotic systems on mathematics and science was profound. The existence of chaos showed that even in a deterministic system, we may be unable to accurately predict the future because of its sensitive dependence on initial conditions. But because of tools like Smale's horseshoe, we can still extract useful information from these systems.
A Deepening Crisis Forces Physicists to Rethink Structure of Nature's Laws | CommonCrawl |
InvBFM: finding genomic inversions from high-throughput sequence data based on feature mining
Zhongjia Wu1,
Yufeng Wu2 &
Jingyang Gao1
Genomic inversion is one type of structural variations (SVs) and is known to play an important biological role. An established problem in sequence data analysis is calling inversions from high-throughput sequence data. It is more difficult to detect inversions because they are surrounded by duplication or other types of SVs in the inversion areas. Existing inversion detection tools are mainly based on three approaches: paired-end reads, split-mapped reads, and assembly. However, existing tools suffer from unsatisfying precision or sensitivity (eg: only 50~60% sensitivity) and it needs to be improved.
In this paper, we present a new inversion calling method called InvBFM. InvBFM calls inversions based on feature mining. InvBFM first gathers the results of existing inversion detection tools as candidates for inversions. It then extracts features from the inversions. Finally, it calls the true inversions by a trained support vector machine (SVM) classifier.
Our results on real sequence data from the 1000 Genomes Project show that by combining feature mining and a machine learning model, InvBFM outperforms existing tools. InvBFM is written in Python and Shell and is available for download at https://github.com/wzj1234/InvBFM.
It is widely known that genomic variation plays an important role in shaping the genetic diversity of populations. Recently, high-throughput sequencing data becomes the mature type of genomic data used in research. Finding genomic variations from high-throughput sequence data has become a major objective for large-scale genomics studies, such as the 1000 Genomics Project [1]. There are various types of genomic variations, including single nucleotide polymorphisms (SNPs), short (say 50 bp or less) deletions or insertions (indels) and SVs (which are usually longer than 50 bp). There are different types of SVs, including insertion, deletion, copy number variation, and inversion. While some types of SVs (e.g. deletion) have been very actively studied (e.g. [2,3,4,5,6]), other types of SVs such as inversion are less studied. Different from deletion calling where there are a growing list of deletion calling tools, there are less tools for finding inversions. We note that the impact of inversions can have large effect on an organism [7]. For example, inversion inhibits recombination in heterokaryons, which may lead to distinct gene-expression patterns. Inversion may also directly influence gene structure or regulation in different ways as well as secondary mutations in the offspring. In addition, inversion may cause diseases such as hemophilia A [8], Hunter syndrome [9] and increase the risk of infertility or miscarriage [10]. Therefore, developing effective inversion calling tools may potentially be very useful.
We focus on calling inversions from mapped sequence data (i.e. paired-end reads). Calling genomic variations from mapped sequence reads is usually based on the following three mapped sequence properties (called signatures): insert size from mapped paired-end reads (ISPE), split-mapped reads, and read depth. Note that there are also approaches performing sequence assembly. Existing inversion calling methods usually use these signatures. Pindel [11] only uses split-mapped reads. Delly [12] and Lumpy [13] are based on paired-end reads and split-mapped reads. All these three tools have been used in the 1000 Genomes Project. We note that although Delly and Lumpy use the same sets of signatures, they appear to perform differently. This implies that these tools are individually engineered in different aspects in order to call inversions more accurately. Our experience indicates that none of these tools clearly outperforms the others. A natural approach for calling inversions accurately is using machine learning: we extract various sequence features and treat inversion calling as a classification problem in machine learning. Previously, we have developed machine learning approaches for finding deletions from sequence data [14,15,16,17]. A main challenge for finding inversions from sequence data with machine learning is that inversions are relatively rare. There are not many known inversions in the benchmark data (e.g. from the 1000 Genomes Project).
In this paper, we develop a new inversion calling approach, called InvBFM. InvBFM uses multiple relevant sequence properties (called features). InvBFM mines features that are unique to both wild-type sequence and inversions, and trains a model based on these features using simulated data. Then InvBFM calls inversions based on the model with real data by examining each candidate inversion site found by multiple inversion calling results. We demonstrate that InvBFM outperforms existing inversion calling tools on real data. InvBFM in Python and Shell is available for download at https://github.com/wzj1234/InvBFM.
Analysis of features
We first analyze the correlation between the numerical features and the target value. This helps to evaluate the feasibility of using the features of simulated data to train the SVM classifier and then generalize the real data features. InvBFM converts the 15-dimensional feature space which the initially extracted features of inversion are mapped into two dimensions via the principal component analysis (PCA) in order to be visualized as shown in Fig.1. In Fig.1, the blue dots represent the inversions' features of the simulated inversions, which are extracted from simulated BAM files and then converted into 2-dimensional feature. The red dots indicate the features of wild-type in simulated data, which are also extracted from simulated data and then mapped into 2-dimensional features. The green dots correspond to the converted 2-dimensional features which are extracted from the 102 real samples near the inversion area recorded in the benchmark. It is evident that the blue and green dots, representing the inversions' features, are well clustered. The red dots, which represent wild-type features, are clearly separated. This shows that the 15 extracted features are justifiable. That is, the features used by InvBFM are correlated well to whether the target value of the inversion occurs or not.
Visualization of features in simulation and benchmark. All features involved in this image are processed by PCA. The red dots mean wild-type features from simulated data, and the blue dots mean the inversions' features from simulated data. The green dots' features come from benchmark of inversion in real data. The green and blue dots are clustered and separated from the red dots, which indicate the features mined by InvBFM are effective
In order to further demonstrate that the feature selection indeed selects more effective features, we compare the detection results by choosing a different number of inversion features, as shown in Table 1. The threshold here is chosen to be three times ISPE. Features15 refers to the results of calling inversion for 15 features extracted initially, and Feature8 refers to the results of 8 features selected only by chi-square test. InvBFM uses 8 features selected in Feature8 and two additional features. It is obvious that the Feature8 has 2% improvement in precision, and about 1% improvement in recall over the original Feature15. Although InvBFM only leads to a small improvement in precision and recall compared to Feature8, it is the best of the three. Thus, feature selection in InvBFM using both chi-square test and experience indeed makes the detection of inversion more effective.
Table 1 Comparison of different features
For measuring the difference in performance between selected the 8 and 10 features, we use a 10-fold cross-validation and take the average values of 100 times from all simulated data and the 204 real samples are downloaded from 1000 Genomes Project to confirm the effort of the two additional features based on experience. The dataset contains a total of 5491 breakpoints of inversion and wild-type from simulated data and real data. The comparison of classification results of the 10 features by InvBFM and the 8 features by chi-square test is shown in Table 2, and we just focus on the occurrence of inversion. In this experiment, we consider it is the true inversion and set label 1 of the breakpoints within 3 times ISPE of the benchmark, otherwise set label 0 as wild-type. Then, we extract the 8 features and the 10 features around each breakpoint respectively mentioned above. For 10-fold cross-validation, we random shuffle the whole dataset, doing 100 times of 10-fold cross-validation and evaluating the average results of the validation set. In this part, we only verify the critical SVM process. Our results here are based on the comparison of expected labels. The parameter settings of the SVM in Table 2 are the same as those in Table 1. As shown by the mean results of 100 times at 10-fold cross-validations, it is verified that the 10 features selected by InvBFM are better than the 8 features in recall and F1-score, although their precisions are very similar.
Table 2 Result of 10-fold cross-validation on the 8 features and 10 features
SVM is used by InvBFM to generalize real samples' features. Different parameters in SVM may lead to different results. Table 3 shows the performance about precision, recall and F1-score of InvBFM in SVM with different parameters. Meanwhile, in Radial Basis Function (RBF), the kernel of SVM, penalty factor sets 8 and gamma sets 0.01 give the best result.
Table 3 Precision, recall and F1-score of InvBFM in SVM with different parameter
In order to evaluate the impact of different inversion frequencies in tools, we analyze the tools' sensitivity in different inversion frequencies. The result is shown in Table 4. InvBFM gets the best results than existing inversion callers.
Table 4 Sensitivity of multiple tools on different frequencies on 102 samples of 1000 Genomes Project
Moreover, the sensitivity of inversion length on the detected results are shown in Table 5. Lumpy and LumpyEP are the most unstable. InvBFM also performs the best.
Table 5 Sensitivity of multiple tools on inversion length on 102 samples of the 1000 Genomes Project
Accuracy of inversion calling
The inversion in our study is assumed to be longer than the mean ISPE of the sample. We ignore the case where the inversion length of candidate inversions is less than the sample mean ISPE. In addition, the basis for determining whether a predicted region is a true inversion is to compare the left and right breakpoints of the predicted inversion with those of the benchmark, with the threshold being 1, 2, or 3 times the average ISPE of the corresponding sequence reads. That is, if the distance between the called breakpoints is more than this threshold, we consider that the predicted inversion is not true. Experiments show that all tools perform better when the threshold is 3 times of the sample mean ISPE.
The comparison of the experimental results of different thresholds with different tools of inversion calls is shown in Table 6. The denominator value in the InvBFM row represents the union of the three tools. The numerator is a set of true inversion generalized by the SVM classifier of InvBFM, which is the value involved in the calculation. In addition, LumpyEP is the abbreviation of Lumpy express tool also released by Lumpy. Since LumpyEP results are not exactly the same as Lumpy, its results of calling inversion are also shown in Table 6. TP0 indicates the number of non-repetitive regions in the reference benchmark that have records and are judged to be true inversion by tools, i.e., regions of TP after removing repeat inversions. F1-score is a comprehensive evaluation indicator combining precision and recall. The relevant indicators in Table 6 are calculated by the formulas shown in eq. (1) and (2). In addition, InvBFM represents the training result of 10 features selected by the feature selection method mentioned above.
Table 6 Result of different tools with different threshold values (used to determine when a called inversion matches benchmark)
We can see from the results that Delly's recall achieves the best results among the three existing tools, regardless of different thresholds. Lumpy is the best in precision of the three existing tools in precision but with recall less than 30%, which is the lowest. Pindel is the worst in the precision and F1-score indicators. The recall and F1-score of InvBFM perform the best with the 10 features. InvBFM improves the F1-score by more than 10% than Delly, which achieves the highest recall among the existing tools. Therefore, InvBFM performs better in inversion calling compared with existing tools.
$$ FP= No. Calls- TP, FN= benchmark-{TP}_0 $$
$$ Precision=\frac{TP}{No. Calls}, Recall=\frac{TP_0}{benchmark},F1- score=\frac{2\ast Precision\ast Recall}{Precision+ Recall} $$
10-fold cross-validation was used to compare the performances of InvBFM and other tools for calling inversion. The dataset we used in this part is the same as the data in Table 2 from the simulated data and 204 real samples, which contains a total of 5491 inversion and wild-type breakpoints. The result of detected breakpoints of real data from all the tools are shown in Table 7. Under the threshold of 3 times ISPE, when the breakpoints deviation is within the threshold, it is considered to be a true inversion and we label it as 1; otherwise, it is set to 0. On this basis, we use Delly, Pindel, Lumpy, LumpyEP and InvBFM to call inversions and compare their results with the expected label to calculate various indicators, rather than comparing predicted inversion to the benchmark in Table 6. In each 10-fold cross-validation process, we randomly shuffled all the data, and divided the data set into 10 parts averagely. We used 9 parts for training and 1 part for validation. We repeat the process 10 times to make the validation set fully cover the whole data set. Only InvBFM involves the training process. So, the SVM classifier is trained using the inversion features of the training set for each training step. Since the validation set is used to assess the calling results of each tool, all the tools in Table 7 need to verify that each inversion is correctly detected in the validation set. The specific results of mean values in 100 times of 10-fold cross-validation are shown in Table 7. It is worth mentioning that since the InvBFM is based on modelling SVM using features from simulated data, Table 7 filters out the breakpoints in simulated data and only verifies the performance of the breakpoints from real data comparing their expected labels. The results in Table 7 verify that our InvBFM does perform optimally on the comprehensive performance of F1-score for detecting inversion without overfitting.
Table 7 Result of 10-fold cross-validation on the different tools
This paper proposes InvBFM as a new approach to detect inversion. Firstly, InvBFM uses Pindel, Delly, and Lumpy to generate inversion candidates. From the candidate inverting regions, the most significant features of inversions such as read pair orientation, one end unmapped and so on are assigned to specific values, and then the most effective 10 features are selected by combining chi-square test and experience. Finally, we use the SVM classifier to determine candidates as true inversions or not. All the real data in this paper comes from the 1000 Genomics Project. Because the inversions in real data are too few to train the classifiers, we use simulated data for model training and the real data for validation. The results show that our method is better than the existing three tools on recall and F1-score, ranks as second on precision, which is a little lower than Lumpy. In the future work, we will consider further mining the inversion features and exploiting the full use of real data to make inversion calling more effective.
There are only a small number of validated inversions that have been released so far. The 1000 Genomes Project released a number of inversions. However, the number of called inversions from the 1000 Genomes Project is not very large. There are only 238 inversions recorded in Chromosome 11 in 102 samples. This is far from being enough to train the SVM classifier. Therefore, simulated data is used for training.
Simulated data
In this experiment, the simulated data is used as the training set of SVM classifier. The simulated data uses the reference genome (hs37d5.fa) from the 1000 Genomes Project. SimulateSeq [18] is used to simulated BAM files with different length, ISPE, and error rate on the reference genome. In order to avoid overfitting by the specific value of the parameters, we generated 13 sets of parameters in SimulateSeq, each parameter in every set is randomly taken within a range of values. For details, the range of inversion length is 500 to 6000, the ISPE is set in 300 to 500, and the error rate range is set from 0.003 to 0.005. There are also some parameters that affect the SimulateSeq less, we set the read length range from 70 to 150, the offset from 20 to 30, and the depth from 4 to 25. It is worth mentioned that the obvious limitation of SimulateSeq is that even if the error rate is introduced, the inversion area created is relatively clean, unlike the real inversion surrounded repetition or other SVs provided in 1000 Genomes Project. However, the results of cross-validation in the previous paper shown that the clean inversion can also train a better SVM classifier without overfitting, this is the reason why we choose the SimulateSeq. The 10 selected numerical features are extracted from these BAM files. Furthermore, these features from simulated bam are normalized with the candidate features of real bam, the specific approach is to employ scale function from preprocessing of sklearn. And then put scaled simulated features into the SVM classifier for training. This leads to a trained SVM classifier.
Real data from the 1000 Genomes Project is used as the test data. Inversions released by the 1000 Genomes Project are used as the benchmark in this paper. There are not many called inversions in the benchmark: the maximum number of inversions recorded in the benchmark for each sample on chromosome 11 is no more than four. There are totally 102 samples (BAM files) as the original data source. Chromosome 11 of the first 100 samples have more frequent inversions and chromosome 11 of the last two samples both have only one inversion according to the benchmark. The BAM files for these real data are low coverage, and they are released by the 1000 Genomes Project. The benchmark used in this paper is from the 8th version of vcf file updated in May 2017. A total of 238 inversions are reported on chromosome 11 of 102 samples. In addition, in order to verify the performance of InvBFM, we added another 102 samples of chromosome 11 (a total of 204 real data samples) also from 1000 Genomes Project to complete the cross-validation.
High-level idea
InvBFM calls inversions by examining candidate inversions found by multiple inversion calling tools. The inversion calling is model-based. That is, InvBFM trains a classification model using SVM with various sequence-based features. In this paper, we use simulated data for model training. This is because there are only very limited real inversions available in the 1000 Genomes Project data release, and these real inversions are needed for validation. Our experience indicates that the model trained by the simulation data can still be useful when calling inversions in the real data.
Workflow of InvBFM
Our new method InvBFM takes mapped sequence reads on a given reference genome in the BAM format as input. There are two main parts for using InvBFM.as Fig.2 shown (i) Training model. InvBFM trains a classification model by SVM on simulated data. This model classifies a candidate inversion site to be either true inversions or wild-type from a set of collected sequence features. The sequence features are collected from the mapped reads near the inversion site. These features are informative about the presence of the inversion. (ii) Calling inversion. InvBFM extracts the same set of sequence features from sequence reads and calls inversion using the trained classification model. For calling, InvBFM runs multiple existing inversion calling tools, including Pindel, Delly and Lumpy. InvBFM then merges inversion calls from these tools to form the candidate inversion sites. InvBFM then calls inversions by examining each candidate site and classifying with the trained model. In order to improve accuracy, InvBFM mines the features of inversions and chooses a subset of more informative features in model training.
Workflow of InvBFM. It includes two major parts: (i) Training model. Bench-mark file is used to locate true inversion regions and non-SV regions, and then InvBFM extracts features from sequence reads around each label region to train a classification model by SVM. (ii) Calling inversion. Results of several tools are integrated as candidate inversion sites, then InvBFM extracts the same set of sequence features from sequence reads and calls inversion using the trained classification model
Compared with other genomic variants, inversion has some unique features. For example, read depth has been used as a main signature for calling deletions. Due to inversion is a balanced variation, read depth is not very informative. InvBFM uses some features, including the read pair orientation, one end unmapped, soft-clipped reads, concordance of ISPE and so on. Some of them are shown in Fig.3, which is produced by the Integrative Genomics Viewer (IGV) [19] on sequence data to visualize the features of inversions. It is important to note that we focus on diploid organisms in this paper. Therefore, an inversion may be presented in one or both copies of the chromosome. Also recall that we assume the sequence data have pairs (i.e. paired-end reads).
Inversion visualization in IGV. a Inversion region produces a large number of read pairs with the same orientation. The green read pair means both of the ends are mapped on the forward strand. The blue part means both of the ends are mapped on the reverse strand. b One end unmapped with the red line surrounded happens near the inversion, which means that an end mapping to the reference genome fails in the read pair near the inversion breakpoint. c When a read is mapped on a breakpoint of inversion, soft-clipped read occurs, in which the continuous bases are unmapped reflection as the color block. d ISPE is the distance between two ends in a read pair, ISPE is abnormally called discordant read pair perhaps indicates inversion occurs. e The overall effect of inversion on a genomic region
Read pair orientation
One of the most important features of inversion is that the two ends of a read pair has the same orientation while they mapped to the reference gene, which is different from the usual paired reads (where the two ends are oriented in the opposite direction). This happens when one end is outside the inversion and the other end is inside the inverted region. When reads are obtained from the paired-end sequencing technology, a read originates from the forward strand, and its mate originates from the reverse strand. In other words, two reads in a read pair usually are mapped on different strand. However, when inversion occurs, a read within the inversion area are mapped on the same strand as its mate. See Fig.3a for an illustration, the two reads linked by a straight line represent a read pair. The green reads linked by a straight line indicate that the reads in a read pair are both mapped to the forward strand of the reference and the blue indicates they are both mapped to the reverse strand. It can be observed that multiple reads mapped on the same strand are produced in the inversion region. Therefore, read orientation is an important feature of inversion.
One end unmapped
One end unmapped means that only one read of a read pair is successfully mapped to the reference. This occurs when one end of a read pair overlaps with the boundary of the inversion and becomes unmapped. As shown in Fig.3b, two reads are wrapped by a red line that are not linearly connected to another read. This means the read has a mate that is not mapped to the reference, the occurrence of inversions causes this abnormality. Within the inversion region, there are an increasing number of one end unmapped read pairs, which can be an important feature for inversion.
Soft-clipped reads
Soft-clipped read refers to a read partly mapped to the reference. Soft-clipped reads occur when one end of a read pair overlaps with the boundaries of the inversion region. Different from the one end unmapped case, the read is partially mapped (i.e. becomes a soft-clipped read). Thus, there tends to be more soft-clipped reads near the inversion boundary. This is shown in Fig.3c Color blocks indicate continuous bases that are mismatched with the reference. Each streak represents a mismatched base, and the soft-clipped read contains the color bar blocks.
ISPE stands for the insert size of pair-end reads, indicating the distance between the two ends in a read pair. In a wild-type, ISPE usually has a range, which depends on the sequencing technologies. If the observed ISPE of mapped paired-end reads falls into this normal range, we say the read pair is concordant read pair. Otherwise, we say the read pair is discordant read pair. When one end of a read pair is outside the inversion area and the other end is inside the inversion area, the observed ISPE may be different from the true ISPE and the read becomes discordant. The concordant and discordant read pairs are shown in Fig.3d. This is more likely when the inversion is longer.
Besides the above features, other features can also be valuable for calling inversions, such as a read is uniquely or multiply mapped to the reference, whether perfectly mapped to reference, mapped quality and so on.
The features described above are specific features of inversions. Figure 3e shows inversion's overall performance in IGV. It shows the case of inversion and also the case of wild-type. We can see the inversion features are enriched at the inversion site.
Feature expression
InvBFM maps the features of inversion mentioned above into numerical features for subsequent processing. Table 8 shows more details of these features. The extraction of numerical features is based on the overlapping paired-end reads of the BAM file. Since some inversion features overlap with inversion breakpoints, the scope of the InvBFM fetch features is defined to be the left and right breakpoints of the ISPE of the paired-end reads, as shown in Fig.4. We use pysam [20] to extract features from the overlapping reads. Regarding the numerical features expressed in inversion, first, we extract the key information of overlapping reads. The information includes the XT value under the XT tag of read, which indicates the read is uniquely or multiply mapped to reference. Cigartuples indicate the specific bases' mapped situations in reads. We mainly extract the number of bases of total mapped and soft-clip. Both of them contribute to the number of middle mapped quality and the number of clipped reads. We also look for its mate based on the read so that we can get the mapped direction of its mate. In addition, we also extract the length of the read, the mapped quality and its ISPE. The collection of the above information constitutes the set of read information required for the features, and the corresponding 15 numerical feature sets can be obtained by integrating the corresponding quantities of the information in the order of Table 8.
Table 8 Features. Each feature is assigned to an ID
Extracted features in InvBFM. The range of extracted features is defined as the enlargement of ISPE by both left and right breakpoints of inversion area
Feature mining
Not all the features are equally informative about finding inversions. In order to mine the most efficient features, InvBFM performs feature selection on the initial extracted numerical features. InvBFM tests the correlation between each feature and its target value by using the chi-square test. The higher chi-square value is, the closer the relationship between the feature and the target is. When calculating the chi-squared value of a feature, we set O to be the observed frequency of features. E is the expected frequency of features. O is calculated from all the features and its output. E is calculated from the mean features and mean output. To estimate the difference between the observation frequency and the expected frequency, we use the chi-square test (3) to calculate the final chi-square value.
$$ {x}^2=\sum \frac{{\left(O-E\right)}^2}{E} $$
The highest 8 chi-square values obtained by the chi-square test are features 2, 4, 6, 8, 11, 13, 14 and 15 in Table 8. These 8 features are reserved as valid features. The reason why only using the 8 features with the highest chi-square value is that the feature whose chi-square value at the 9th indicates discordant pair. It is related to concordant pair which is already selected, and also selecting discordant pair as an effective feature causes feature redundancy. In addition, according to our experience, the number of one end unmapped and the sum of mapping quality are both important features for inversions. These are not included in the above 8 features selected by chi-square test. So, these two features are also added to the list of chosen features. The final feature set selected by InvBFM contains the following 10 features: 2, 3, 4, 6, 8, 9, 11, 13, 14, 15. The 10 features of InvBFM are more effective than the 8 features of chi-square test, which have been verified in previous results.
Calling inversions
We first collect the called inversions as candidate inversions from existing tools, including Pindel, Delly and Lumpy. We use multiply tools here because existing tools for detecting inversion use different signatures: Pindel only uses split-mapped reads, and both Delly and Lumpy use ISPE of paired-end reads and split-mapped reads. This helps to find candidate inversions that are more likely to contain true inversions.
InvBFM uses the SVM classification to examine each candidate inversion to predict whether inversion occurs to get the final inversion set. Because there are a few validated inversions, the SVM classifier is trained with the simulated inversion. The SVM classifier then treats the generalization candidate inversions' features from real data to get final inversion set.
In more details, InvBFM sets the number of simulated samples Ns. Each sample has Ms1 simulated inversions. So InvBFM extracts the number of Ns*Ms1 numerical features from the simulated inversions. These features from the inversions are set to 1. Similarly, features from the wild-type region are set to 0. The extracted features from the simulated data with the labels constitute the training data, which are scaled to train the SVM classifier. On the other hand, the numerical features of the 10 features are extracted according to the candidate inversion set of the real samples. These real data features are scaled and then put into the SVM classifier modeled by the simulated data to judge whether inversion occurs. The result of 1 indicates that InvBFM determines that the region is a true inversion, and 0 indicates that the region is a wild-type. Finally, InvBFM gets the final inversion set from the candidates with called label 1. The SVM classifier of InvBFM chooses a linear kernel with the penalty factor of 0.1 and the gamma of 20.
All of the data mentioned to support our results in this paper are released by 1000 Genomes Project. For the detail, the reference genome in fa format is available at ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/technical/reference/phase2_reference_assembly_sequence/hs37d5.fa.gz, the real data of BAM format file can be downloaded at ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data, the benchmark is available at ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/integrated_sv_map/ALL.wgs.mergedSV.v8.20130502.svs.genotypes.vcf.gz.
IGV:
Integrative Genomics Viewer
ISPE:
Insert size from mapped paired-end reads
LumpyEP:
Lumpy express
PCA:
RBF:
Radial basis function
SNPs:
Single nucleotide polymorphisms
SVM:
Support vector machine
SVs:
Structural variations
The Genomes Project Consortium. A global reference for human genetic variation. Nature. 2015;526(7571):68–74.
Parikh H, Mohiyuddin M, Lam HY, Iyer H, Chen D, Pratt M, et al. Svclassify: a method to establish benchmark structural variant calls. BMC Genomics. 2016;17(1):64.
Zhang Z, Wu FX, Wang J, Li Q, Zheng R, Li M. Prioritizing disease genes by using search engine algorithm. Curr Bioinforma. 2016;11(2):195–202.
Ye K, Wang J, Jayasinghe R, et al. Systematic discovery of complex indels in human cancers. Nat Med. 2016;22(1):97.
Geng Y, Zhao Z, Zhang X, Wang W, Cui X, Ye K, et al. An improved burden-test pipeline for identifying associations from rare germline and somatic variants. BMC Genomics. 2017;753(Suppl 7):55–62.
Puig M, Casillas S, Villatoro S, Cáceres M. Human inversions and their functional consequences. Brief Funct Genomics. 2015;14(5):369–79.
Bagnall RD, Waseem N, Green PM, et al. Recurrent inversion breaking intron 1 of the factor VIII gene is a frequent cause of severe hemophilia a. Blood. 2002;99(1):168–74.
Bondeson ML, Dahl N, Malmgren H, Kleijer WJ, Tönnesen T, Carlberg BM, et al. Inversion of the ids gene resulting from recombination with ids-related sequences is a common cause of the hunter syndrome. Hum Mol Genet. 1995;4(4):615.
Arun M, Manipriya R, Aravind C, Chandralekha S. Pericentric inversion of chromosome 9 causing infertility and subsequent successful in vitrofertilization. Niger Med J. 2016;57(2):142–4.
Ye K, Schulz MH, Long Q, Apweiler R, Ning Z. Pindel: a pattern growth approach to detect break points of large deletions and medium sized insertions from paired-end short reads. Bioinformatics. 2009;25(21):2865.
Rausch T, Zichner T, Schlattl A, Stütz AM, Benes V, Korbel JO. Delly: structural variant discovery by integrated paired-end and split-read analysis. Bioinformatics. 2012;28(18):i333.
Layer RM, Chiang C, Quinlan AR, Hall IM. Lumpy: a probabilistic framework for structural variant discovery. Genome Biol. 2012;15(6):R84.
Cai L, Chu C, Zhang X, Wu Y, Gao J. Concod: an effective integration framework of consensus-based calling deletions from next-generation sequencing data. Int J Data Mining Bioinforma. 2017;17(2):153. https://doi.org/10.1109/BIBM.2016.7822495.
Cai L, Gao J, Wu Y, Zhang X, Chu C. Concod: an effective integration framework of consensus-based calling deletions from next-generation sequencing data. Int J Data Mining Bioinform. 2017;17(2):153.
Chu C, Li X, Wu Y. Splicejumper: a classification-based approach for calling splicing junctions from rna-seq data. BMC Bioinformatics. 2015;16(S17):S10.
Chu C, Zhang J, Wu Y. Gindel: accurate genotype calling of insertions and deletions from low coverage population sequence reads. PLoS One. 2014;9(11):e113324.
NGS Sequence Simulator. https://sourceforge.net/projects/simulateseq/files/0.2.2. Accessed 17 Sept 2018.
Thorvaldsdóttir H, Robinson JT, Mesirov JP. Integrative genomics viewer (IGV): high-performance genomics data visualization and exploration. Brief Bioinform. 2013;14(2):178–92.
Pysam. https://github.com/pysam-developers/pysam/releases/tag/v0.13.0. Accessed 17 Sept 2018.
Project and publication costs are supported by Beijing Natural Science Foundation (5182018) and the Fundamental Research Funds for the Central Universities (PYBZ1834) of J.G.; Y.W. is partly supported by a grant from US National Science Foundation (III-1526415). The funding body didn't play any roles in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript.
College of Information Science and Technology, Beijing University of Chemical Technology, Beijing, People's Republic of China
Zhongjia Wu & Jingyang Gao
Department of Computer Science and Engineering, University of Connecticut, Storrs, Connecticut, USA
Yufeng Wu
Zhongjia Wu
Jingyang Gao
ZJW, YFW and JYG designed this study. ZJW wrote and performed the InvBFM in Python and Shell. YFW, JYG and ZJW wrote the manuscript. All authors read and approved the final manuscript.
Correspondence to Jingyang Gao.
Wu, Z., Wu, Y. & Gao, J. InvBFM: finding genomic inversions from high-throughput sequence data based on feature mining. BMC Genomics 21, 173 (2020). https://doi.org/10.1186/s12864-020-6585-1
High-throughput sequencing
Structural variation | CommonCrawl |
Communications on Pure & Applied Analysis
January 2016 , Volume 15 , Issue 1
Average number of lattice points in a disk
Sujay Jayakar and Robert S. Strichartz
2016, 15(1): 1-8 doi: 10.3934/cpaa.2016.15.1 +[Abstract](1696) +[PDF](816.0KB)
The difference between the number of lattice points in a disk of radius $\sqrt{t}/2\pi$ and the area of the disk $t/4\pi$ is equal to the error in the Weyl asymptotic estimate for the eigenvalue counting function of the Laplacian on the standard flat torus. We give a sharp asymptotic expression for the average value of the difference over the interval $0 \leq t \leq R$. We obtain similar results for families of ellipses. We also obtain relations to the eigenvalue counting function for the Klein bottle and projective plane.
Sujay Jayakar, Robert S. Strichartz. Average number of lattice points in a disk. Communications on Pure & Applied Analysis, 2016, 15(1): 1-8. doi: 10.3934/cpaa.2016.15.1.
Average error for spectral asymptotics on surfaces
Robert S. Strichartz
2016, 15(1): 9-39 doi: 10.3934/cpaa.2016.15.9 +[Abstract](2095) +[PDF](562.2KB)
Let $N(t)$ denote the eigenvalue counting function of the Laplacian on a compact surface of constant nonnegative curvature, with or without boundary. We define a refined asymptotic formula $\widetilde N(t)=At+Bt^{1/2}+C$, where the constants are expressed in terms of the geometry of the surface and its boundary, and consider the average error $A(t)=\frac 1 t \int^t_0 D(s)\,ds$ for $D(t)=N(t)-\widetilde N(t)$. We present a conjecture for the asymptotic behavior of $A(t)$, and study some examples that support the conjecture.
Robert S. Strichartz. Average error for spectral asymptotics on surfaces. Communications on Pure & Applied Analysis, 2016, 15(1): 9-39. doi: 10.3934/cpaa.2016.15.9.
Large time behavior of solutions for a nonlinear damped wave equation
Hiroshi Takeda
2016, 15(1): 41-55 doi: 10.3934/cpaa.2016.15.41 +[Abstract](2095) +[PDF](437.7KB)
We study the large time behavior of small solutions to the Cauchy problem for a nonlinear damped wave equation. We proved that the solution is approximated by the Gauss kernel with suitable choice of the coefficients and powers of $t$ for $N+1$ th order for all $N \in \mathbb{N}$. Our analysis is based on the approximation theorem of the linear solution by the solution of the heat equation [37]. In particular, as pointed out by Galley-Raugel [4], we explicitly observe that from third order expansion, the asymptotic behavior of the solutions of a nonlinear damped wave equation is different from that of a nonlinear heat equation.
Hiroshi Takeda. Large time behavior of solutions for a nonlinear damped wave equation. Communications on Pure & Applied Analysis, 2016, 15(1): 41-55. doi: 10.3934/cpaa.2016.15.41.
Existence and nonuniqueness of homoclinic solutions for second-order Hamiltonian systems with mixed nonlinearities
Dong-Lun Wu, Chun-Lei Tang and Xing-Ping Wu
In this paper, we study the existence of homoclinic solutions to the following second-order Hamiltonian systems \begin{eqnarray} \ddot{u}(t)-L(t)u(t)+\nabla W(t,u(t))=0,\quad \forall t\in R, \end{eqnarray} where $L(t)$ is a symmetric and positive definite matrix for all $t\in R$. The nonlinear potential $W$ is a combination of superlinear and sublinear terms. By different conditions on the superlinear and sublinear terms, we obtain existence and nonuniqueness of nontrivial homoclinic solutions to above systems.
Dong-Lun Wu, Chun-Lei Tang, Xing-Ping Wu. Existence and nonuniqueness of homoclinic solutions for second-order Hamiltonian systems with mixed nonlinearities. Communications on Pure & Applied Analysis, 2016, 15(1): 57-72. doi: 10.3934/cpaa.2016.15.57.
Large-time behavior of liquid crystal flows with a trigonometric condition in two dimensions
Jishan Fan and Fei Jiang
In this paper, we study the large-time behavior of weak solutions to the initial-boundary problem arising in a simplified Ericksen-Leslie system for nonhomogeneous incompressible flows of nematic liquid crystals with a transformation condition of trigonometric functions (called by trigonometric condition for simplicity) posed on the initial direction field in a bounded domain $\Omega\subset \mathbb{R}^2$. We show that the kinetic energy and direction field converge to zero and an equilibrium state, respectively, as time goes to infinity. Further, if the initial density is away from vacuum and bounded, then the density, and velocity and direction fields exponential decay to an equilibrium state. In addition, we also show that the weak solutions of the corresponding compressible flows converge {an equilibrium} state.
Jishan Fan, Fei Jiang. Large-time behavior of liquid crystal flows with a trigonometric condition in two dimensions. Communications on Pure & Applied Analysis, 2016, 15(1): 73-90. doi: 10.3934/cpaa.2016.15.73.
Multiple nontrivial solutions to a $p$-Kirchhoff equation
Anran Li and Jiabao Su
2016, 15(1): 91-102 doi: 10.3934/cpaa.2016.15.91 +[Abstract](1902) +[PDF](406.4KB)
In this paper, by computing the relevant critical groups, we obtain nontrivial solutions via Morse theory to the nonlocal $p$-Kirchhoff-type quasilinear elliptic equation \begin{eqnarray} (P)\quad\quad &&\displaystyle\bigg[M\bigg(\int_\Omega|\nabla u|^p dx\bigg)\bigg]^{p-1}(-\Delta_pu) = f(x,u), \quad x\in\Omega,\\ && u=0, \quad x\in \partial \Omega, \end{eqnarray} where $\Omega \subset \mathbb R^N$ is a bounded open domain with smooth boundary $\partial \Omega$ and $N \geq 3$.
Anran Li, Jiabao Su. Multiple nontrivial solutions to a $p$-Kirchhoff equation. Communications on Pure & Applied Analysis, 2016, 15(1): 91-102. doi: 10.3934/cpaa.2016.15.91.
Concentrating ground-state solutions for a class of Schödinger-Poisson equations in $\mathbb{R}^3$ involving critical Sobolev exponents
Yi He, Lu Lu and Wei Shuai
2016, 15(1): 103-125 doi: 10.3934/cpaa.2016.15.103 +[Abstract](2192) +[PDF](508.3KB)
We are concerned with standing waves for the following Schrödinger-Poisson equation with critical nonlinearity: \begin{eqnarray} && - {\varepsilon ^2}\Delta u + V(x)u + \psi (x)u = \lambda W(x){\left| u \right|^{p - 2}}u + {\left| u \right|^4}u\;\;{\text{ in }}\mathbb{R}^3, \\ && - {\varepsilon ^2}\Delta \psi = {u^2}\;\;{\text{ in }}\mathbb{R}^3, u>0, u \in {H^1}(\mathbb{R}^3), \end{eqnarray} where $\varepsilon $ is a small positive parameter, $\lambda > 0$, $3 < p \le 4$, $V$ and $W$ are two potentials. Under proper assumptions, we prove that for $\varepsilon > 0$ sufficiently small, the above problem has a positive ground-state solution ${u_\varepsilon }$ by using a monotonicity trick and a new version of global compactness lemma. Moreover, we use another global compactness method due to [C. Gui, Commun. Partial Differential Equations 21 (1996) 787-820] to show that ${u_\varepsilon }$ concentrates around a set which is related to the set where the potential $V(x)$ attains its global minima or the set where the potential $W(x)$ attains its global maxima as $\varepsilon \to 0$. As far as we know, the existence and concentration behavior of the positive solutions to the Schrödinger-Poisson equation with critical nonlinearity $g(u): = \lambda W(x)|u{|^{p - 2}}u + |u{|^4}u$ $(3
Yi He, Lu Lu, Wei Shuai. Concentrating ground-state solutions for a class of Sch\u00F6dinger-Poisson equations in $\\mathbb{R}^3$ involving critical Sobolev exponents. Communications on Pure & Applied Analysis, 2016, 15(1): 103-125. doi: 10.3934/cpaa.2016.15.103.
Riemann problem for the relativistic generalized Chaplygin Euler equations
Meixiang Huang and Zhi-Qiang Shao
The Riemann problem for the relativistic generalized Chaplygin Euler equations is considered. Its two characteristic fields are genuinely nonlinear, but the nonclassical solutions appear. The formation of mechanism for $\delta-$shock is analyzed, that is the one-shock curve and the two-shock curve do not intersect each other in the phase plane. The Riemann solutions are constructed, and the generalized Rankine-Hugoniot conditions and the $\delta-$entropy condition are clarified. Moreover, under the generalized Rankine-Hugoniot conditions and entropy condition, we constructively obtain $\delta-$shock waves.
Meixiang Huang, Zhi-Qiang Shao. Riemann problem for the relativistic generalized Chaplygin Euler equations. Communications on Pure & Applied Analysis, 2016, 15(1): 127-138. doi: 10.3934/cpaa.2016.15.127.
Curved fronts of monostable reaction-advection-diffusion equations in space-time periodic media
Zhen-Hui Bu and Zhi-Cheng Wang
This paper is to study traveling fronts of reaction-diffusion equations with space-time periodic advection and nonlinearity in $\mathbb{R}^N$ with $N\geq3$. We are interested in curved fronts satisfying some ``pyramidal" conditions at infinity. In $\Bbb{R}^3$, we first show that there is a minimal speed $c^{*}$ such that curved fronts with speed $c$ exist if and only if $c\geq c^{*}$, and then we prove that such curved fronts are decreasing in the direction of propagation. Furthermore, we give a generalization of our results in $\mathbb{R}^N$ with $N\geq4$.
Zhen-Hui Bu, Zhi-Cheng Wang. Curved fronts of monostable reaction-advection-diffusion equations inspace-time periodic media. Communications on Pure & Applied Analysis, 2016, 15(1): 139-160. doi: 10.3934/cpaa.2016.15.139.
Serrin-type blowup criterion for full compressible Navier-Stokes-Maxwell system with vacuum
Xiaofeng Hou and Limei Zhu
In this paper, we establish a Serrin-type blowup criterion for the Cauchy problem of the three dimensional compressible Navier-Stokes-Maxwell system, which states a classical solution exists globally, provided that the velocity satisfies Serrin's condition and that the $L_t^\infty L_x^\infty$ of density $\rho$ and the $L^2_tL_x^2$ of $\nabla^2 E$ are bounded. In particular, this criterion is analogous to the well-known Serrin's blowup criterion for the three-dimensional compressible Navier-Stokes equations. Moreover, it is independent of the temperature and magnetic field. It should be noted that it is the first result about the possibility of global existence of classical solution for the full Navier-Stokes-Maxwell system.
Xiaofeng Hou, Limei Zhu. Serrin-type blowup criterion for full compressible Navier-Stokes-Maxwell system with vacuum. Communications on Pure & Applied Analysis, 2016, 15(1): 161-183. doi: 10.3934/cpaa.2016.15.161.
One Class of Sobolev Type Equations of Higher Order with Additive "White Noise"
Angelo Favini, Georgy A. Sviridyuk and Alyona A. Zamyshlyaeva
Sobolev type equation theory has been an object of interest in recent years, with much attention being devoted to deterministic equations and systems. Still, there are also mathematical models containing random perturbation, such as white noise; these models are often used in natural experiments and have recently driven a large amount of research on stochastic differential equations. A new concept of ``white noise", originally constructed for finite dimensional spaces, is extended here to the case of infinite dimensional spaces. The main purpose is to develop stochastic higher-order Sobolev type equation theory and provide some practical applications. The main idea is to construct ``noise" spaces using the Nelson -- Gliklikh derivative. Abstract results are applied to the Boussinesq -- Lòve model with additive ``white noise" within Sobolev type equation theory. Because of their usefulness, we mainly focus on Sobolev type equations with relatively p-bounded operators. We also use well-known methods in the investigation of Sobolev type equations, such as the phase space method, which reduces a singular equation to a regular one, as defined on some subspace of the initial space.
Angelo Favini, Georgy A. Sviridyuk, Alyona A. Zamyshlyaeva. One Class of Sobolev Type Equations of Higher Order with Additive \"White Noise\". Communications on Pure & Applied Analysis, 2016, 15(1): 185-196. doi: 10.3934/cpaa.2016.15.185.
Existence and uniqueness of a solution for a class of parabolic equations with two unbounded nonlinearities
Dominique Blanchard, Olivier Guibé and Hicham Redwane
In this paper we prove the existence and uniqueness of a renormalized solution for nonlinear parabolic equations whose model is \begin{eqnarray} \frac{\partial b(u)}{\partial t} - div\big(a(x,t,u,\nabla u)\big)=f+ div (g), \end{eqnarray} where the right side belongs to $L^{1}(Q)+L^{p'}(0,T;W^{-1,p'}(\Omega))$, where $b(u)$ is a real function of $u$ and where $-div(a(x,t,u,\nabla u))$ is a Leray-Lions type operator with growth $|\nabla u|^{p-1}$ in $\nabla u$, but without any growth assumption on $u$.
Dominique Blanchard, Olivier Guib\u00E9, Hicham Redwane. Existence and uniqueness of a solution for a class of parabolic equations with two unbounded nonlinearities. Communications on Pure & Applied Analysis, 2016, 15(1): 197-217. doi: 10.3934/cpaa.2016.15.197.
On the Swift-Hohenberg equation with slow and fast dynamics: well-posedness and long-time behavior
Andrea Giorgini
We propose a mathematical analysis of the Swift-Hohenberg equation arising from the phase field theory to model the transition from an unstable to a (meta)stable state. We also consider a recent generalization of the original equation, obtained by introducing an inertial term, to predict fast degrees of freedom in the system. We formulate and prove well-posedness results of the concerned models. Afterwards, we analyse the long-time behavior in terms of global and exponential attractors. Finally, by reading the inertial term as a singular perturbation of the Swift-Hohenberg equation, we construct a family of exponential attractors which is Hölder continuous with respect to the perturbative parameter of the system.
Andrea Giorgini. On the Swift-Hohenberg equation with slow and fast dynamics: well-posedness and long-time behavior. Communications on Pure & Applied Analysis, 2016, 15(1): 219-241. doi: 10.3934/cpaa.2016.15.219.
Global boundedness versus finite-time blow-up of solutions to a quasilinear fully parabolic Keller-Segel system of two species
Miaoqing Tian and Sining Zheng
This paper deals with two-species quasilinear parabolic-parabolic Keller-Segel system $ u_{it}=\nabla\cdot(\phi_i(u_i)\nabla u_i)-\nabla\cdot(\psi_i(u_i)\nabla v)$, $i=1,2$, $v_t=\Delta v-v+u_1+u_2$ in $\Omega\times (0,T)$, subject to the homogeneous Neumann boundary conditions, with bounded domain $\Omega\subset\mathbb{R}^n$, $n\geq2$. We prove that if $\frac{\psi_i(u_i)}{\phi_i(u_i)}\leq C_iu_i^{\alpha_i}$ for $u_i>1$ with $0<\alpha_i<\frac{2}{n}$ and $C_i>0$, $i=1,2$, then the solutions are globally bounded, while if $\frac{\psi_1(u_1)}{\phi_1(u_1)}\geq C_1u_1^{\alpha_1}$ for $u_1>1$ with $\Omega=B_R$, $\alpha_1>\frac{2}{n}$, then for any radial $u_{20}\in C^0(\overline{\Omega})$ and $m_1>0$, there exists positive radial initial data $u_{10}$ with $\int_\Omega u_{10}=m_1$ such that the solution blows up in a finite time $T_{\max}$ in the sense $\lim_{{t\rightarrow T_{\max}}} \|u_1(\cdot,t)+u_2(\cdot,t)\|_{L^{\infty}(\Omega)}=\infty$. In particular, if $\alpha_1>\frac{2}{n}$ with $0<\alpha_2<\frac{2}{n}$, the finite time blow-up for the species $u_1$ is obtained under suitable initial data, a new phenomenon unknown yet even for the semilinear Keller-Segel system of two species.
Miaoqing Tian, Sining Zheng. Global boundedness versus finite-time blow-up of solutions to a quasilinear fully parabolic Keller-Segel system of two species. Communications on Pure & Applied Analysis, 2016, 15(1): 243-260. doi: 10.3934/cpaa.2016.15.243.
Blow-up scaling and global behaviour of solutions of the bi-Laplace equation via pencil operators
Pablo Álvarez-Caudevilla and V. A. Galaktionov
As the main problem, the bi-Laplace equation \begin{eqnarray} \Delta^2 u=0 \quad (\Delta=D_x^2+D_y^2) \end{eqnarray} in a bounded domain $\Omega \subset R^2$, with inhomogeneous Dirichlet or Navier-type conditions on the smooth boundary $\partial \Omega$ is considered. In addition, there is a finite collection of curves \begin{eqnarray} \Gamma = \Gamma_1\cup...\cup\Gamma_m \subset \Omega, \end{eqnarray} on which we assume homogeneous Dirichlet conditions $u=0$, focusing at the origin $0 \in \Omega$ (the analysis would be similar for any other point). This makes the above elliptic problem overdetermined. Possible types of the behaviour of solution $u(x,y)$ at the tip $0$ of such admissible multiple cracks, being a singularity point, are described, on the basis of blow-up scaling techniques and spectral theory of pencils of non self-adjoint operators. Typical types of admissible cracks are shown to be governed by nodal sets of a countable family of harmonic polynomials, which are now represented as pencil eigenfunctions, instead of their classical representation via a standard Sturm--Liouville problem. Eventually, for a fixed admissible crack formation at the origin, this allows us to describe all boundary data, which can generate such a blow-up crack structure. In particular, it is shown how the co-dimension of this data set increases with the number of asymptotically straight-line cracks focusing at 0.
Pablo \u00C1lvarez-Caudevilla, V. A. Galaktionov. Blow-up scaling and global behaviour of solutions of thebi-Laplace equation via pencil operators. Communications on Pure & Applied Analysis, 2016, 15(1): 261-286. doi: 10.3934/cpaa.2016.15.261.
The "hot spots" conjecture on higher dimensional Sierpinski gaskets
Xiao-Hui Li and Huo-Jun Ruan
In this paper, using spectral decimation, we prove that the ``hot spots" conjecture holds on higher dimensional Sierpinski gaskets.
Xiao-Hui Li, Huo-Jun Ruan. The \"hot spots\" conjecture on higher dimensional Sierpinski gaskets. Communications on Pure & Applied Analysis, 2016, 15(1): 287-297. doi: 10.3934/cpaa.2016.15.287. | CommonCrawl |
horticulture research
Tomato lncRNA23468 functions as a competing endogenous RNA to modulate NBS-LRR genes by decoying miR482b in the tomato-Phytophthora infestans interaction
Ning Jiang1,
Jun Cui1,
Yunsheng Shi2,
Guanglei Yang1,
Xiaoxu Zhou1,
Xinxin Hou1,
Jun Meng2 &
Yushi Luan1
Horticulture Research volume 6, Article number: 28 (2019) Cite this article
Biotic
Non-coding RNAs
Our previous studies indicated that tomato miR482b could negatively regulate the resistance of tomato to Phytophthora infestans and the expression of miR482b was decreased after inoculation with P. infestans. However, the mechanism by which the accumulation of miR482b is suppressed remains unclear. In this study, we wrote a program to identify 89 long noncoding RNA (lncRNA)-originated endogenous target mimics (eTMs) for 46 miRNAs from our RNA-Seq data. Three tomato lncRNAs, lncRNA23468, lncRNA01308 and lncRNA13262, contained conserved eTM sites for miR482b. When lncRNA23468 was overexpressed in tomato, miR482b expression was significantly decreased, and the expression of the target genes, NBS-LRRs, was significantly increased, resulting in enhanced resistance to P. infestans. Silencing lncRNA23468 in tomato led to the increased accumulation of miR482b and decreased accumulation of NBS-LRRs, as well as reduced resistance to P. infestans. In addition, the accumulation of both miR482b and NBS-LRRs was not significantly changed in tomato plants that overexpressed lncRNA23468 with a mutated eTM site. Based on the VIGS system, a target gene of miR482b, Solyc02g036270.2, was silenced. The disease symptoms of the VIGS-Solyc02g036270.2 tomato plants were in accordance with those of tomato plants in which lncRNA23468 was silenced after inoculation with P. infestans. More severe disease symptoms were found in the modified plants than in the control plants. Our results demonstrate that lncRNAs functioning as eTMs may modulate the effects of miRNAs in tomato and provide insight into how the lncRNA23468-miR482b-NBS-LRR module regulates tomato resistance to P. infestans.
MicroRNAs (miRNAs) are noncoding RNAs of 20–24 nucleotides1 that transcriptionally and posttranscriptionally regulate gene expression in various biological processes of plants2. Many miRNAs act in plant responses to various biotic challenges. For example, miR482, belonging to the miR482/2118 family, functions in various plants3,4. After Verticillium dahliae infection, the accumulation of miR482e in potato significantly decreases5. Similarly, after infection with cucumber mosaic virus and V. dahliae, the expression of miR482 is suppressed in tomato and cotton, respectively6,7.
To show the biological function of miRNAs, identification of their target genes is necessary and important8. MiR482 can silence members of the NBS-LRR gene family4. An NBS-LRR protein is a disease resistance (R) protein that is involved in the effector-triggered immunity (ETI) of the plant innate immune system9. An NBS-LRR protein contains a nucleotide-binding site (NBS), a leucine-rich repeat (LRR) and a toll-interleukin receptor-like (TIR) domain or a coiled coil (CC) domain10. An NBS-LRR gene, GbRVd, was cloned and characterized in Gossypium barbadense, and its silencing enhances the susceptibility of G. barbadense to Verticillium wilt11. The resistance of Nicotiana benthamiana to Phytophthora parasitica is effectively enhanced by the overexpression of the grapevine TIR-NB-LRR gene VaRGA112. Plants infected with pathogens show an increased level of NBS-LRR transcripts and reduced level of miR4827,13. Potato miR482 acts by suppressing NBS-LRR genes to regulate the potato resistance against V. dahliae infection5.
Recently, long noncoding RNA (lncRNA), another type of ncRNA, has been identified and analyzed in various biological processes14. LncRNAs are a set of RNA transcripts (>200 nt length) that have no protein-coding ability15. LncRNAs play important roles in flowering time regulation16, fruit development17,18, photomorphogenesis19, gene silencing20,21 and biotic and abiotic stress responses22,23. In plant-pathogen interactions, using an RNA sequencing approach, 125 putative stress-responsive lncRNAs that are induced by powdery mildew infection have been identified in wheat 24. A number of lncRNAs have been identified in Paulownia witches' broom-infected Paulownia tomentosa by high-throughput sequencing25. In addition, Arabidopsis thaliana lncRNAs have been found to act in Arabidopsis resistance to Fusarium oxysporum22. In tomato, a comprehensive set of lncRNAs was identified26,27,28. These lncRNAs were primarily involved in the regulation of tomato fruit ripening18,28,29,30, the ethylene signal transduction pathway31, chilling injury32, and the tomato-potato spindle tuber viroid (PSTVd)/tomato yellow leaf curl virus (TYLCV) interaction33,34,35.
Tomato is the second most important vegetable crop in the world, constituting a major agricultural industry36. Late blight (LB), caused by Phytophthora infestans, is one of the most serious diseases of tomato. In the early 2000s, LB occurred and hindered tomato production, causing serious economic losses in the USA and China37. In our previous work, we focused on the effects of lncRNAs and miRNAs during tomato resistance to P. infestans. A number of miRNAs were identified by next-generation sequencing38, including miR482b. MiR482b was downregulated after infection with P. infestans and negatively regulated tomato resistance8. Tomato lncRNA16397 was found to induce glutaredoxin expression to enhance resistance to P. infestans39. However, the mechanisms by which lncRNAs, as competing endogenous RNAs (ceRNAs), suppress the accumulation of miRNAs in the tomato-P. infestans interaction are unknown. To determine whether lncRNAs can suppress miR482b accumulation in tomato resistance to pathogen infection, RNA-Seq data were used to identify and characterize a number of lncRNAs, and bioinformatics analysis was used to predict the endogenous target mimics (eTMs) of these lncRNAs. We found that lncRNA23468 modulated the accumulation of NBS-LRRs by suppressing miR482b expression in tomato plants infected with P. infestans. These results will improve our understanding of the regulatory mechanism of miR482b in the response of tomato to P. infestans infection and help future molecular-based breeding approaches of pathogen resistance.
Bioinformatics pipeline for identifying lncRNAs
To identify the lncRNAs, we used two RNA-Seq datasets obtained from our previous studies. These two datasets were constructed by LC Biotech, Hangzhou, China, using the leaves of miR482b-overexpressing and Zaofen No. 2 tomatoes. The clean reads were de novo assembled using Cufflinks. TopHat was used to align assembled transcripts to the tomato genome iTAGv2.3 (http://phytozome.jgi.doe.gov/pz/portal.html#!info?alias = Org_Slycopersicum). All transcripts were required to be more than 200 bp in length. The lncRNAs were identified according to the method of Cui et al.39. According to their genomic locations, the lncRNAs were classified into four categories. The fragments per kilobaseof exon per million fragments mapped (FPKM) value was applied to represent the normalized expression value of the lncRNAs.
Prediction of ceRNAs
All tomato miRNAs were collected from miRBase (http://www.mirbase.org/), and the lncRNAs identified above were used as the ceRNA prediction libraries. CeRNAs for the selected tomato miRNAs were predicted using RNAhybrid software with the following rules: (i) P-value < 0.05 and minimum free energy (mfe) <−25 kcal/mol; (ii) bulges were only permitted at the ninth to 12th positions of the 5' end of a miRNA sequence; (iii) the bulge in ceRNAs should be composed of 2–4 nucleotides; (iv) G/U pairs were allowed with the ceRNA and miRNA pairing region, and perfect nucleotide pairing was required at the second to eighth positions of the 5' end of the miRNA sequence; and (v) except for the central bulge, the total mismatches within the ceRNA and miRNA pairing regions should be no more than four, with no more than two consecutive mismatches. Interaction networks among the ceRNAs and miRNAs were constructed using the software Cytoscape. WebLogo software was used to analyze the conserved residues (http://weblogo.berkeley.edu/logo.cgi).
Tomato material and P. infestans inoculation
Tomato Zaofen No. 2, bred by the Institute of Vegetables and Flowers, Chinese Academy of Agricultural Sciences, Beijing, China, is an accession susceptible to P. infestans. The tomato was grown in a greenhouse under 16 h light within a temperature range of 22–28 °C. P. infestans strain P12103 was cultured in oat medium in the dark at 20 °C. The tomato plants (4–5-leaf stage) were inoculated with P. infestans spores according to the method of Jiang et al.8. The whole fifth leaves of each sample were collected at the indicated times (0, 1, 2, 3 and 4 dpi). All samples were quickly frozen in liquid nitrogen and stored at −80 °C until RNA isolation.
Cloning of lncRNA23468, mutation of lncRNA23468 and construction of the overexpression plasmid
According to the tomato genome and lncRNA prediction results, a pair of primers (l23468F and l23468R) were designed and used to clone lncRNA23468 from tomato plant (Table S1).
We introduced six point mutations to lncRNA23468 within sequences pairing with miR482b. LncRNA23468 mutation was generated by PCR, which involved amplification and mutagenesis using lncRNA23468 as the backbone. Two more primers were used for this: ml23468-1R and ml23468-2F (Table S1). Three rounds of PCR were performed to amplify the mutated lncRNA23468 (mlncRNA23468). First, the primers l23468F and ml23468-1R were used to amplify a fragment containing mutation points, and then ml23468-2F and l23468R were used to amplify another fragment. Finally, the PCR products of the first and second rounds were used as the template along with l23468F and l23468R.
The PCR fragments of lncRNA23468 or mlncRNA23468 were subcloned into binary vector pBI121, replacing the GUS gene. In plasmids, lncRNA23468 and mlncRNA23468 were controlled by the Cauliflower mosaic virus (CaMV) 35S promoter.
Virus-induced gene silencing (VIGS) constructs
TVR-based vectors (pTRV1 and pTRV2), which were provided by Prof. Liu from Tsinghua University of China, were used for VIGS. The VIGS sequence was designed according to the SGN VIGS Tool (http://vigs.solgenomics.net/) and cloned into the pTRV2 vector by the ligation-independent cloning method40,41.
Agrobacteria infiltration
All the plasmids were transformed into A. tumefaciens strain GV3101 by the freeze-thaw method37. Agrobacteria infiltration was performed according to Jiang's method8.
A. tumefaciens containing pBI-121-lncRNA23468 or pBI-121-mlncRNA23468 plasmids was introduced into the leaves of Zaofen No. 2 tomato by infiltration. A. tumefaciens with an empty vector was used as a control. The leaves were harvested for the next experiments at 3 dpi.
In the tobacco (N. tabacum) system, we introduced Agrobacterium harboring pBI121-miR482b into tobacco leaf cells. After 3 days, the Agrobacterium harboring pBI121-lncRNA23468 and mlncRNA23468 were introduced into the tobacco leaves that expressed miR482b. The accumulation of miR482b was examined.
Agrobacterium cultures containing pTRV2 derivatives and pTRV1 were mixed at a 1:1 ratio and then infiltrated into a 2-3-leaf-stage tomato. The pTRV2 empty vector was used as the negative control. The plants were maintained for 3 weeks in a greenhouse under 16 h light within 20 °C, and leaflets were harvested from several plants for the isolation of RNA and qRT-PCR analysis to assess the degree of silencing.
P. infestans resistance analysis
For the tomato plants that overexpressed lncRNA23468 or mlncRNA23468, the infiltrated leaf regions were inoculated with 20 μl of P. infestans (106 zoospores/ml) and then were placed at 20 ± 1 °C in a 100%-relative-humidity environment without light. The lesions were observed at the fifth day, and the sizes of the lesions were also calculated.
3 weeks after VIGS, detached leaves from the VIGS tomato plants were inoculated with 20 μl of the P. infestans zoospore suspension (1 × 106 zoospores/ml) according to Jiang's method8. The whole plants were sprayed to run-off with the same zoospore suspension by the method described above. At 5 dpi, the diameters of the lesions and the abundance of P. infestans were calculated according to Jiang's method8.
The disease index (DI) was calculated according to disease grade (DG). The DGs were sorted from 0 to 6 on the basis of the lesion area (Table S2). The DI was calculated according to the following formula:
$${\mathrm{DI}}({\mathrm{\% }}) = \frac{{{\sum} {\left( {DG_i \times n_i} \right) \times 100} }}{{n \times DG_{imax}}}$$
Where DGi is the value of DG, ni is the number of plants in each DG, and n is the total number of plants. Each experiment was carried out at least three times.
RNA isolation, reverse transcription, and qRT-PCR analysis
Total RNA was extracted using RNAiso Plus (TaKaRa, Dalian, China). Reverse transcription and the qRT-PCR reactions of miRNAs were performed with TransScript Green miRNA Two-Step qRT-PCR SuperMix (Transgen Biotech, Beijing, China) according to the manufacturer's instructions. We used PrimeScriptTM RT Master Mix (TaKaRa, Dalian, China) to synthesize the cDNAs of the mRNAs and lncRNAs. The qRT-PCR reactions were performed by using a SYBR Premix Ex TaqTM II kit (TaKaRa, Dalian, China). The qRT-PCR reactions of the selected genes, miRNAs and lncRNAs were performed on an ABI7500. The tomato actin gene was used as an internal reference gene. All primer sequences are shown in Table S1. The gene sample Ct values were standardized, and the 2–ΔΔCt method was used to analyze the relative changes in expression. Of the nine leaves sampled in each experiment, three leaves were pooled into one biological replicate, resulting in three biological replicates.
All statistical analyses of the data were performed with SPSS19.0, and all data were expressed as the means ± SEs from three independent experiments. We used the Tukey method to estimate significance.
Identification of lncRNAs
Two of our group's RNA-Seq datasets, miR482b-overexpressing (OE482) and Zaofen No. 2 tomatoes (Slz), were used to identify the expressed lncRNAs. After assembly and mapping to the tomato genome, the noncoding transcripts were considered "putative" lncRNAs and filtered according to length, coding potentials and coverage of reads. From these analyses, approximately 9742 unique lncRNAs were obtained in two samples, and 9562 of these lncRNAs were expressed in the OE482 sample (Table S3). Among these lncRNAs, 6,785 were natural antisense transcripts (x); 2,669 were long intergenic noncoding RNAs (u); 72 were generic exonic overlaps with a reference transcript (o); and 36 were potentially novel isoforms (j) in the OE482 sample (Fig. 1a). These lncRNAs were evenly distributed across the 12 chromosomes in tomato (Fig. 1b).
Fig. 1: Identification and characterization of lncRNAs.
a Composition of different types of lncRNAs. b The expression levels of the lncRNAs (log10 FPKM) along 12 tomato chromosomes. c Cytoscape results of lncRNAs and miRNAs. The 89 lncRNAs that may act as ceRNAs could be bound by 46 miRNAs. The red round and green square nodes represent lncRNAs and miRNAs, respectively
Identification of ceRNAs
LncRNAs may act as ceRNAs via the eTMs of miRNAs. Therefore, a program was written to identify the lncRNAs acting as eTMs of miRNAs in tomato. In total, 89 lncRNAs were found to be bound by 46 miRNAs (Table S4). Interaction networks showed that 38 miRNA-lncRNA duplexes were formed (Fig. 1c). In these duplexes, miR482d-5p was putatively sequestered by 16 lncRNAs, and lncRNA34658 decoyed three miRNAs, including miR168a-5p, miR168b-5p, and miR477-3p. In addition, more than one-third of the regulated relationships had only two nodes, as shown in the networks in Fig. 1c.
The eTMs of miR482b and expression analysis of miR482b ceRNAs after infection with P. infestans
The predicted binding sites of miR482b among these lncRNAs (lncRNA23468, lncRNA13262 and lncRNA01308) were well conserved (Fig. 2a). In our prediction, a two or three-nucleotide bulge on the eTMs located between the ninth and 12th positions at the 5' end of miR482b was required for the eTMs to decoy miR482b (Fig. 2b). The minimum free energies (mfes) were −33.8 kcal/mol, −35.8 kcal/mol and −33.2 kcal/mol in lncRNA13262-miR482b, lncRNA01308-miR482b and lncRNA23468-miR482b, respectively. The values of the mfes in lncRNA13262-miR482b and lncRNA01308-miR482b were close to that in lncRNA23468-miR482b. This suggests that the abilities of the three lncRNAs to suppress miR482b were not significantly different.
Fig. 2: Three lncRNAs decoy miR482b in tomato.
a Conservation analysis of the eTM sites of three lncRNAs that decoy miR482b. The conservation status of the sequence was analyzed by WebLogo. The red letters represent the conserved sequence at the second to eighth positions of the 5' end of the miRNA sequence. The purple letters represent bulges permitted at the ninth to 12th positions of the 5' end of a miRNA sequence. b Predicted base-pairing interactions between miR482b and the eTM sites of three lncRNAs. c Expression of the three lncRNAs in tomato inoculated with P. infestans. All data are the means ± SE of three independent experiments. Different letters among the groups indicate a significant difference at the P = 0.05 level
To explore whether the ceRNAs of the miR482b, lncRNA23468, lncRNA13262 and lncRNA01308 had an effect on tomato response to P. infestans, the relative levels of accumulation were examined in tomato leaves infected by P. infestans. These three lncRNAs responded to P. infestans infection. The changes in the expression of both lncRNA23468 and lncRNA01308 followed the same pattern. Both first increased and then decreased, reaching a peak at 2 days after P. infestans infection. LncRNA13262 was downregulated from 0 dpi to 1 dpi and then reached a peak at 2 dpi (Fig. 2c).
Construction of the lncRNA23468 mutant
To examine whether lncRNA23468 indeed functioned via pairing with miR482b, a six-point mutation was introduced into lncRNA23468 by using sequences that paired with miR482b (mlncRNA23468; Fig. 3a). The design of the mlncRNA23468 is shown in Fig. 3b. The eTM of miR482b (5′-AGAUUGGGCGGAAUAGGUAAGA-3′) in lncRNA23468 was replaced with the mutation sequence (5′-AGAUUCGCCCGAAUAGCUAUCA-3′) by oligonucleotide-directed mutagenesis. After three rounds of PCR, the sequence of mlncRNA23468 was amplified (Fig. 3c). As shown in Fig. 3d, the sequencing results showed that six point mutations were introduced into the eTM from lncRNA23468. In addition, except for the eTM region of miR482b, the lncRNA23468 and mlncRNA23468 sequences were conserved after sequence alignment analysis (Fig. 3e). Overall, mlncRNA23468 was successfully constructed by using lncRNA23468 as the backbone.
Fig. 3: Construction of the mutant eTM of lncRNA23468.
a Predicted base-pairing interactions between miR482b and the eTM of lncRNA23468 with the designed mutations. b Strategy for introducing mutations by PCR. c PCR products from the different stages of the mutant eTM of lncRNA23468. M, DL2000 DNA Marker; 1, 2, and 3, products from the first, second and third rounds of PCR, respectively. d The sequencing results of the eTM regions of lncRNA23468 (top) and its mutated sequence (bottom). The red box represents the eTM region. Blue triangles represent the six-point mutation. e Sequence alignment of lncRNA23468 and mlncRNA23468 performed with ClustalX. The red box represents the eTM region
LncRNA23468 as a miRNA decoy suppresses the expression of miR482b
We investigated the possible function of lncRNA23468 using infiltration to upregulate lncRNA23468 expression in Zaofen No. 2 tomato. The plasmids for the overexpression of lncRNA23468 and mlncRNA23468 were carried out on the basis of the pBI121 vector (Fig. 4a). The introduction of A. tumefaciens harboring pBI121-lncRNA23468 and pBI121-mlncRNA23468 into tomato leaf cells resulted in significant upregulation of lncRNA23468 and mlncRNA23468 expression at 3 days: the expression levels of lncRNA23468 and mlncRNA23468 were approximately 6.1-fold and 6.0-fold greater than the tomato leaves that overexpressed empty vector (EV), respectively (Fig. 4b). The expression of miR482b was dramatically suppressed to approximately 40% in the tomato plants that overexpressed lncRNA23468 (OE23468), and the target genes of miR482b, NBS-LRRs, were significantly increased (Fig. 4b). The levels of the transcripts of Solyc02g036270.2, Solyc04g009070.1, Solyc05g008070.2 and Solyc12g016220.2 in the leaves of the OE23468 tomato plants were approximately 2.9-fold, 2.1-fold, 2.4-fold, and 2.1-fold, respectively, to the values in the leaves of the EV tomato plants. The overexpression of mlncRNA23468 in tomato (mOE23468) did not cause expression changes in either miR482b or its target genes (Fig. 4b).
Fig. 4: lncRNA23468 functions to modulate NBS-LRR genes by decoying miR482b.
a Schematic diagram of the gene cassette containing lncRNA23468 and mlncRNA23468. b qRT-PCR analysis of lncRNA23468, miR482b and the target genes in the EV, mOE23468 and OE23468 tomato plants. c Disease signs on the detached leaves from the EV, mOE23468 and OE23468 tomato plants at 5 dpi. Scale bars = 0.5 cm. d The diameter of the lesion of the detached leaves from EV, mOE23468 and OE23468 tomato plants at 5 dpi. All data are the means ± SE of three independent experiments. Different letters among the groups indicate a significant difference at the P = 0.05 level
In our previous work, miR172 and miR396 were identified to play important roles in tomato resistance to P. infestans42,43. We examined the accumulation of miR172, miR396 and their target genes in the tomato leaves that overexpressed lncRNA23468 by using qRT-PCR. The overexpression of lncRNA23468 did not cause expression changes in miR172, miR396 and their target genes compared to the control samples (Fig. S1).
Disease resistance tests were performed on the OE23468, mOE23468 and EV plants using the oomycete P. infestans. Five days after infection, the physical appearance of these tomato plants was assessed. Compared with the severe disease symptoms that appeared on the leaves of EV plants, the disease symptoms on the leaves of mOE23468 plants were similar, while fewer disease symptoms were exhibited on the leaves of OE23468 plants (Fig. 4c). The diameter of the lesion was significantly smaller in the leaves of P. infestans-infected OE23468 plants than in those of EV or mOE23468 plants (Fig. 4d).
As overexpression is not enough to fully understand lncRNA functions, lncRNA silencing should be performed to explore these biological functions. LncRNA23468 was silenced in Zaofen No. 2 tomato by VIGS. The empty TRV2 vector and PHYTOENE DESATURASE (PDS)-VIGS were used as the negative control and the positive control, respectively. At 21 days of VIGS treatment, the PDS-VIGS tomato showed visible bleaching, indicating that the lncRNA23468 gene was also silenced (Fig. S2). The expression of lncRNA23468 was suppressed to approximately 40%, and the accumulation of miR482b was upregulated approximately 2-fold, while the four target genes showed decreased abundance (Fig. 5a).
Fig. 5: Silencing of lncRNA23468 and NBS-LRR gene decreases tomato resistance to P. infestans.
a Relative expression levels of lncRNA23468, miR482b and the target genes of miR482b in the TRV2 and lncRNA23468-VIGS tomato plants. b The silencing efficiency of NBS-LRR in NBS-LRR-VIGS tomato plants. c Phenotypes of the detached leaves from TRV2, lncRNA23468-VIGS and NBS-LRR-VIGS tomato plants at 5 dpi. Scale bars = 0.5 cm. Top: disease symptoms; Bottom: trypan blue staining for detection of dead cells. d The diameter of the lesion of the detached leaves. e Leaf phenotypes at 5 dpi after whole-plant inoculation with P. infestans. Scale bars = 0.5 cm. f Disease index of VIGS-tomato plants at 5 dpi. g Transcript accumulation of the P. infestans actin gene in these inoculated plants at 5 dpi. All data are the means ± SE of three independent experiments. Different letters among the groups indicate a significant difference at the P = 0.05 level
The disease phenotypes for the control (TRV2) and lncRNA23468-VIGS tomato plants were assessed after infection with P. infestans. Compared to the leaves of TRV2 tomato plants, the detached leaves of the lncRNA23468-VIGS tomato plants showed more serious symptoms of the disease. The decreased resistance to P. infestans in the leaves of lncRNA23468-VIGS tomato plants was also revealed by trypan blue staining, as shown by the increased number of necrotic cells relative to those in the leaves of TRV2 tomato plants (Fig. 5c). In addition, Fig. 5d shows that the leaves of the lncRNA23468-VIGS tomato plants had larger lesions than did the leaves of the TRV2 tomato plants.
The leaves of the lncRNA23468-VIGS tomato plants also exhibited more serious symptoms of the disease after whole-plant inoculation with P. infestans (Fig. 5e), with a higher disease index than that observed for the TRV2 tomato plants (Fig. 5f). In addition, there were significant increases in the abundance of P. infestans in the lncRNA23468-VIGS tomato plants compared to the TRV2 tomato plants (Fig. 5g). These results suggest that lncRNA23468 suppressed the accumulation of miR482b, resulting in an increased expression of NBS-LRRs and enhanced tomato resistance to P. infestans.
The regulation between lncRNA23468 and miR482b was also validated using the Nicotiana system. We introduced Agrobacterium harboring pBI121-miR482b into tobacco leaf cells, which led to a significant increase in miR482b expression at 3 days compared to tobacco leaves overexpressing the empty vector (Fig. S3a). Then, the Agrobacterium harboring pBI121-lncRNA23468 and mlncRNA23468 were introduced into the tobacco leaves that overexpressed miR482b. The accumulation of miR482b was significantly decreased after the overexpression of lncRNA23468. The overexpression of mlncRNA23468 did not cause expression changes in miR482b (Fig. S3b). These results suggested that lncRNA23468 as a ceRNA may decoy miR482b.
Silencing of NBS-LRR enhances tomato susceptibility to P. infestans
We also employed the VIGS system to test the role of the target gene of miR482b (Solyc02g036270.2) in tomato resistance to P. infestans. At 21 days after hand-infiltration, a further examination of Solyc02g036270.2 indicated that the abundance of transcripts was significantly suppressed in the tomato plants that silenced Solyc02g036270.2 (NBS-LRR-VIGS) (Fig. 5b).
More serious disease symptoms were shown in the detached leaves of the NBS-LRR-VIGS plants than in the TRV2 plants after infection with P. infestans (Fig. 5c). The darker blue marks from trypan blue staining in the detached leaves indicated that a greater number of dead cells occurred in the NBS-LRR-VIGS tomato plants than in the TRV2 tomato plants (Fig. 5c). Furthermore, these NBS-LRR-VIGS tomato leaves also had larger diameter lesions (approximately 1.91-fold increase compared to TRV2 tomato plants infected with P. infestans) (Fig. 5d). In the whole-plant inoculation assay, the susceptibility of the silenced plants manifested as more serious disease symptoms (Fig. 5e), higher DI (Fig. 5f) and an increased abundance of P. infestans (Fig. 5g).
LncRNA as ceRNA regulates the accumulation of miRNA
MicroRNAs (miRNAs), a class of approximately 22 nt endogenous small noncoding RNAs, can direct the posttranscriptional repression of target genes by base-pairing to mRNAs. The mature miRNA is separated from the miRNA/miRNA* duplex, which is processed from a pre-miRNA1.
One focus of miRNA research is the regulation of miRNA accumulation. Various transcription factors can promote the expression of pre-miRNAs at the transcriptional level. For example, in A. thaliana, transcription factor APETALA2 (AP2) positively or negatively regulates the expression of miR156 or miR172 by binding to these miRNA genes44. Another Arabidopsis transcription factor, AtMYB2, has been identified for its roles in the activation of miR399f expression in the context of phosphate homeostasis through sequence-specific interactions with a MYB-binding site in the promoter of the miR399f precursor45. In addition, tomato ripening inhibitor (RIN) binds to a RIN-binding site in the promoter of miR172a to regulate its accumulation46.
Another regulatory mechanism has been proposed to underlie the accumulation of miRNA. During plant-pathogen interactions, pathogen effectors can suppress host defense47. The effectors, RNA-silencing suppressors from the pathogens, affect the resistance of plants by suppressing the accumulation of host miRNAs. P. sojae encoded two RNA-silencing suppressors (PsPSR1 and PsPSR2) that enhance plant susceptibility by impairing the small RNA-mediated defense of the host48. The PsPSR1 virulence target acts in the assembly of sRNA-processing complexes in Arabidopsis and soybean49. PSR2 is involved in modifying plant gene regulation early during Phytophthora infection, and the overexpression of PsPSR2 in Arabidopsis enhances hypersusceptibility to P. capsici50.
Some specific endogenous lncRNAs can act as ceRNAs to interfere with miRNA pathways51. Acting as a ceRNA is an effective posttranscriptional regulatory mechanism by which lncRNAs interfere with target transcripts. The disruption of the equilibrium between ceRNAs and miRNAs could be important to ceRNA activity in diseases such as cancer52. For example, the lncRNA HOTAIR decoys miR-152-3p to promote malignant melanoma progression53. In addition, the lncRNA CRNDE prevents miR-136-5p-mediated downregulation of Bcl-2 and Wnt2 to promote glioma malignancy54. However, studies on ceRNA in plants are limited. In maize, a number of lncRNAs have been identified as miRNA decoys by using computational methods55. Tobacco eTMX27 inhibits the expression of miRX27, resulting in enhanced nicotine biosynthesis56. Arabidopsis IPS1 contains the eTM of ath-miR399 and inhibits the expression of ath-miR39920, and PDIL1 reduces the accumulation of miR399, thus regulating Pi-related pathways in Medicago truncatula57. Wang et al.21 reported that the eTMs of miR160 from lncRNAs can serve as decoys for miR160 and function as the regulator in rice development.
In this study, we wrote a program that systematically identified 89 lncRNAs as 46 miRNA decoys in tomato (Fig. 1c and Table S4). Three of these lncRNAs, lncRNA23468, lncRNA01308 and lncRNA13262, contained an eTM site of miR482b (Fig. 2b). The expression of these three lncRNAs was initially increased but then decreased with P. infestans infection (Fig. 2c). This result was in contrast to the expression trend of miR482b upon infection by P. infestans, which was detected in our previous study8. The expression of miR482b was decreased in the tomato plants that overexpressed lncRNA23468 (Fig. 4a), while silencing lncRNA23468 led to increased miR482b accumulation (Fig. 5a). In addition, the accumulation of miR482b was not significantly changed in the tomato plants that overexpressed mlncRNA23468 with a mutated eTM site (Fig. 4a). These results suggest that the lncRNAs functioned as an eTM that may modulate the effects of miRNAs in tomato.
The lncRNA23468-miR482b-NBS-LRR network in tomato resistance to P. infestans
The miR482b family is involved in various plant interactions with pathogens6,58. The overexpression of miR482e in potato decreased plant resistance to V. dahliae infection5. Similar results were reported in our previous study, showing that miR482b negatively regulates tomato resistance to P. infestans8. To understand the biological functions of miRNAs, the identification of their target genes is an important step. In our previous study, four members of the NBS-LRR family were identified as the target genes of miR482b using the degradome data of tomato infected with P. infestans8. NBS-LRRs are active in the resistance of other plants to pathogens, such as tobacco resistance to P. parasitica12, wheat resistance to powdery mildew59, and rice resistance to blast60. In this study, NBS-LRR accumulation was suppressed by silencing lncRNA23468 and VIGS technology in tomato, resulting in the enhancement of tomato susceptibility to P. infestans (Figs. 4 and 5). Similarly, cotton susceptibility is also increased by silencing a CC-NBS-LRR gene11.
LncRNAs are involved in many biological processes, including biotic stresses. For example, Joshi et al.61 found that 13 lncRNAs were predicted to be the precursors of 96 miRNAs that affect the Brassica napus and Sclerotinia sclerotiorum interaction. The silencing of GhlncNAT-ANX2 and GhlncNAT-RLP7 in cotton seedlings can induce LOX1 and LOX2 expression to increase the resistance to V. dahliae and Botrytis cinerea infection62. In addition, many lncRNAs act in the response of Arabidopsis to F. oxysporum and Pseudomonas syringe pv. tomato DC3000, wheat response to powdery mildew pathogen, tomato response to TYLCV and P. tomentosa response to Paulownia witches' broom22,24,25,33,35. The overexpression of lncRNA16397 in tomato induced glutaredoxin expression to increase the resistance to P. infestans39. Another tomato lncRNA, lncRNA23468, was also responsive to P. infestans infection (Fig. 2c). In addition, the overexpression of lncRNA23468 in tomato positively modulated the P. infestans defense response (Fig. 4), while the resistance was impaired after lncRNA23468 silencing (Fig. 5).
The overexpression of lncRNA23468 in tomato suppressed the accumulation of miR482b, resulting in the increased abundance of NBS-LRRs, and after lncRNA23468 silencing, the opposite was observed (Figs. 4a and 5a). In other words, lncRNA23468 functions as a ceRNA to modulate NBS-LRR genes by decoying miR482b, which enhances tomato resistance to P. infestans. Similarly, the lncRNA Slylnc0195 might function as a ceRNA to protect miR166 targets, class III HD-Zip transcription factor genes, by binding to miR166 via target mimicry in the tomato response to TYLCV33.
In summary, lncRNA23468 increases resistance to P. infestans infection in tomato. From our results in the lncRNA23468-silencing, lncRNA23468-overexpressing and mlncRNA23468-overexpressing tomato, we propose that lncRNA23468 increases the expression levels of NBS-LRRs by the repression of miR482b. This mechanism would allow for a response to P. infestans stress through lncRNA23468-induced downregulation of miR482b, which increases NBS-LRRs, thus promoting tomato resistance to P. infestans. Our results provide insight into an effective posttranscriptional regulation mechanism of lncRNA and demonstrate that the lncRNA23468-miR482b-NBS-LRR network is an important component of the P. infestans network in tomato.
Bartel, D. P. MicroRNAs: target recognition and regulatory functions. Cell 136, 215–233 (2009).
Janga, S. C. & Vallabhaneni, S. MicroRNAs as post-transcriptional machines and their interplay with cellular networks. Adv. Exp. Med. Biol. 722, 59–74 (2011).
Yin, X., Wang, J., Cheng, H., Wang, X. & Yu, D. Detection and evolutionary analysis of soybean miRNAs responsive to soybean mosaic virus. Planta 237, 1213–1225 (2013).
de Vries, S., Kloesges, T. & Rose, L. E. Evolutionarily dynamic, but robust, targeting of resistance genes by the miR482/2118 gene family in the Solanaceae. Genome Biol. Evol. 7, 3307–3321 (2015).
Yang, L. et al. Overexpression of potato miR482e enhanced plant sensitivity to Verticillium dahliae infection. J. Integr. Plant Biol. 57, 1078–1088 (2015).
Feng, J., Liu, S., Wang, M., Lang, Q. & Jin, C. Identification of microRNAs and their targets in tomato infected with Cucumber mosaic virus based on deep sequencing. Planta 240, 1335–1352 (2014).
Zhu, Q. H. et al. MiR482 regulation of NBS-LRR defense genes during fungal pathogen infection in cotton. PLoS ONE 8, e84390 (2013).
Jiang, N., Meng, J., Cui, J., Sun, G. & Luan, Y. Function identification of miR482b, a negative regulator during tomato resistance to Phytophthora infestans. Hortic. Res. 5, 9 (2018).
Yang, X. & Wang, J. Genome-wide analysis of NBS-LRR genes in Sorghum genome revealed several events contributing to NBS-LRR gene evolution in grass species. Evol. Bioinform. 12, 9–21 (2016).
Zhao, Y. et al. Bioinformatics analysis of NBS-LRR encoding resistance genes in Setaria italica. Biochem. Genet. 54, 232–248 (2016).
Yang, J. et al. Molecular cloning and functional analysis of GbRVd, a gene in Gossypium barbadense that plays an important role in conferring resistance to Verticillium wilt. Gene 575, 687–694 (2016).
Li, X., Zhang, Y., Yin, L. & Lu, J. Overexpression of pathogen-induced grapevine TIR-NB-LRR gene VaRGA1 enhances disease resistance and drought and salt tolerance in Nicotiana benthamiana. Protoplasma 254, 957–969 (2017).
Ouyang, S. et al. MicroRNAs suppress NB domain genes in tomato that confer resistance to Fusarium oxysporum. PLoS Pathog. 10, e1004464 (2014).
Heo, J. B., Lee, Y. S. & Sung, S. Epigenetic regulation by long noncoding RNAs in plants. Chromosome Res. 21, 685–693 (2013).
Mercer, T. R., Dinger, M. E. & Mattick, J. S. Long non-coding RNAs: insights into functions. Nat. Rev. Genet. 10, 155–159 (2009).
Liu, F., Marquardt, S., Lister, C., Swiezewski, S. & Dean, C. Targeted 3' processing of antisense transcripts triggers Arabidopsis FLC chromatin silencing. Science 327, 94–97 (2010).
Wang, X. et al. Expression and diversification analysis reveals transposable elements play important roles in the origin of Lycopersicon-specific lncRNAs in tomato. New Phytol. 209, 1442–1455 (2016).
Zhu, B. et al. RNA sequencing and functional analysis implicate the regulatory role of long non-coding RNAs in tomato fruit ripening. J. Exp. Bot. 66, 4483–4495 (2015).
Wang, H. et al. Genome-wide identification of long noncoding natural antisense transcripts and their responses to light in Arabidopsis. Genome Res. 24, 444–453 (2014).
Franco-Zorrilla, J. M. et al. Target mimicry provides a new mechanism for regulation of microRNA activity. Nat. Genet. 39, 1033–1037 (2007).
Wang, M., Wu, H. J., Fang, J., Chu, C. C. & Wang, X. J. A long noncoding RNA involved in rice reproductive development by negatively regulating osa-miR160. Sci. Bull. 62, 470–475 (2017).
Zhu, Q. H., Stephen, S., Taylor, J., Helliwell, C. A. & Wang, M. B. Long noncoding RNAs responsive to Fusarium oxysporum infection in Arabidopsis thaliana. New Phytol. 201, 574–584 (2014).
Qin, T., Zhao, H., Cui, P., Albesher, N. & Xiong, L. A nucleus-localized long non-coding RNA enhances drought and salt stress tolerance. Plant Physiol. 175, 1321–1336 (2017).
Xin, M. et al. Identification and characterization of wheat long non protein coding RNAs responsive to powdery mildew infection and heat stress by using microarray analysis and SBS sequencing. BMC Plant Biol. 11, 61 (2011).
Wang, Z., Zhai, X., Cao, Y., Dong, Y. & Fan, G. Long non-coding RNAs responsive to witches' broom disease in Paulownia tomentosa. Forests 8, 348 (2017).
Scarano, D., Rao, R. & Corrado, G. In Silico identification and annotation of non-coding RNAs by RNA-seq and De Novo assembly of the transcriptome of Tomato Fruits. PLoS ONE 12, e0171504 (2017).
Dai, Q. et al. Comparative transcriptome analysis of the different tissues between the cultivated and wild tomato. PLoS ONE 12, e0172411 (2017).
Wang, M., Zhao, W., Gao, L. & Zhao, L. Genome-wide profiling of long non-coding RNAs from tomato and a comparison with mRNAs associated with the regulation of fruit ripening. BMC Plant Biol. 18, 75 (2018).
Sun, Y. & Xiao, H. Identification of alternative splicing events by RNA sequencing in early growth tomato fruits. BMC Genom. 16, 948 (2015).
Li, R., Fu, D., Zhu, B., Luo, Y. & Zhu, H. CRISPR/Cas9-mediated mutagenesis of lncRNA1459 alters tomato fruit ripening. Plant J. 94, 513–524 (2018).
Wang, Y. et al. Analysis of long-non-coding RNAs associated with ethylene in tomato. Gene 674, 151–160 (2018).
Wang, Y. et al. Integrative analysis of long non-coding RNA acting as ceRNAs involved in chilling injury in tomato fruit. Gene 667, 25–33 (2018).
Wang, J. et al. Genome-wide analysis of tomato long non-coding RNAs and identification as endogenous target mimic for microRNA in response to TYLCV infection. Sci. Rep. 5, 16946 (2015).
Zheng, Y., Wang, Y., Ding, B. & Fei, Z. Comprehensive transcriptome analyses reveal that Potato Spindle Tuber Viroid triggers genome-wide changes in alternative splicing, inducible trans-acting activity of phased secondary small interfering RNAs, and immune responses. J. Virol. 91, e00247–17 (2017).
Wang, J. et al. Re-analysis of long non-coding RNAs and prediction of circRNAs reveal their novel roles in susceptible tomato following TYLCV infection. BMC Plant Biol. 18, 104 (2018).
Bhattarai, K., Louws, F. J., Williamson, J. D. & Panthee, D. R. Differential response of tomato genotypes to Xanthomonas-specific pathogen-associated molecular patterns and correlation with bacterial spot (Xanthomonas perforans) resistance. Hortic. Res. 3, 16035 (2016).
Cui, J. et al. Transcriptome signatures of tomato leaf induced by Phytophthora infestans and functional identification of transcription factor SpWRKY3. Theor. Appl. Genet. 131, 787–800 (2018).
Luan, Y. et al. High-throughput sequencing reveals differential expression of miRNAs in tomato inoculated with Phytophthora infestans. Planta 241, 1405–1416 (2015).
Cui, J., Luan, Y., Jiang, N., Bao, H. & Meng, J. Comparative transcriptome analysis between resistant and susceptible tomato allows the identification of lncRNA16397 conferring resistance to Phytophthora infestans by co-expressing glutaredoxin. Plant J. 89, 577–589 (2017).
Liu, Y., Schiff, M. & Dinesh-Kumar, S. P. Virus-induced gene silencing in tomato. Plant J. 31, 777–786 (2002).
Zhao, J. et al. An efficient Potato virus X-based microRNA silencing in Nicotiana benthamiana. Sci. Rep. 6, 20573 (2016).
Luan, Y. et al. Effective enhancement of resistance to Phytophthora infestans by overexpression of miR172a and b in Solanum lycopersicum. Planta 247, 127–138 (2018).
Chen, L., Luan, Y. & Zhai, J. Sp-miR396a-5p acts as a stress-responsive genes regulator by conferring tolerance to abiotic stresses and susceptibility to Phytophthora nicotianae infection in transgenic tobacco. Plant Cell Rep. 34, 2013–2025 (2015).
Yant, L. et al. Orchestration of the floral transition and floral development in Arabidopsis by the bifunctional transcription factor APETALA2. Plant Cell 22, 2156–2170 (2010).
Baek, D., Park, H. C., Kim, M. C. & Yun, D. J. The role of Arabidopsis MYB2 in miR399f-mediated phosphate-starvation response. Plant Signal. Behav. 8, e23488 (2013).
Gao, C. et al. MicroRNA profiling analysis throughout tomato fruit development and ripening reveals potential regulatory role of RIN on microRNAs accumulation. Plant. Biotechnol. J. 13, 370–382 (2015).
Kong, L. et al. A Phytophthora effector manipulates host histone acetylation and reprograms defense gene expression to promote infection. Curr. Biol. 27, 981–991 (2017).
Qiao, Y. et al. Oomycete pathogens encode RNA silencing suppressors. Nat. Genet. 45, 330–333 (2013).
Qiao, Y., Shi, J., Zhai, Y., Hou, Y. & Ma, W. Phytophthora effector targets a novel component of small RNA pathway in plants to promote infection. Proc. Natl Acad. Sci. USA 112, 5850–5855 (2015).
Xiong, Q. et al. Phytophthora suppressor of RNA silencing 2 is a conserved RxLR effector that promotes infection in soybean and Arabidopsis thaliana. Mol. Plant Microbe Interact. 27, 1379–1389 (2014).
Liu, D., Mewalal, R., Hu, R., Tuskan, G. A. & Yang, X. New technologies accelerate the exploration of non-coding RNAs in horticultural plants. Hortic. Res. 4, 17031 (2017).
Sen, R., Ghosal, S., Das, S., Balti, S. & Chakrabarti, J. Competing endogenousRNA: the key to posttranscriptional regulation. ScientificWorldJ 2014, 896206 (2014).
Luan, W. et al. Long non-coding RNA HOTAIR acts as a competing endogenous RNA to promote malignant melanoma progression by sponging miR-152-3p. Oncotarget 8, 85401–85414 (2017).
Li, D. X. et al. The long non-coding RNA CRNDE acts as a ceRNA and promotes glioma malignancy by preventing miR-136-5p-mediated downregulation of Bcl-2 and Wnt2. Oncotarget 8, 88163–88178 (2017).
Zhu, M. et al. Transcriptomic analysis of long Non-coding RNAs and coding genes uncovers a complex regulatory network that is involved in maize seed development. Genes 8, (274 (2017).
Li, F. et al. Regulation of nicotine biosynthesis by an endogenous target mimicry of microRNA in tobacco. Plant Physiol. 169, 1062–1071 (2015).
Wang, T. et al. Novel phosphate deficiency-responsive long non-coding RNAs in the legume model plant Medicago truncatula. J. Exp. Bot. 68, 5937–5948 (2017).
Bao, D. et al. Down-regulation of genes coding for core RNAi components and disease resistance proteins via corresponding microRNAs might be correlated with successful soybean mosaic virus infection in soybean. Mol. Plant Pathol. 19, 948–960 (2017).
Hurni, S. et al. Rye Pm8 and wheat Pm3 are orthologous genes and show evolutionary conservation of resistance function against powdery mildew. Plant J. 76, 957–969 (2013).
Ma, J. et al. Pi64, encoding a novel CC-NBS-LRR protein, confers resistance to leaf and neck blast in rice. Mol. Plant Microbe Interact. 28, 558–568 (2015).
Joshi, R. K., Megha, S., Basu, U., Rahman, M. H. & Kav, N. N. Genome wide identification and functional prediction of long non-coding RNAs responsive to Sclerotinia sclerotiorum infection in Brassica napus. PLoS ONE 11, e0158784 (2016).
Zhang, L. et al. Long non-coding RNAs involve in resistance to Verticillium dahliae, a fungal disease in cotton. Plant. Biotechnol. J. 16, 1172–1185 (2018).
This work is supported by grants from the National Natural Science Foundation of China (Nos. 31471880 and 61472061). We thank Prof. Weixing Shan from Northwest A & F, University of China, for providing P. infestans strain P12103 and Yule Liu from Tsinghua University for providing the VIGS system.
School of Life Science and Biotechnology, Dalian University of Technology, 116024, Dalian, China
Ning Jiang, Jun Cui, Guanglei Yang, Xiaoxu Zhou, Xinxin Hou & Yushi Luan
School of Computer Science and Technology, Dalian University of Technology, 116024, Dalian, China
Yunsheng Shi & Jun Meng
Ning Jiang
Jun Cui
Yunsheng Shi
Guanglei Yang
Xiaoxu Zhou
Xinxin Hou
Jun Meng
Yushi Luan
Correspondence to Jun Meng or Yushi Luan.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Jiang, N., Cui, J., Shi, Y. et al. Tomato lncRNA23468 functions as a competing endogenous RNA to modulate NBS-LRR genes by decoying miR482b in the tomato-Phytophthora infestans interaction. Hortic Res 6, 28 (2019). https://doi.org/10.1038/s41438-018-0096-0
Revised: 07 September 2018
Dynamic characteristics and functional analysis provide new insights into long non-coding RNA responsive to Verticillium dahliae infection in Gossypium hirsutum
Guoning Wang
Xingfen Wang
Zhiying Ma
BMC Plant Biology (2021)
LncRNA TCONS_00021861 is functionally associated with drought tolerance in rice (Oryza sativa L.) via competing endogenous RNA regulation
Jiajia Chen
Yuqing Zhong
Xin Qi
Genome-wide identification and characterization of long non-coding RNAs conferring resistance to Colletotrichum gloeosporioides in walnut (Juglans regia)
Shan Feng
Hongcheng Fang
Ke Qiang Yang
BMC Genomics (2021)
Identification and characterization of early Fusarium wilt responsive mRNAs and long non-coding RNAs in banana root using high-throughput sequencing
Chunzhen Cheng
Fan Liu
Zhongxiong Lai
Comparative analysis of long noncoding RNAs in angiosperms and characterization of long noncoding RNAs in response to heat stress in Chinese cabbage
Xiaoming Song
Jingjing Hu
Nan Li
Horticulture Research (2021)
Editorial Summary
A barrier against blight
The interplay between various non-protein-coding RNA molecules plays a critical role in managing tomato plants' defenses against infection. Long non-coding RNA (lncRNA) molecules can modulate gene function by binding to other, complementary RNA molecules. Researchers led by Jun Meng and Yushi Luan at the Dalian University of Technology recently set out to identify lncRNAs that contribute to tomato resistance to the pathogen that causes late blight. Their team specifically sought lncRNAs targeting another RNA molecule known as miR482b, which they had previously found to weaken plant resistance to infection. They found that one such molecule, lncRNA23468, can counter the effects of miR482b and protect tomatoes against late blight by boosting the activity of genes involved in the immune response. These results thus offer important insights into the mechanisms underlying disease resistance in this important crop.
Reviews & Analysis
Horticulture Research (Hortic Res) ISSN 2052-7276 (online) ISSN 2662-6810 (print) | CommonCrawl |
How can I calculate the sum of 2 random dice out of a 3d6 pool in AnyDice?
With AnyDice it's pretty easy to calculate probalities for highest and lowest 2 of a 3d6 pool, namely with:
output [highest 2 of 3d6]
output [lowest 2 of 3d6]
However, this has a bias towards the highest and lowest thrown dice. What I want to calculate is the possible results, without bias. Reasoning behind this is that I want my players to control the outcome. It's not necessarily that the highest or lowest outcome are worse or better, it's simply that I want to offer them a decision. They choose two of the dice, add them together and there is a result. I want to give the luck d20 roll with result such as an encounter more meaning and mental impact ("why did you pick those dice!").
I had hoped AnyDice to have a random function, something like [random 2 of 3d6] but that doesn't exist. My hypothesis was that I could simply add the percentages of [highest 2 of 3d6] and [lowest 2 of 3d6] and divide that number by 2 (since I'm adding two probability calculations with a total of 100%).
But somehow this doesn't feel right. It doesn't include the possibility of a player picking the highest and the lowest number instead of the two highest or lowest.
I've been doing some tutorials in AnyDice and I reckon this definitely CAN be done with a function where the following would happen:
Roll 3d6. Then also roll a d3 twice (not 2d3 as it would add up).
If the d3 rolls are equal, reroll one until you get two unique d3 rolls.
Then use the unique d3 rolls and take those dice from the 3d6 pool.
Add those dice together, show results.
An approach of this chance could be that I take simply the average of a single die in the 3d6 pool and then multiply by 2, theoretically approaching all the possible results. This is incorrect as well as it includes all three dice and thus the average can go higher than the max of 2d6.
Perhaps I'm overthinking this calculation by using AnyDice. As the the dice order isn't relevant at all, I simply need to know all possible dice combinations a 3d6 pool can have. Not the sum, but the combinations. This is super simple, because every dice has 6 sides. So 3d6 has 6 * 6 * 6 = 216 total combinations, this includes repetition as I am interested in the probability of each throw. However, I again don't need all three dice. Only 2, which for the sake of calculation can be presumed to be picked randomly.
Another option I can think of in AnyDice is:
Roll 3d6 and 1d3.
Remove from 3d6 sequence the number in position 1d3.
Add the remaining sequence's result and output probabilities.
Okay, long wall of text, but I am just not familiar enough with AnyDice to figure this out. Any help is greatly appreciated.
statistics anydice
V2Blast♦
ToppleTopple
\$\begingroup\$ Welcome! You can take the tour as an introduction to the site and check the help center if you need further guidance. Good luck and happy gaming! \$\endgroup\$ – Sdjz Jul 10 '19 at 10:15
\$\begingroup\$ What do you mean by "I want my players to control the outcome"? What would be the mechanism of control here? \$\endgroup\$ – kviiri Jul 10 '19 at 10:38
\$\begingroup\$ Okay, it is more about perceived control. They choose the dice and the sum of those dice decides what's going to happen. In some small tests this gives players the feeling of more control over the situation, even though it's still largely random. It's more of a gimmick than anything else, but it got me curious about the statistics and that's when I found AnyDice. \$\endgroup\$ – Topple Jul 10 '19 at 10:42
The sum of 2 random dice from 3d6 is the same as just rolling 2d6
If you are rolling 3d6 and then picking 2 dice at random, the results are statistically the same as just adding 2d6. I am having trouble finding a way to explain this at all as it is just intuitive to me. The fact that you add a third die, then just discard a random one still means that you are just taking two results from 2d6 dice and adding them together. The extra step of adding a d6 to the pool then discarding one d6 does nothing (for any number of added and discarded from the pool, actually).
Naturally you can't just trust my intuition so consider this anydice program:
function: X:n choice {
if X = 1 {
result: {2,3}@3d6
output [d3 choice] named "random 2 of 3d6"
output 2d6 named "2d6"
Which gives this result:
What that choice function (called with a d3) is doing is picking randomly between 3 cases: choosing the 2 highest dice from the 3d6, choosing the 2 lowest, or choosing the highest and lowest. As you can see, the results end up being the same as just rolling 2d6.
SdjzSdjz
\$\begingroup\$ Wow, this is now so obvious. I focused completely on the impact of the 3rd die on the choice, while it is in fact not relevant at all from a probability aspect. It is only perceived relevant for a player. Thanks for giving the proof as well. I overthought this so hard that I completely missed the simplicity of it. Cheers! \$\endgroup\$ – Topple Jul 10 '19 at 11:21
\$\begingroup\$ The jargon/proof version of this answer is: "No matter which die you remove, you are still convolving two density functions of 1d6 into a single density of 2d6. The result is always the same and the player choice is completely illusory." +1 for communicating this without jargon. \$\endgroup\$ – Novak Jul 11 '19 at 2:26
\$\begingroup\$ Well, the third dice might be relevant to the probability since it's player choice, not random choice. If the players can see the numbers rolled on 3d6 and pick 2 of them, that isn't really well modelled by picking a random 2 of 3d6 if they have any reason to prefer particular outcomes (regardless of whether those outcomes are actually better). If they think they want high numbers then it'll be equivalent to [highest 2 of 3d6]. If they want a particular number (or range) then that will have a significantly higher probability than it does in a regular 2d6 distribution. \$\endgroup\$ – Ben Jul 11 '19 at 4:07
\$\begingroup\$ @Ben yes, this is my expectation of the roll 3 pick 2 mechanic. However, it is incredibly difficult to predict what players want to do. A specific range could be implemented in an AnyDice function, however it goes a bit outside of the scope of my question. Would be interesting though. \$\endgroup\$ – Topple Jul 11 '19 at 8:51
Two random dice from 3d6 is the same as 2d6
No matter how many dice you roll, each die roll is stochastically independent. This means none of the individual dice alters the probability distribution of the outcome of any other die in any way, and it follows that rolling dice that are simply ignored in the final result can simply be not rolled at all. Intuitively, you could give the player the choice before rolling and the result wouldn't be changed at all.
For a simpler example case, consider that you want the distribution of a random 1d6 from 2d6 (so one die less rolled, and one die less in result). Let's call the dice \$R\$ and \$G\$ for red and green respectively, and assume they're identical otherwise. This is what the usual distribution of the 2d6 would look like:
\$ \begin{array}{|c|c|c|c|c|c|c|} \hline & \textbf{G=1} & \textbf{G=2} & \textbf{G=3} & \textbf{G=4} & \textbf{G=5} & \textbf{G=6} \\ \hline \textbf{R=1} & {2} & {3} & {4} & {5} & {6} & {7}\\ \hline \textbf{R=2} & {3} & {4} & {5} & {6} & {7} & {8}\\ \hline \textbf{R=3} & {4} & {5} & {6} & {7} & {8} & {9}\\ \hline \textbf{R=4} & {5} & {6} & {7} & {8} & {9} & {10}\\ \hline \textbf{R=5} & {6} & {7} & {8} & {9} & {10} & {11}\\ \hline \textbf{R=6} & {7} & {8} & {9} & {10} & {11} & {12}\\ \hline \end{array} \$
Each cell has an equal probability: 1/36. If we ignore one of the dice, say \$G\$, it'll look like this:
\$ \begin{array}{|c|c|c|c|c|c|c|} \hline & \textbf{G=1} & \textbf{G=2} & \textbf{G=3} & \textbf{G=4} & \textbf{G=5} & \textbf{G=6} \\ \hline \textbf{R=1} & {1} & {1} & {1} & {1} & {1} & {1}\\ \hline \textbf{R=2} & {2} & {2} & {2} & {2} & {2} & {2}\\ \hline \textbf{R=3} & {3} & {3} & {3} & {3} & {3} & {3}\\ \hline \textbf{R=4} & {4} & {4} & {4} & {4} & {4} & {4}\\ \hline \textbf{R=5} & {5} & {5} & {5} & {5} & {5} & {5}\\ \hline \textbf{R=6} & {6} & {6} & {6} & {6} & {6} & {6}\\ \hline \end{array} \$
Each cell still has a 1/36 probability, but there are now only six distinct outcomes, each with a total probability of 1/6. The \$G\$ die doesn't matter anymore and we might as well have not rolled it at all. The reverse case where we omit \$R\$ from the result is identical; swap the labels of the second array if you don't believe me! You can construct a similar, three dimensional grid to assure yourself that this remains true in the "3d6, pick two at random" case.
kviirikviiri
\$\begingroup\$ Thank you for your answer and some more in-depth literature. \$\endgroup\$ – Topple Jul 11 '19 at 8:51
I don't know much about AnyDice, but here's how I would
Calculate this in Python
from collections import defaultdict as dd
result = dd(int)
for x in range(1,7):
for y in range(1,7):
for z in range(1,7):
a,b,c = sorted((x,y,z))
result[a+b, a+c, b+c] += 1
accum = dd(int)
for key,value in result.items():
for a in set(key):
accum[a]+=value
for k in sorted(accum.keys()):
print(k, accum[k], accum[k]/216)
for k in sorted(result.keys()):
print(k, result[k], result[k]/216)
Which (with some formatting) gives the below tables. The first lists the number of times each value is an option (that is, rolling three 1s gives you the option to choose 2 once, not three times). The probability column doesn't sum to 1 because of the inherent overlap caused by the choice you give the players.
The second, much longer, table lists the choice of three options first, then how many times it shows up, then its ultimate probability.
\$ \begin{array}{|c|c|c|} \hline \textbf{Value} & \textbf{Is an option N times} & \textbf{Probability} \\ \hline 2 & 16 & 0.07407407407407407 \\ \hline 3 & 30 & 0.1388888888888889 \\ \hline 4 & 46 & 0.21296296296296297 \\ \hline 5 & 60 & 0.2777777777777778 \\ \hline 6 & 76 & 0.35185185185185186 \\ \hline 7 & 90 & 0.4166666666666667 \\ \hline 8 & 76 & 0.35185185185185186 \\ \hline 9 & 60 & 0.2777777777777778 \\ \hline 10 & 46 & 0.21296296296296297 \\ \hline 11 & 30 & 0.1388888888888889 \\ \hline 12 & 16 & 0.07407407407407407 \\ \hline \end{array} \$
\$ \begin{array}{|c|c|c|} \hline \textbf{Options} & \textbf{Count} & \textbf{Probability} \\ \hline (2, 2, 2) & 1 & 0.004629629629629629 \\ \hline (2, 3, 3) & 3 & 0.013888888888888888 \\ \hline (2, 4, 4) & 3 & 0.013888888888888888 \\ \hline (2, 5, 5) & 3 & 0.013888888888888888 \\ \hline (2, 6, 6) & 3 & 0.013888888888888888 \\ \hline (2, 7, 7) & 3 & 0.013888888888888888 \\ \hline (3, 3, 4) & 3 & 0.013888888888888888 \\ \hline (3, 4, 5) & 6 & 0.027777777777777776 \\ \hline (3, 5, 6) & 6 & 0.027777777777777776 \\ \hline (3, 6, 7) & 6 & 0.027777777777777776 \\ \hline (3, 7, 8) & 6 & 0.027777777777777776 \\ \hline (4, 4, 4) & 1 & 0.004629629629629629 \\ \hline (4, 4, 6) & 3 & 0.013888888888888888 \\ \hline (4, 5, 5) & 3 & 0.013888888888888888 \\ \hline (4, 5, 7) & 6 & 0.027777777777777776 \\ \hline (4, 6, 6) & 3 & 0.013888888888888888 \\ \hline (4, 6, 8) & 6 & 0.027777777777777776 \\ \hline (4, 7, 7) & 3 & 0.013888888888888888 \\ \hline (4, 7, 9) & 6 & 0.027777777777777776 \\ \hline (4, 8, 8) & 3 & 0.013888888888888888 \\ \hline (5, 5, 6) & 3 & 0.013888888888888888 \\ \hline (5, 5, 8) & 3 & 0.013888888888888888 \\ \hline (5, 6, 7) & 6 & 0.027777777777777776 \\ \hline (5, 6, 9) & 6 & 0.027777777777777776 \\ \hline (5, 7, 8) & 6 & 0.027777777777777776 \\ \hline (5, 7, 10) & 6 & 0.027777777777777776 \\ \hline (5, 8, 9) & 6 & 0.027777777777777776 \\ \hline (6, 6, 6) & 1 & 0.004629629629629629 \\ \hline (6, 6, 8) & 3 & 0.013888888888888888 \\ \hline (6, 6, 10) & 3 & 0.013888888888888888 \\ \hline (6, 7, 7) & 3 & 0.013888888888888888 \\ \hline (6, 7, 9) & 6 & 0.027777777777777776 \\ \hline (6, 7, 11) & 6 & 0.027777777777777776 \\ \hline (6, 8, 8) & 3 & 0.013888888888888888 \\ \hline (6, 8, 10) & 6 & 0.027777777777777776 \\ \hline (6, 9, 9) & 3 & 0.013888888888888888 \\ \hline (7, 7, 8) & 3 & 0.013888888888888888 \\ \hline (7, 7, 10) & 3 & 0.013888888888888888 \\ \hline (7, 7, 12) & 3 & 0.013888888888888888 \\ \hline (7, 8, 9) & 6 & 0.027777777777777776 \\ \hline (7, 8, 11) & 6 & 0.027777777777777776 \\ \hline (7, 9, 10) & 6 & 0.027777777777777776 \\ \hline (8, 8, 8) & 1 & 0.004629629629629629 \\ \hline (8, 8, 10) & 3 & 0.013888888888888888 \\ \hline (8, 8, 12) & 3 & 0.013888888888888888 \\ \hline (8, 9, 9) & 3 & 0.013888888888888888 \\ \hline (8, 9, 11) & 6 & 0.027777777777777776 \\ \hline (8, 10, 10) & 3 & 0.013888888888888888 \\ \hline (9, 9, 10) & 3 & 0.013888888888888888 \\ \hline (9, 9, 12) & 3 & 0.013888888888888888 \\ \hline (9, 10, 11) & 6 & 0.027777777777777776 \\ \hline (10, 10, 10) & 1 & 0.004629629629629629 \\ \hline (10, 10, 12) & 3 & 0.013888888888888888 \\ \hline (10, 11, 11) & 3 & 0.013888888888888888 \\ \hline (11, 11, 12) & 3 & 0.013888888888888888 \\ \hline (12, 12, 12) & 1 & 0.004629629629629629 \\ \hline \end{array} \$
Joel HarmonJoel Harmon
\$\begingroup\$ Always nice to have some Python creep in, thanks. \$\endgroup\$ – Topple Jul 11 '19 at 8:52
Not the answer you're looking for? Browse other questions tagged statistics anydice or ask your own question.
How do I calculate dice probability in the A Song of Ice and Fire system?
Help with AnyDice Script for a d6 Dice Pool
How can I get the highest or lowest values of an irregular dice pool in AnyDice?
Help with Anydice function for 3d10 dice pool
How can I model this "Party Draft Pool" ability score generation method in AnyDice?
How to calculate conditional probabilities in AnyDice?
How can I calculate the probability of complete success with this mixed dice pool using AnyDice?
How can I model the probabilities for this mechanic involving cancelling out dice pools in AnyDice?
How can I model the probabilities for highest and lowest of opposing dice pools? | CommonCrawl |
Assessing the expression of immunosuppressive cytokines in the newly diagnosed systemic lupus Erythematosus patients: a focus on B cells
Mitra Abbasifard1,
Zahra Kamiab2,3,
Mohammad Hasani1,
Amir Rahnama4,
Pooya Saeed-Askari1 &
Hossein Khorramdelazad ORCID: orcid.org/0000-0002-0653-27245
The immunosuppressive effects of regulatory B-cells (Bregs) and their immunosuppressive cytokines on immune responses in autoimmune disorders, mainly systemic lupus erythematosus (SLE), have been recently established. Therefore, the purpose of this article has been the exploration of the expressions of cytokines produced by B cells in newly diagnosed SLE patients.
The findings demonstrated that the gene expression of IL-10, TGF-β, IL-35, PD-L1, and FasL was significantly up-regulated in SLE patients compared to healthy subjects (P < 0.05). Additionally, the results revealed that serum levels of IL-10, TGF-β, IL-35, PD-L1 were remarkably increased in patients with SLE compared to healthy subjects (P < 0.0001). However, serum levels of IL-10 and TGF-β decreased significantly with increasing SLEDAI score in studied patients (P < 0.05).
It was concluded that the release of anti-inflammatory cytokines, particularly IL-10 and TGF-β, might inhibit immune responses and autoreactive immune cells in a compensatory manner in SLE patients with mild to moderate disease activity.
The systemic lupus erythematosus (SLE) is a multifactorial chronic autoimmune disease that is more common in women [1]. Evidence showed that genetic and environmental factors, including cigarette smoking, drugs, ultraviolet (UV) light, chemical substances, gut microbiota, and viral infections, could be involved in SLE onset [1, 2]. Regarding the existence of an imbalance between apoptosis and removal of apoptotic substances in SLE patients, nuclear antigens such as histones, centromere proteins, single- and double-stranded deoxyribonucleic acid (ss- and ds-DNA), nucleosome, Smith antigen (Sm Ag), Ro and La proteins, as well as ribonucleoproteins (RNPs) become exposed to the immune system cells and components [3]. These autoantigens are able to stimulate B-cells to produce auto-antibodies such as anti-nuclear antibody (ANA) and anti-double-stranded DNA (ds-DNA) antibody as well as other inflammatory mediators [4].
Previous studies revealed that regulatory immune cells, such as regulatory T cells (Tregs) and B regs with immunosuppressive properties, are involved in the homeostasis processes [5]. Recent experimental and human studies have further indicated that Bregs can suppress inflammatory responses via production and secretion of anti-inflammatory cytokines like IL-10, IL-35, and TGF-β as well as expression of inhibitory molecules [6, 7]. Besides, Bregs come up from common progenitor of transitional 2-marginal zone precursor (T2-MZP) B cells capable of being autoreactive following interaction with pathogens and even activating release anti-inflammatory mediators [8]. Some investigations in this respect have correspondingly suggested that Bregs are impaired in inflammatory autoimmune disorders such as SLE, rheumatoid arthritis (RA), and Graves' disease [9]. However, signals necessary for differentiation of Bregs have remained poorly understood, and previous studies have revealed that, under normal conditions, plasmacytoid dendritic cells (pDCs) would stimulate differentiation of CD19 + CD24hiCD38hi immature B-cells into the CD24+CD38hi mature Bregs, which could produce IL-10 through release of interferon-alpha (IFN-α) and the cluster of differentiation 40 (CD40) engagement. Conversely, the mentioned Bregs can have an inhibitory effect on the generation of IFN-α through pDCs by releasing IL-10 [10]. In SLE, the cross-talk between pDCs and Bregs has defected, and pDCs fail to trigger differentiating CD19+CD24hiCD38hi B-cells into IL-10-producing CD24+CD38hi Bregs [10]. In addition, Bregs can suppress CD4+ T-cell proliferation, induce Foxp3+ regulatory T and Tr1 cells, and suppress T helper (Th) 1 (Th1). Bregs also can stimulate Th17 and CD8+ effector T-cells differentiation through cell-cell interactions and release of the anti-inflammatory cytokine, including TGF-β, IL-35, and IL-10. As well, TGF-β and IL-10, which are secreted by Bregs, can have an inhibitory effect on antigen-presenting function, cytokine secretion by DCs, neutrophils, natural killer (NK) cells, M1 macrophages, and conversely, induction of M2 macrophage differentiation.
Moreover, FasL (or CD178) or PD-L1 (or CD274) that are expressed on the Bregs surface are involved in the apoptosis of effector T cells following ligation with PD-1 (CD279) and Fas (CD95) on the surface of mentioned T-cells [11]. Since changes in immune cells phenotype, as well as their number and function, are closely related to the disease activity, measuring the produced mediators and inhibitory molecules by regulatory immune cells before treatment based on diseases activity status could be critical in further understanding the role of the modulatory mechanisms of the immune system in the pathogenesis of SLE. Therefore, this study aimed to measure inhibitory molecules' expression on B cells' surface and anti-inflammatory cytokines produced by B cells in newly diagnosed SLE patients compared to healthy subjects.
Twenty-three patients suffering from SLE and thirty normal-age and gender-matched subjects were enrolled in this study. The demographic features of SLE patients and healthy subjects are shown in Table 1. All the patients were classified as individuals having an active disease (SLEDAI ≥4). The mean ± SD of SLEDAI in patients was 9.73 ± 6.01and we divided patients into three groups (SLEDAI I; 4 to 7, SLEDAI II; 7 to10, SLEDAI III more than 10) based on SLEDAI scores (min:5, max: 34, and median: 8). At the time of the blood collection, newly diagnosed SLE patients were not under treatment. Clinical and laboratory data of studied SLE patients are shown in Table 2.
Table 1 Some demographic features of patients and healthy subjects enrolled in the study
Table 2 Clinical and laboratory data of studied SLE patient
mRNA and protein expression of cytokines and inhibitory molecules
The data obtained from RT-PCR demonstrated the considerable rising trend in expressions of IL-10, IL-35 (EBI3 or IL-12P35), TGF-β, PDL-1, and FasL genes (P < 0.0001) in the B-cell population of the SLE patients compared with ones isolated from the healthy subjects (Fig. 1). Additionally, serum levels of IL-10 (control: 115 ± 22.1; patient: 158 ± 31.7; P < 0.001), IL-35 (control: 377 ± 63.84; patient: 499 ± 42.95; P < 0.001), and TGF-β (control: 28 ± 6.87; patient: 112 ± 19.47; P < 0.005) were significantly increased in SLE patients compared with the healthy subjects. The multiple comparison analysis in three groups of patients based on the SLEDAI score showed that by increasing disease activity, serum levels of IL-10 and TGF-β decreased significantly (P < 0.05), while in the case of IL-35, a significant difference was not observed. Serum levels fluctuations in SLE patient different SLEDAI scores are shown in Fig. 2.
Alteration in mRNA levels of FasL (a), PDL-1 [2], TGF-β (c), IL-10 (d), EBI3 (e), and IL-12p35 (f) in control and SLE patient groups. Data are presented as mean ± SD, * significant (P < 0.05)
The differences between serum levels of IL-10 (a), TGF- β (b), and IL-35 (c) in healthy subjects (control) and three groups of SLE patients based on SLEDAI I; 4 to 7, SLEDAI II; 7 to10, SLEDAI III more than 10 scores. *: significant difference between control and three SLEDAI groups, $: significant difference between SLEDAI I and SLEDAI II groups, &: significant difference between SLEDAI I, SLEDAI II, and SLEDAI III groups. Data are presented as mean ± SD, * significant (P < 0.05)
Cytokines, autoantibodies, inhibitory molecules and disease activity
The correlation analysis between disease activity scores and titres of autoantibodies showed that there was a positive association between ANA, anti-dsDNA, and anti-SSA/Ro with disease activity score but the correlations were not statistically significant (r = 0.2, p = 0.359; r = 0.108, p = 0.623; and r = 0.353, p = 0.1; respectively). Correlation matrix analysis showed that there was an association between serum level of TGF-β and SLEDAI (r = − 0.54, p = 0.007); PDL-1 gene and IL-35 serum level (r = 0.53, p = 0.008); IL-10 and TGF-β serum levels (r = 0.64, p = 0.001); IL-10 and EBI3 genes (r = − 0.56, p = 0.05) (Fig. 3).
The correlations between SLEDAI as the disease activity index in SLE patients and studied inhibitory molecules as well as immunosuppressive cytokines. The confidence interval of r is shown in the right rectangle (between − 1 and 1). * significant (P < 0.05)
Moreover, information of nine variables related to studied cytokines (EBI3 gene, TGF-β, IL-10, IL-12p35, PDL-1, FasL genes and IL-10, TGF-β, and IL-35 serum levels) were extracted and converted into two components 1, and 2 using PCA. The cumulative percent of the variance of the two components was 88.47%. HCA was also used to cluster patients based on cytokine obtained data. Data showed that healthy subjects (n = 30) were placed in clusters 1 and 2 (blue and yellow) and to the left of the dendrogram while SLE patients (n = 23) were placed in clusters 3 and 4 (red and gray) and to the right of the dendrogram (Fig. 4 a). Altogether, the results showed that considering components 1 and 2 data, SLE patients could be differentiated from healthy subjects (Fig. 4 b).
Principal component analysis and clustering of cytokine profiles in newly diagnosed SLE patients (n = 23) and healthy subjects (n = 30). Information of nine variables related to studied cytokines (EBI3, TGF-β, IL-10, IL-12p35, PDL-1, FasL genes and IL-10, TGF-β, and IL-35 serum levels) were extracted and converted into two components 1, and 2 using PCA based on the loading values. The cumulative percent of the variance of the two components was 88.47. Data showed that healthy subjects were placed in clusters 1 and 2 (blue and yellow) and to the left of the dendrogram, while SLE patients were placed in clusters 3 and 4 (red and gray) and to the right of the dendrogram (a). The results also showed that by considering components 1 and 2 data extracted from the mRNA and protein levels of IL-10, TGF-β, and IL-35, SLE patients could be differentiated from healthy subjects (b)
Additionally, the results of regression (PCR) showed that there is only a significant inverse relationship between component 2 and SLEDAI scores because the estimated confidence interval for component 2 did not include zero; therefore, on average, with the increase of one unit of component 2 in which IL-10 plays an important role, the SLEDAI score has decreased 1.39 units (Table 3). These results suggest that there is an inverse association between SLEDAI scores and immunomodulatory cytokines.
Table 3 bootstrap linear regression with 95% bias-corrected confidence interval (CI) for evaluation of the association between SLEDAI scores and cytokines adjusted to auto-antibodies in SLE patients
In this study, the expression of some inhibitory molecules and immunosuppressive cytokine in isolated B cells and sera from newly diagnosed SLE patients whose treatment had not been started was investigated. The study findings revealed that mRNA levels of IL-35 (EBI3 or IL-12p35), IL-10, and TGF-β in isolated B-cells from SLE patients were elevated compared to healthy subjects. Moreover, the level of IL-10, TGF-β, and IL-35 serum increased in the patients' peripheral blood affected by the SLE compared with healthy subjects. The findings in three groups of patients based on SLEDAI score also demonstrated that by increasing disease activity, serum levels of IL-10 and TGF-β decreased significantly (P < 0.05), although, in the case of IL-35, there was no remarkable difference between groups. Additionally, the mRNA levels of PDL-1 and FasL were significantly up-regulated in B-cells of the SLE patients compared to healthy subjects.
Evidence revealed that SLE is a multiple-organ autoimmune disease described by autoantibodies' increased production against autoantigens and enhanced immune complex deposition. Both T- and B-cells with different phenotypes are also involved in SLE's pathogenesis [12]. Previous studies on a subset of B cells with modulatory properties in patients with RA, primary Sjogren's syndrome (SjS), and SLE have demonstrated a hyperfunction in these B cells anti-inflammatory cytokines like IL-10 production and eventually homeostasis [9, 13, 14]. Several investigations have shown that depletion of pan B cell could help SLE patients, while phase III trials of the pan-B cell depletion were unsuccessful [15]. This problem can be due to the depletion of both the effector and regulatory B cell subsets [6]. These findings could accordingly confirm the immune-modulatory properties of Bregs and their involvement in the homeostasis mechanisms [16]. The first clinical investigation of peripheral blood B cells in SLE patients revealed that the proportion of CD5+ B cells producing IL-10 was significantly higher than healthy subjects [17]. Although there are several contradictions in this field, some studies have shown that the number of Bregs in autoimmune diseases such as RA and SLE decreased [13]; on the other hand, some experimental and human studies have demonstrated that the number of Bregs was increased in SLE which can ultimately depend on various factors such as treatment protocol and disease activity. In this study, the results show that apart from the number, a group of B cells might be hyperactivated to cause homeostasis in patients with lower SLEDAI scores [17,18,19,20,21]. In line with our findings, it was shown that in humans, IL-10+ B cell frequency in SLE patients was higher than the healthy control group [17, 20]. These findings also confirm the possibility of hyperactive Bregs involvement in balancing inflammatory and anti-inflammatory responses. Although mechanisms that mediate development and induction of the Bregs have thus far remained unclear, a study suggested that the increased IL-10+ Bregs may be secondary to expanded T follicular helper (Tfh) cells provide help to effector B cells and promote autoimmunity. Tfh cell and IL-10+ Bregs are increased in SLE and Tfh cell-derived IL-21 induced IL-10 [22]. In most human studies that have been done so far, SLE patients were under different treatment protocols, affecting the outcomes [13]. For instance, methotrexate can increase the number and possibly, the activity of B cells [23]. Another study on murine collagen-induced arthritis showed that treating animals with methotrexate, alone or together with cyclophosphamide, can reduce Bregs and DCs in lymph nodes and spleen [24].
In contrast, another investigation reported no associations between azathioprine, methotrexate, mycophenolate mofetil, and hydroxychloroquine with the expression of IL-10. However, the mentioned study showed that the serum level of IL-10 was twice more in SLE patients with Asian ethnicity than non-Asians [25]. The results of this study show that patients' ethnicity can also affect the level of anti-inflammatory cytokines. Recently it has been shown that systemic treatment with methotrexate may cause dysregulation of anti-inflammatory cytokines [26]. According to the mentioned studies, it seems that treatment with immunomodulatory drugs can change the cytokine profile and negatively affect the results. In our study, the results showed that in newly diagnosed SLE patients with mild or moderate disease activity who were not under any treatment, anti-inflammatory cytokines as well as inhibitory molecules (as suppressive indicators of B cells) expression increased by isolated B cells in compared to healthy subjects and by increasing the SLEDAI score, serum levels of IL-10 and TGF-β decreased remarkably. However, the balance between regulatory and effector functions is a finely regulated immunological process that is not fully understood.
In this study, it was observed that IL-35 levels significantly elevated in SLE patients compared to healthy subjects. In this regard, an investigation reported that IL-35 could induce B cells and stimulate their differentiation into a regulatory subset producing IL-35 and IL-10. These findings similarly proposed that IL-35 is a potential inducer of the autologous and IL-35+ B cells in treating inflammatory and autoimmune diseases [27]. It had been further confirmed that circulatory IL-10+ Bregs had significantly increased in SLE patients, accompanied by fluctuation such that the number and activity of the Bregs had elevated during SLE flares and reduced subsequent remission of the disease [22]. Our findings also showed that as the SLEDAI score increased, the levels of IL-10 and TGF-β significantly decreased in studied SLE patients, although no decrease was observed in the serum level of IL-35. These findings could confirm the role of disease activity in the fluctuation of the B cells' number and activity in SLE patients.
It should be noted that myeloid-derived suppressor cells (MDSCs) as an immune suppressor cell can also induce the development of Bregs through inducible nitric oxide synthase (iNOS) and relieve self-immune responses in a lupus animal model [28]. Moreover, Bregs are not the only source of immunosuppressive cytokines because other cells might produce and release IL-10, TGF-β, and IL-35 in response to inflammatory and pathological conditions to modulate the immune response. Evidence showed that the regulatory T cells (Tregs) and Bregs involvement in different phases of autoimmune diseases could be different. For instance, a study had shown that IL-10+ Bregs could mainly regulate the initiation phase of the disease, while Tregs could cooperatively inhibit late-phase via covering IL-10+ Bregs in an experimental autoimmune encephalomyelitis (EAE) model of multiple sclerosis (MS) disease [29].
Berthelot et al. had reported that induction of apoptosis by Bregs through PD-1, FasL, or TNF-related apoptosis-inducing ligand (TRAIL) differed with the nature of the target T cell. T-cell subsets' sensitivity probably shifted from Th1 and Th2 to Th17, resulting in a reduction in Tregs [30]. The outputs of this study also demonstrated that the expressions of the PDL-1 and FasL gene had significantly elevated in isolated B-cells of the patients with SLE compared to healthy subjects. It could be another mechanism of B cell for modulating the immune response by eliminating effector immune cells.
On the other hand, some studies found that regulatory immune cells' function to inhibit effector T cells in inflammatory and autoimmune illnesses like lupus was defective. However, in most of these investigations, serum levels of IL-35, IL-10, and TGF-β cytokines along with PDL-1 and FasL had not been measured in B cells. The wide range of disease activity in SLE patients and treatment protocols might lead to these discrepancies in the studies' findings.
Our findings showed a negative association between serum level of TGF-β and SLEDAI; IL-10 and EBI3 genes may be due to reduced Bregs function in patients with higher disease activity scores. However, a positive and significant correlation was observed between the PDL-1 gene and serum level of IL-35, as well as IL-10 and TGF-β serum levels [7]. Furthermore, regarding the studied cytokines' extracted data at mRNA and protein levels (components 1 and 2), clustering showed that it is possible to differentiate patients from healthy subjects using the information in components 1 and 2. Additionally, statistical analysis demonstrated that with increasing the SLEDAI score, component 2 was significantly decreased. Therefore, with increasing disease activity, the level of immunomodulatory cytokines produced by B cells in patients with SLE might significantly be decreased.
One of the strengths of this study was that all the patients enrolled in the study were newly diagnosed cases, and blood collection was done before treatment onset because, as discussed before, routine treatment for lupus patients could affect the number and activity of B cells and outcome of the studies [17]. The inaccessibility of the materials for fluorescence-activated cell sorting (FACS) technique to specific isolation and evaluation cytokine levels in Bregs was also declared one of the major limitations of this study.
Taken together, the results of this study indicated that a group of SLE patients B cells might modulate immune responses in mild to moderate disease activity by producing anti-inflammatory cytokines and expressing inhibitory molecules. The findings can also clarify SLE disease activity's effect on the fluctuation and expression of IL-10, TGF-β serum levels. However, the source of the studied cytokines might be other regulatory immune cells. Future studies are needed to fully elucidate the source of released anti-inflammatory cytokines and evaluate the producer regulatory cells' number and activity to a well understanding of the immunomodulatory mechanisms in SLE.
Twenty-three newly diagnosed SLE patients (mean age of 39.6 ± 13.4 years) diagnosed by a rheumatology specialist and 30 normal-age and gender-matched healthy subjects with a mean age of 34.95 ± 10.01 years have participated in this study. All SLE patients had at least four of the American College of Rheumatology (ACR) revised SLE criteria. Moreover, activities of the SLE disease have been evaluated through Systemic Lupus Erythematosus Disease Activity Index (SLEDAI), and all the patients showed active SLE (SLEDAI score > 4). All of the enrolled SLE patients had received no treatment up to the blood collection for the experiments. Additionally, No SLE patient has been receiving immunomodulatory medications include antimalarials, at the time of examination. Informed consent was received from each SLE patient and healthy subjects based on Declaration of Helsinki (DoH). Moreover, this investigation was approved by the Ethics Committee of Rafsanjan University of Medical Sciences, Rafsanjan, Iran (IR.RUMS.REC.1389.206).
Cytokine assay
Five mL of peripheral blood was obtained from the studied groups, and separation of serums was performed by low-speed centrifugation. Then, serum samples were kept at − 20 °C for further experiments. Next, ELISA kits were used in order to measure circulatory levels of anti-nuclear antibody (ANA) (TECAN, IBL, Hamburg, Germany), IL-10 (Abnova, KA0125, Taiwan), IL-35 (Cosabio, CSB E13126h, Wuhan, China), and TGF-β (Abnova, KA3108, Taiwan) based on the assay procedures documented in the users' instructions manual provided by the manufacturers. The coefficients of variation of intra- and inter-assays were 5 and 15%, respectively.
Isolation and storage of peripheral blood mononuclear cells (PBMCs)
First, to isolate peripheral blood mononuclear cells (PBMCs), peripheral blood samples were gently diluted in PBS. Next, PBMCs were isolated using Ficoll (Lymphodex, Inno-Train, Germany). For further experiments, fresh PBMCs were utilized.
B cells isolation
A nylon wool fiber (NWF) procedure using nylon woolpack (Polysciences Inc., USA) was employed to separate B cells. Briefly, the column washed with the media contained Dulbecco's phosphate-buffered saline (DPBS) consisting of 5% heat-inactivated fetal calf serum (FCS), RPMI-1640 with 10% FCS, Hank's balanced salt solution (HBSS) consisting of 10% FCS, McCoy's 5A medium with 10% FCS at 37 °C, Earle's saline (ES) containing 10% FCS, and prepared column incubated for one hour at 37 °C. Next, 1 × 108 viable PBMCs were added to each column in a volume of 2 ml of the media. The stop-cock was also opened and allowed the media for draining until the cell volume entered into the packaged wool. The column top was then further washed with an additional 2 ml of the media and allowed the washed column to enter the packaged wool. The other 2–5 ml of the media was added to the column for ensuring that the wool top had been covered with the medium, and it was then incubated for one hour at 37 °C. Besides, nonadherent T-cells have been collected by two washes. Afterward, adherent B-cells were collected by adding the media for filling the column as well as knocking it to dislodge the cells. Therefore, this column was plunged twice, and the collected cells spun down at 1200 rpm for ten minutes, and the supernatant has been removed. Finally, the cell pellet has been re-suspended in approximately 10 ml of the media.
Flowcytometry
All antibodies were purchased from R&D Systems (Minneapolis, MN, USA). Isolated B cells in the previous step were stained with mouse anti-human CD3e PE-conjugated monoclonal antibody (mAB) and either mouse anti-human CD19 fluorescein-conjugated mAB or mouse IgG1 fluorescein isotype control. The samples were read by the Cyflow Space flow cytometer (Partec, Germany), and data were analyzed by Flomax software (Partec, Germany). B cell purity was more than 80%, as measured by CD19 expression (Fig. 5).
The expression of CD19+ B cells in healthy subjects (a) and SLE patients (b). Isolated B cells flow cytometry analysis: CD19+/CD3−(Q1, A; 82.97 ± 9.9%, B; 80.14 ± 5.7%); CD19+/CD3+ (Q2, A; 0.9 ± 0.12%, B;1.23 ± 0.27%); CD19−/CD3− (Q3, A; 9 ± 2.5%, B;10.7 ± 3.26%); CD19−/CD3+ (Q4, A;7.13 ± 2.1%, B; 7.93 ± 1.99%). Data are presented as mean ± SD
Gene expression assay
Total RNA was extracted from isolated B cells by an RNA extraction kit (Pars Toos: Iran) and converted into cDNA through a cDNA synthesis kit (Pars Toos, Iran) with the random hexamer primer and oligo (dT). The reverse transcription process was performed according to the one-step protocol recommended by the mentioned manufacturer: 25 °C for 10 min, followed by 47 °C for 60 min, and the reaction was finished by 85 °C for five minutes, and finally, the microtubes were chilled on the ice. Moreover, the RT-PCR technique was performed by \( \frac{50 ng}{20\mu L} \) of the synthesized cDNA (as template), specific forward and reverse primers (concentration of the primers: 0.5 μM), high ROX SYBR Green Master Mix (Takara, Japan), and nuclease-free water using an Applied Biosystems RT-PCR system (ABI, StepOnePlus software, USA) with this program: 1 cycle of 95 °C for 15 min, 40 cycles of 95 °C for 5 s, and adjusted annealing temperature for 40 s. Notably, the melting curve protocol was also proceeded later by 10 s at 95 °C and then 10 s each at 0.2 °C enhancements between 62 and 95 °C. The sequences of the target and the reference genes are shown in Table 4. The RT-PCR was performed in triplicate, and β-actin was utilized as the reference gene to normalize the target genes' amplified signal. Moreover, the relative expression of the RT-PCR products was equally calculated via the 2-ΔΔCt formula. The dissociation phases, melting curve, and the quantitative analysis of data were also completed with the StepOnePlus® software version 2.3 (Applied Biosystem, Foster City, CA, USA).
Table 4 The sequences of primers used in the study
GraphPad Prism 7.03 (GraphPad Software, San Diego, CA) and the IBM SPSS 20 (SPSS; Inc.; Chicago; IL: USA) were used to fulfill statistical analysis. The one-sample Kolmogorov–Smirnov and Shapiro–Wilk tests further assessed the normality of the variables. Differences between the studied groups were also evaluated by Chi-square statistic, independent sample T-test, Mann–Whitney U test, one-way ANOVA (Tukey's multiple comparisons), and Kruskal–Wallis tests. Moreover, the Spearman correlation test was used to verify the correlation between disease activity and serum levels of autoantibodies and cytokines, as well as inhibitory molecules. The principal component analysis (PCA) was used to explore the obtained data from the studied cytokines [31]. Hierarchical Clustering on Principal Components (HCPC) function from the FactoMine R package was employed to apply an HCA-based on principal component scores estimated from PCA analysis. The Factoextra package was also used to visualize a dendrogram generated by the hierarchical clustering. In addition, a principal component regression (PCR) test was used to explore the relationship between disease activity (SLEDAI) and cytokines adjusted by auto-antibodies (as potential confounders). Due to the low number of SLE patients, we used a bootstrap technique to inference regression coefficients [32]. All data are presented as mean ± SD. A p-value of less than 0.05 was considered statistically significant.
The datasets used and/or analyzed during the current study available from the corresponding author on reasonable request.
Bregs:
Regulatory B-cells
SLE:
Systemic lupus erythematosus
ACR:
TGF-β:
PD-L:
Programmed death ligand 1
PD-1:
Programmed death 1
ds-DNA:
Double strain stranded deoxy ribonucleic acid
Sm Ag:
Smith antigen
RNPs:
Ribonucleoproteins
ANA:
Anti-nuclear antibody
T2-MZP:
Transitional 2-marginal zone precursor
RA:
pDCs:
Plasmacytoid dendritic cells
IFN-α:
Interferon alpha
T helper
NK:
Natural killer
SLEDAI:
Systemic lupus erythematosus disease activity index
DoH:
NWF:
Nylon wool fiber
PBMC:
Peripheral blood mononuclear cell
PCA:
HCPC:
Hierarchical clustering on principal components
Principal component regression
Fluorescence-activated cell sorting
Nagafuchi Y, Shoda H, Fujio K. Immune profiling and precision medicine in systemic lupus Erythematosus. Cells. 2019;8(2):140.
Vieira SM, Hiltensperger M, Kumar V, Zegarra-Ruiz D, Dehner C, Khan N, et al. Translocation of a gut pathobiont drives autoimmunity in mice and humans. Science. 2018;359(6380):1156–61.
Cozzani E, Drosera M, Gasparini G, Parodi A. Serology of Lupus Erythematosus: Correlation between Immunopathological Features and Clinical Aspects. Autoimmune Diseases. 2014;2014:321359.
Wahren-Herlenius M, Dörner T. Immunopathogenic mechanisms of systemic autoimmune disease. Lancet. 2013;382(9894):819–31..
Manna A, Aulakh S, Sher T, Ailawadhi S, Chanan-Khan A, Paulus A. CD38hi B-regulatory (B-reg) cells maintain pathological immune tolerance in chronic lymphocytic leukemia (CLL)/B cell diseases: potential therapeutic considerations. Am Assoc Immnol. 2019;202(1 Supplement):71.13-71.13.
Matsushita T. Regulatory and effector B cells: friends or foes? J Dermatol Sci. 2019;93(1):2–7.
Wang T, Mei Y, Li Z. Research progress on regulatory B cells in Systemic Lupus Erythematosus. Biomed Res Int. 2019;2019:7948687.
Mauri C, Bosma A. Immune regulatory function of B cells. Annu Rev Immunol. 2012;30:221–41.
Karim MR, Zhang H-Y, Yuan J, Sun Q, Wang Y-F. Regulatory B cells in seropositive myasthenia gravis versus healthy controls. Front Neurol. 2017;8:43.
Menon M, Blair PA, Isenberg DA, Mauri C. A regulatory feedback between plasmacytoid dendritic cells and regulatory B cells is aberrant in systemic lupus erythematosus. Immunity. 2016;44(3):683–97..
Yang M, Rui K, Wang S, Lu L. Regulatory B cells in autoimmune diseases. Cell Mol Immunol. 2013;10(2):122.
Lipsky PE. Systemic lupus erythematosus: an autoimmune disease of B cell hyperactivity. Nat Immunol. 2001;2(9):764.
Blair PA, Noreña LY, Flores-Borja F, Rawlings DJ, Isenberg DA, Ehrenstein MR, et al. CD19+ CD24hiCD38hi B cells exhibit regulatory capacity in healthy individuals but are functionally impaired in systemic lupus erythematosus patients. Immunity. 2010;32(1):129–40.
Habib J, Deng J, Lava N, Tyor W, Galipeau J. Blood B cell and regulatory subset content in multiple sclerosis patients. J Mult Scler (Foster City). 2015;2(2):1000139.
Merrill JT, Neuwelt CM, Wallace DJ, Shanahan JC, Latinis KM, Oates JC, et al. Efficacy and safety of rituximab in moderately-to-severely active systemic lupus erythematosus: the randomized, double-blind, phase II/III systemic lupus erythematosus evaluation of rituximab trial. Arthritis & Rheumatism. 2010;62(1):222–33.
Matsushita T. Regulatory B cells in mouse models of Systemic Lupus Erythematosus (SLE). In: Vitale G, Mion F, editors. Regulatory B cells: methods and protocols. New York, NY: Springer New York; 2014. p. 195-205.
Kashipaz MA, Huggins M, Lanyon P, Robins A, Powell R, Todd I. Assessment of Be1 and Be2 cells in systemic lupus erythematosus indicates elevated interleukin-10 producing CD5+ B cells. Lupus. 2003;12(5):356–63.
Watanabe R, Ishiura N, Nakashima H, Kuwano Y, Okochi H, Tamaki K, et al. Regulatory B cells (B10 cells) have a suppressive role in murine lupus: CD19 and B10 cell deficiency exacerbates systemic autoimmunity. J Immunol. 2010;184(9):4801–9.
Díaz-Alderete A, Crispin JC, Vargas-Rojas MI, Alcocer-Varela J. IL-10 production in B cells is confined to CD154+ cells in patients with systemic lupus erythematosus. J Autoimmun. 2004;23(4):379–83.
Iwata Y, Matsushita T, Horikawa M, DiLillo DJ, Yanaba K, Venturi GM, et al. Characterization of a rare IL-10–competent B-cell subset in humans that parallels mouse regulatory B10 cells. Blood. 2011;117(2):530–41.
Vadasz Z, Peri R, Eiza N, Slobodin G, Balbir-Gurman A, Toubi E. The expansion of CD25highIL-10highFoxP3high B regulatory cells is in association with SLE disease activity. J Immunol Res. 2015;2015:254245.
Yang X, Yang J, Chu Y, Xue Y, Xuan D, Zheng S, et al. T follicular helper cells and regulatory B cells dynamics in systemic lupus erythematosus. PLoS One. 2014;9(2):e88441.
Piper CJ, Wilkinson MGL, Deakin CT, Otto GW, Dowle S, Duurland CL, et al. CD19+ CD24hiCD38hi B cells are expanded in juvenile dermatomyositis and exhibit a pro-inflammatory phenotype after activation through toll-like receptor 7 and interferon-α. Front Immunol. 2018;9:1372.
Fan J, Luo J, Yan C, Hao R, Zhao X, Jia R, et al. Methotrexate, combined with cyclophosphamide attenuates murine collagen induced arthritis by modulating the expression level of Breg and DCs. Mol Immunol. 2017;90:106–17.
Godsell J, Rudloff I, Kandane-Rathnayake R, Hoi A, Nold MF, Morand EF, et al. Clinical associations of IL-10 and IL-37 in systemic lupus erythematosus. Sci Rep. 2016;6:34604.
Zdanowska N, Owczarczyk-Saczonek A, Czerwińska J, Nowakowski JJ, Kozera-Żywczyk A, Owczarek W, et al. Adalimumab and methotrexate affect the concentrations of regulatory cytokines (interleukin-10, transforming growth factor-β1, and interleukin-35) in patients with plaque psoriasis. Dermatologic Ther. 2020;14:e14153.
Wang R-X, Yu C-R, Dambuza IM, Mahdi RM, Dolinska MB, Sergeev YV, et al. Interleukin-35 induces regulatory B cells that suppress autoimmune disease. Nat Med. 2014;20(6):633.
Park MJ, Lee SH, Kim EK, Lee EJ, Park SH, Kwok SK, et al. Myeloid-derived suppressor cells induce the expansion of regulatory B cells and ameliorate autoimmunity in the sanroque mouse model of systemic lupus erythematosus. Arthritis & Rheumatology. 2016;68(11):2717–27.
Matsushita T, Horikawa M, Iwata Y, Tedder TF. Regulatory B cells (B10 cells) and regulatory T cells have independent roles in controlling experimental autoimmune encephalomyelitis initiation and late-phase immunopathogenesis. J Immunol. 2010;185(4):2240–52.
Berthelot J-M, Jamin C, Amrouche K, Le Goff B, Maugars Y, Youinou P. Regulatory B cells play a key role in immune system balance. Joint Bone Spine. 2013;80(1):18–22.
Lê S, Josse J, Husson F. FactoMineR: an R package for multivariate analysis. J Stat Softw. 2008;25(1):1–18.
Canty A, Ripley B. boot: Bootstrap R (S-Plus) Functions [Software]; 2016.
Rafsanjan University of Medical Sciences was supported this study.
No specific funding was obtained for this study.
Department of Internal Medicine, Ali-Ibn Abi-Talib Hospital, School of Medicine; Molecular Medicine Research Center, Research Institute of Basic Medical Sciences, Rafsanjan University of Medical Sciences, Rafsanjan, Iran
Mitra Abbasifard, Mohammad Hasani & Pooya Saeed-Askari
Clinical Research Development Unit, Ali-Ibn Abi-Talib Hospital, Rafsanjan University of Medical Sciences, Rafsanjan, Iran
Zahra Kamiab
Department of Family Medicine, School of Medicine, Rafsanjan University of Medical Sciences, Rafsanjan, Iran
Department of Pathology, School of Medicine, Rafsanjan University of Medical Sciences, Rafsanjan, Iran
Amir Rahnama
Department of Immunology, School of Medicine; Molecular Medicine Research Center, Research Institute of Basic Medical Sciences, Rafsanjan University of Medical Sciences, Rafsanjan, Iran
Hossein Khorramdelazad
Mitra Abbasifard
Mohammad Hasani
Pooya Saeed-Askari
All authors read and approved the final manuscript. Conception and design: MA, HK. Development of methodology: PS, HK. Acquisition of data (patient interviews, treatment data, assessments of oncologists): HK, MA, AR, ZK, MH. Analysis and interpretation of data: ZK. Writing, review, and/or revision of manuscript: HK. Administrative, technical, or material support: MA, HK.
Correspondence to Hossein Khorramdelazad.
This study was approved by the ethics committee of Rafsanjan University of Medical Sciences. Written informed consent was obtained from all participants. All procedures performed in this study were in accordance with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
All authors declare no competing interests.
Abbasifard, M., Kamiab, Z., Hasani, M. et al. Assessing the expression of immunosuppressive cytokines in the newly diagnosed systemic lupus Erythematosus patients: a focus on B cells. BMC Immunol 21, 58 (2020). https://doi.org/10.1186/s12865-020-00388-3
Regulatory B-cells (Bregs)
Systemic lupus Erythematosus (SLE)
Anti-inflammatory cytokine | CommonCrawl |
AMS Home Publications Membership Meetings & Conferences News & Public Outreach Notices of the AMS The Profession Programs Government Relations Education Giving to the AMS About the AMS
MathSciNet® Member Directory Bookstore Journals Employment Services Giving to the AMS
Bookstore MathSciNet® Meetings Journals Membership Employment Services Giving to the AMS About the AMS
The AMS website will be down on Saturday December 11th from 8:30 am to approximately 11:30 am for maintenance.
ISSN 1088-6850(online) ISSN 0002-9947(print)
Journals Home Search My Subscriptions Subscribe
Your device is paired with
for another days.
Previous issue | This issue | Most recent issue | All issues (1900–Present) | Next issue | Previous article | Articles in press | Recently published articles | Next article
Functional equations and their related operads
Author: Vahagn Minasian
Journal: Trans. Amer. Math. Soc. 357 (2005), 4413-4443
MSC (2000): Primary 55U15; Secondary 18D50, 55P99
DOI: https://doi.org/10.1090/S0002-9947-05-03974-7
Published electronically: June 9, 2005
MathSciNet review: 2156716
Full-text PDF Free Access
Abstract | References | Similar Articles | Additional Information
Abstract: Using functional equations, we define functors that generalize standard examples from calculus of one variable. Examples of such functors are discussed, and their Taylor towers are computed. We also show that these functors factor through objects enriched over the homology of little $n$-cubes operads and discuss the relationship between functors defined via functional equations and operads. In addition, we compute the differentials of the forgetful functor from the category of $n$-Poisson algebras in terms of the homology of configuration spaces.
References [Enhancements On Off] (What's this?)
Glen E. Bredon, Topology and geometry, Graduate Texts in Mathematics, vol. 139, Springer-Verlag, New York, 1993. MR 1224675
Frederick R. Cohen, Thomas J. Lada, and J. Peter May, The homology of iterated loop spaces, Lecture Notes in Mathematics, Vol. 533, Springer-Verlag, Berlin-New York, 1976. MR 0436146
A. D. Elmendorf, I. Kriz, M. A. Mandell, and J. P. May, Rings, modules, and algebras in stable homotopy theory, Mathematical Surveys and Monographs, vol. 47, American Mathematical Society, Providence, RI, 1997. With an appendix by M. Cole. MR 1417719
Edward Fadell and Lee Neuwirth, Configuration spaces, Math. Scand. 10 (1962), 111-118. MR 141126, DOI https://doi.org/10.7146/math.scand.a-10517
E.Getzler and J.D.S.Jones. Operads, Homotopy Algebra, and Iterated Integrals for Double Loop Spaces. Preprint. 1993
Thomas G. Goodwillie, Calculus. I. The first derivative of pseudoisotopy theory, $K$-Theory 4 (1990), no. 1, 1–27. MR 1076523, DOI https://doi.org/10.1007/BF00534191
Thomas G. Goodwillie, Calculus. II. Analytic functors, $K$-Theory 5 (1991/92), no. 4, 295–332. MR 1162445, DOI https://doi.org/10.1007/BF00535644
Thomas G. Goodwillie, Calculus. III. Taylor series, Geom. Topol. 7 (2003), 645–711. MR 2026544, DOI https://doi.org/10.2140/gt.2003.7.645
B. Johnson and R. McCarthy, Deriving calculus with cotriples, Trans. Amer. Math. Soc. 356 (2004), no. 2, 757–803. MR 2022719, DOI https://doi.org/10.1090/S0002-9947-03-03318-X
Igor Kříž and J. P. May, Operads, algebras, modules and motives, Astérisque 233 (1995), iv+145pp (English, with English and French summaries). MR 1361938
R.McCarthy and V.Minasian. On Triples, Operads and Generalized Homogeneous Functors. Preprint, 2004
John W. Milnor and John C. Moore, On the structure of Hopf algebras, Ann. of Math. (2) 81 (1965), 211–264. MR 174052, DOI https://doi.org/10.2307/1970615
Vahagn Minasian, André-Quillen spectral sequence for $THH$, Topology Appl. 129 (2003), no. 3, 273–280. MR 1962984, DOI https://doi.org/10.1016/S0166-8641%2802%2900184-0
Daniel Quillen, On the (co-) homology of commutative rings, Applications of Categorical Algebra (Proc. Sympos. Pure Math., Vol. XVII, New York, 1968) Amer. Math. Soc., Providence, R.I., 1970, pp. 65–87. MR 0257068
Alan Robinson, Gamma homology, Lie representations and $E_\infty $ multiplications, Invent. Math. 152 (2003), no. 2, 331–348. MR 1974890, DOI https://doi.org/10.1007/s00222-002-0272-5
Charles A. Weibel, An introduction to homological algebra, Cambridge Studies in Advanced Mathematics, vol. 38, Cambridge University Press, Cambridge, 1994. MR 1269324
G.Bredon. Topology and Geometry. New York: Springer-Verlag, 1993
F.Cohen. The Homology of $\mathcal {C}_{n+1}$-spaces, $n \geq 0$ in "The homology of iterated loop spaces�. Lecture Notes in Math., 533, 207-351. 1976
A.D.Elmendorf, I.Kriz, M.A.Mandell and J.P.May. Rings, Modules, and Algebras in Stable Homotopy Theory. Mathematical Surveys and Monographs. Vol. 47. AMS. 1996
E.Fadell and L.Neuwirth. Configuration Spaces. Math. Scan., 10, 119-126. 1962
T.Goodwillie. Calculus I: The First Derivative of Pseudoisotopy Theory. K-theory, 4, 1-27. 1990
T.Goodwillie. Calculus II: Analytic Functors. K-theory, 5, 295-332. 1992
T.Goodwillie. Calculus III: The Taylor Series of a Homotopy functor. Geometry and Topology, 7, 645-711. 2003
B.Johnson and R.McCarthy. Deriving Calculus with Cotriples. Trans. Amer. Math. Soc., 356, 757-804. 2004
I.Kriz and J.P.May. Operads, Algebras, Modules and Motives. Astérisque, 233, 1995
J.Milnor and J.C.Moor. On the Structure of Hopf Algebras. Ann. of Math. (2), 81, 211-264. 1965
V.Minasian. André-Quillen spectral sequence for $THH$. Topology Appl., 129, 273-280. 2003
D.G.Quillen. On the (co-)homology of commutative rings. AMS Proc Symp. Pure Math., 17, 65-87. 1970
A.Robinson. Gamma homology, Lie representations and $E_{\infty }$ multiplications. Invent. Math., 152, 331-348. 2003
C.A.Weibel. An Introduction to Homological Algebra. Cambridge Studies in Advanced Mathematics, 38. 1994
Retrieve articles in Transactions of the American Mathematical Society with MSC (2000): 55U15, 18D50, 55P99
Retrieve articles in all journals with MSC (2000): 55U15, 18D50, 55P99
Vahagn Minasian
Affiliation: Department of Mathematics, Brown University, Providence, Rhode Island 02912-1917
Email: [email protected]
Keywords: Little $n$-cubes operads, Goodwillie calculus, functional equations
Received by editor(s): June 16, 2003
Article copyright: © Copyright 2005 American Mathematical Society
The copyright for this article reverts to public domain 28 years after publication.
Join the AMS
AMS Conferences
News & Public Outreach
Math in the Media
Mathematical Imagery
Mathematical Moments
Data on the Profession
Fellows of the AMS
Mathematics Research Communities
AMS Fellowships
Collaborations and position statements
Appropriations Process Primer
Congressional briefings and exhibitions
About the AMS
Jobs at AMS
Notices of the AMS · Bulletin of the AMS
American Mathematical Society · 201 Charles Street Providence, Rhode Island 02904-2213 · 401-455-4000 or 800-321-4267
AMS, American Mathematical Society, the tri-colored AMS logo, and Advancing research, Creating connections, are trademarks and services marks of the American Mathematical Society and registered in the U.S. Patent and Trademark Office.
© Copyright , American Mathematical Society · Privacy Statement · Terms of Use · Accessibility | CommonCrawl |
Existence of solution for a resonant p-Laplacian second-order m-point boundary value problem on the half-line with two dimensional kernel
O. F. Imaga ORCID: orcid.org/0000-0002-3252-35301 &
S. A. Iyase1
The existence of a solution for a second-order p-Laplacian boundary value problem at resonance with two dimensional kernel will be considered in this paper. A semi-projector, the Ge and Ren extension of Mawhin's coincidence degree theory, and algebraic processes will be used to establish existence results, while an example will be given to validate our result.
The following second-order p-Laplacian boundary value problem will be considered in this work:
$$ \left \{ \textstyle\begin{array}{l} (\varphi_{p}(u'(t)))' + g(t, u(t),u'(t))=0,\quad t \in(0,+\infty), \\ \varphi_{p}(u'(0)) = \int_{0}^{+\infty } v(t)\varphi_{p}(u'(t))\,dt,\qquad \varphi_{p}(u'(+\infty ))= \sum_{j=1}^{m} \beta_{j} \int_{0} ^{\eta_{j}} \varphi_{p}(u'(t))\,dt, \end{array}\displaystyle \right . $$
where \(g:[0,+\infty) \times\mathbb{R}^{2} \to\mathbb{R}\) is an \(L^{1}\)-Carathéodory function, \(0< \eta_{1}<\eta_{2} < \cdots\leq\eta _{m} < +\infty\), \(\beta_{j} \in\mathbb{R}\), \(j=1,2, \ldots, m\), \(v \in L^{1}[0,+\infty )\), \(v(t) >0\) on \([0,+\infty )\), and
$$\varphi_{p} (s) = \vert s \vert ^{p-2}s,\quad p \geq2. $$
There are many real life applications of boundary value problems with integral and multi-point boundary conditions on an unbounded domain, for instance, in the study of physical phenomena such as the study of an unsteady flow of fluid through a semi-infinite porous medium and radially symmetric solutions of nonlinear elliptic equations. They also arise in plasma physics and in the study of drain flows; see [1–3].
Boundary value problems are said to be at resonance if the solution of the corresponding homogeneous boundary value problem is non-trivial. Many authors in the literature have considered resonant problems. López-Somoza and Minhós [4] obtained existence results for a resonant multi-point second-order boundary value problem on the half-line, Capitanelli, Fragapane and vivaldi [5] addressed regularity results for p-Laplacians in pre-fractal domains, while Jiang and Kosmatov [6] considered resonant p-Laplacian problems with functional boundary conditions. For other work on resonant problems without p-Laplacian operator, see [7–10], while for problems with the p-Laplacian operator, see [11–16]. In [17], Jiang considered the following p-Laplacian operator:
$$ \left \{ \textstyle\begin{array}{l} (\varphi_{p}(u'))' + f(t,u,u') =0, \quad0< t< +\infty ,\\ u(0) =0, \qquad\varphi_{p}(u(+\infty ))=\sum_{i=1}^{n} \alpha_{i} \varphi _{p}(u'(\xi_{i})), \end{array}\displaystyle \right . $$
where \(\alpha_{i} >0\), \(i=1,2,\dots,n\), \(\sum_{i=1}^{n} \alpha_{i}=1\).
To the best of our knowledge p-Laplacian problems with two dimensional kernel on the half-line have not received much attention in the literature.
We will give the required lemmas, theorem and definitions in Sect. 2, Sect. 3 will be dedicated to stating and proving condition for existence of solutions, while an example will be given in Sect. 4 to validate the result obtained.
In this section, we will give some definitions and lemmas that will be used in this work.
Definition 2.1
([11])
A map \(w:[0,+\infty) \times\mathbb{R}^{2} \to\mathbb{R}\) is \(L^{1}[0,+\infty )\)-Carathéodory, if the following conditions are satisfied:
for each \((d,e) \in\mathbb{R}^{2}\), the mapping \(t \to w(t,d,e)\) is Lebesgue measurable;
for a.e. \(t\in[0,\infty)\), the mapping \((d,e) \to w(t,d,e)\) is continuous on \(\mathbb{R}^{2}\);
for each \(k>0\), there exists \(\varphi_{k}(t) \in L_{1}[0,+\infty)\) such that, for a.e. \(t \in[0,\infty)\) and every \((d,e) \in[-k,k]\), we have
$$\bigl\vert w(t,d,e) \bigr\vert \leq\varphi_{k}(t). $$
Let \((U, \Vert\cdot\Vert_{U})\) and \((Z, \Vert\cdot\Vert_{Z})\) be two Banach spaces. The continuous operator \(M:U \cap \operatorname {dom}M \to Z\), is quasi-linear if the following hold:
\(\operatorname {Im}M = M(U\cap \operatorname {dom}M)\) is a closed subset of Z;
\(\ker M = \{ u \in U \cap \operatorname {dom}M :Mu=0\}\) is linearly homeomorphic to \(\mathbb{R}^{n}\), \(n < +\infty \).
Let U be a Banach space and \(U_{1} \subset U\) a subspace. Let \(P, Q:U \to U_{1}\) be operators, then P is a projector if
\(P^{2} =P\);
\(P(\lambda_{1}u_{1} + \lambda_{2}u_{2})=\lambda_{1}Pu_{1} + \lambda _{2}Pu_{2}\) where \(u_{1}, u_{2} \in U\), \(\lambda_{1}, \lambda_{2} \in\mathbb{R}\),
and Q is a semi-projector if
\(Q^{2} = Q\);
\(Q(\lambda u) = \lambda Qu\) where \(u \in U\), \(\lambda\in \mathbb{R}\).
Let \(U_{1} = \ker M\) and \(U_{2}\) be the complement space of \(U_{1}\) in U, then \(U=U_{1} \oplus U_{2}\). Similarly, if \(Z_{1}\) is a subspace of Z and \(Z_{2}\) is the complement space of \(Z_{1}\) in Z, then \(Z = Z_{1} \oplus Z_{2}\). Let \(P: U \to U_{1}\) be a projector, \(Q:Z \to Z_{1}\) be a semi-projector and \(\varOmega\subset U\) an open bounded set with \(\theta\in\varOmega\) the origin. Also, let \(N_{1}\) be denoted by N, let \(N_{\lambda}: \overline{\varOmega} \to Z\), where \(\lambda\in [0,1]\) is a continuous operator and \(\varSigma_{\lambda} =\{ u \in \overline{\varOmega}:Mu=N_{\lambda}u \}\).
Let U be the space of all continuous and bounded vector-valued functions on \([0,+\infty )\) and \(X \subset U\). Then X is said to be relatively compact if the following statements hold:
X is bounded in U;
all functions from X are equicontinuous on any compact subinterval of \([0,+\infty )\);
all functions from X are equiconvergent at ∞, i.e. \(\forall \epsilon>0\), ∃ a \(T = T(\epsilon)\) such that \(\Vert A(t) - A(+\infty )\Vert_{R^{n}}<\epsilon\)\(\forall t >T\) and \(A \in X\).
Let \(N_{\lambda}: \overline{\varOmega} \to Z\), \(\lambda\in[0,1]\) be a continuous operator. The operator \(N_{\lambda}\) is said to be M-compact in Ω̅ if there exist a vector subspace \(Z_{1} \in Z\) such that \(\dim Z_{1} = \dim U_{1}\) and a compact and continuous operator \(R:\overline{\varOmega} \times[0,1] \to U_{2}\) such that, for \(\lambda\in[0,1]\), the following holds:
\((I - Q)N_{\lambda}(\overline{\varOmega}) \subset \operatorname {Im}M \subset(I-B)Z\),
\(QN_{\lambda}u=0 \Leftrightarrow QNu=0\), \(\lambda\in(0,1)\),
\(R(\cdot,u)\) is the zero operator and \(R(\cdot, \lambda )|_{\varSigma_{\lambda}}=(I-P)|_{\varSigma_{\lambda}}\),
\(M[P+R(\cdot, \lambda)]=(I-Q)N_{\lambda}\).
Lemma 2.1
The following are properties of the function\(\varphi_{p} : \mathbb{R} \to\mathbb{R}\):
It is continuous, monotonically increasing and invertible. Its inverse\(\varphi_{p} ^{-1} =\varphi_{q}\), where\(q >1\)and satisfies\(\frac{1}{p}+\frac{1}{q}=1\).
For any\(x, y >0\),
\(\varphi_{p} (x +y) \leq\varphi_{p} (x) + \varphi_{p}(y)\), if\(1 < p <2\),
\(\varphi_{p}(x+y) \leq2^{p-2}(\varphi_{p}(x) + \varphi _{p}(y))\), if\(p \geq2\).
Theorem 2.1
Let\((U, \Vert\cdot\Vert_{U})\)and\((Z, \Vert\cdot\Vert_{Z})\)be two Banach spaces and\(\varOmega\subset U\)an open and bounded set. If the following holds:
(\(A_{1}\)):
The operator\(M: U \cap \operatorname {dom}M \to Z\)is a quasi-linear,
the operator\(N_{\lambda}:\overline{\varOmega} \to Z\), \(\lambda\in[0,1]\)isM-compact,
\(Mu \neq N _{\lambda}u\), for\(\lambda\in(0,1)\), \(u \in\partial\varOmega\cap \operatorname {dom}M\),
\(\deg\{JQN, \varOmega\cap\ker M,0 \} \neq0\), where the operator\(J:Z_{1} \to U_{1}\)is a homeomorphism with\(J(\theta)=\theta \)and deg is the Brouwer degree,
then the equation\(Mu = Nu\)has at least one solution inΩ̅.
$$\begin{aligned} U = \Bigl\{ u \in C^{2}[0,+\infty): u, \varphi_{p} \bigl(u'\bigr) \in \mathit{AC}[0,+\infty ), \lim_{t \to +\infty }e^{-t} \bigl\vert u^{(i)}(t) \bigr\vert \text{ exist, } i=0,1 \Bigr\} , \end{aligned}$$
with the norm \(\Vert u \Vert= \max\{\Vert u \Vert_{\infty}, \Vert u' \Vert_{\infty}\}\) defined on U where \(\Vert u \Vert_{\infty} =\sup_{t \in[0,+\infty )}e^{-t}|u|\). The space \((U, \Vert\cdot\Vert)\) by a standard argument is a Banach Space.
Let \(Z = L^{1}[0,+\infty )\) with the norm \(\Vert w \Vert_{L^{1}} = \int_{0} ^{+\infty }|w(v)|\,dv\). Define M as a continuous operator such that \(M:\operatorname {dom}M \subset U \to Z\) where
$$\begin{aligned} \begin{aligned} \operatorname {dom}M &= \Biggl\{ u \in U: \bigl(\varphi_{p} \bigl(u' \bigr)\bigr)' \in L^{1}[0,+\infty ), \varphi _{p} \bigl(u'(0)\bigr)= \int_{0}^{+\infty }v(t)\varphi_{p} \bigl(u'(t)\bigr)\,dt, \\ &\quad \lim_{t \to +\infty } \bigl(\varphi_{p} \bigl(u'(t)\bigr)\bigr)= \sum_{j=1}^{m} \beta_{j} \int _{0} ^{\eta_{j}} \varphi_{p} \bigl(u'(t)\bigr)\,dt \Biggr\} \end{aligned} \end{aligned}$$
and \(Mu = (\varphi_{p}(u'(t)))'\). We will define the operator \(N_{\lambda}u : \overline{\varOmega} \to Z\) by
$$N_{\lambda}u = -\lambda g\bigl(t, u(t),u'(t)\bigr), \quad \lambda\in[0,1], t \in[0,+\infty ), $$
where \(\varOmega\subset U\) is an open and bounded set. Then the boundary value problem (1.1) in abstract form is \(Mu=Nu\).
Throughout the paper we will assume the hypotheses:
(\(\phi_{1}\)):
\(\sum_{j=1}^{m} \beta_{j} \eta_{j} = \int_{0}^{+\infty }v(t)\, dt=1\);
$$C = \left| \textstyle\begin{array}{c@{\quad}c} Q_{1}e^{-t} & Q_{2}e^{-t} \\ Q_{1}te^{-t} & Q_{2}te^{-t} \end{array}\displaystyle \right| := \left| \textstyle\begin{array}{c@{\quad}c}c_{11} & c_{12} \\ c_{21} & c_{22} \end{array}\displaystyle \right| =c_{11}\cdot c_{22} - c_{12} \cdot c_{21} \neq0,$$
$$Q_{1}w =\int_{0}^{+\infty }v(t) \int_{0}^{t} w(s)\,ds\,dt,$$
$$Q_{2}w=\sum_{j=1}^{m} \beta_{j}\int_{0}^{\eta_{j}}\int_{t}^{+\infty }w(s)\,ds\,dt.$$
It is obvious that \(\ker M = \{u \in \operatorname {dom}M:u=a +bt: a, b \in\mathbb {R}, t \in[0,+\infty )\}\) and \(\operatorname {Im}M = \{w:w \in Z, Q_{1}w = Q_{2}w=0\}\).
Clearly, \(\ker M=2\) is linearly homeomorphic to \(\mathbb{R}^{2}\) and \(\operatorname {Im}M \subset Z\) is closed, hence, the operator \(M:\operatorname {dom}M \subset U \to Z\) is quasi-linear.
We next define the projector \(P:U \to U_{1}\) as
$$ Pu(t)=u(0) + u'(0)t, \quad u \in U, $$
and the operators \(\Delta_{1}, \Delta_{2} : Z \to Z_{1}\) as
$$\Delta_{1}w=\frac{1}{C}(\delta_{11}Q_{1}w + \delta_{12}Q_{2}w)e^{-t},$$
$$\Delta_{2}w=\frac{1}{C}(\delta_{21}Q_{1}w + \delta _{22}Q_{2}w)e^{-t},$$
where \(\delta_{ij}\) is the co-factor of \(c_{ij}\), \(i,j=1,2\). Then the operator \(Q: Z \to Z_{1}\) will be defined as
$$ Qw = (\Delta_{1}w) + (\Delta_{2}w) \cdot t $$
where \(Z_{1}\) is the complement space of ImM in Z. Then the operator \(Q: Z \to Z_{1}\) can easily be shown to be a semi-projector.
Let the operator \(R:U \times[0,1] \to U_{2}\) be defined by
$$\begin{aligned} R(u,\lambda) (t)&= \int_{0}^{t} \varphi_{q} \biggl( \varphi_{p}\bigl(u'(0)\bigr) - \int _{0}^{\tau}\lambda\bigl(g \bigl(s,u(s),u'(s)\bigr) - QNu(s)\bigr)\,ds \biggr)\,d\tau- u'(0)t, \end{aligned}$$
where \(U_{2}\) is the complement space of kerM in U.
Ifgis a\(L^{1}[0,+\infty )\)-Carathéodory function, then\(R:U \times[0,1] \to U_{2}\)isM-compact.
Let the set \(\varOmega\subset U\) be nonempty, open and bounded, then, for \(u \in\overline{\varOmega}\), there exists a constant \(k >0\) such that \(\Vert u \Vert< k\). Since g is an \(L^{1}[0,+\infty )\)-Carathéodory function, there exists \(\psi_{k} \in L^{1}[0,+\infty )\) such that, for a.e. \(t \in[0,+\infty )\) and \(\lambda\in[0,1]\), we have
$$\begin{aligned} \Vert N_{\lambda}u \Vert _{L^{1}}+ \Vert QN _{\lambda }u \Vert _{L^{1}}&= \int_{0}^{+\infty } \bigl\vert N_{\lambda}u(v) \bigr\vert \,dv + \int_{0}^{+\infty } \bigl\vert QN _{\lambda}u(v) \bigr\vert \,dv \\ & \leq \Vert \psi_{k} \Vert _{L^{1}}+ \Vert QNu \Vert _{L^{1}}. \end{aligned}$$
Now for any \(u \in\overline{\varOmega}\), \(\lambda\in[0,1]\), we have
$$\begin{aligned} \begin{aligned} [b]\bigl\Vert R(u,\lambda) \bigr\Vert _{\infty} &= \sup_{t \in[0,+\infty )}e^{-t} \bigl\vert R(u,\lambda) (t) \bigr\vert \leq\frac{1}{e} \varphi_{q} \bigl(\varphi_{p}(k) + \Vert Nu_{\lambda} \Vert _{L^{1}} + \Vert QN_{\lambda}u \Vert _{L^{1}}\bigr)+k \\ &\leq\varphi_{q} \bigl( \varphi_{p}(k)+ \Vert \psi_{k} \Vert _{L^{1}} + \Vert QNu \Vert _{L^{1}}\bigr) +k< +\infty \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}[b] \bigl\Vert R'(u, \lambda) \bigr\Vert _{\infty} &= \sup_{t \in[0,+\infty )}e^{-t} \bigl\vert R'(u,\lambda) (t) \bigr\vert \\ &\leq\varphi_{q} \bigl(\varphi_{p}(k)+ \Vert \psi_{k} \Vert _{L^{1}} + \Vert QNu \Vert _{L^{1}}\bigr)+k < +\infty . \end{aligned} \end{aligned}$$
Therefore it follows from (2.3) and (2.4) that \(R(u, \lambda)\overline{\varOmega}\) is uniformly bounded.
Next we show that \(R(u, \lambda)\overline{\varOmega}\) is equicontinuous in a compact set. Let \(u \in\overline{\varOmega}\), \(\lambda\in[0,1]\). For any \(T \in[0,+\infty )\), with \(t_{1}, t_{2} \in [0,T]\) where \(t_{1} < t_{2}\), we have
$$\begin{aligned} &\bigl\vert e^{t_{2}}R(u, \lambda) (t_{2})-e^{t_{1}}R(u,\lambda) (t_{1}) \bigr\vert \\ &\quad= \biggl\vert e^{t_{2}} \int_{0}^{t_{2}} \varphi_{q} \biggl( \varphi _{p}\bigl(u'(0)\bigr)- \int_{0}^{\tau} \lambda\bigl(g \bigl(s,u(s),u'(s)\bigr)-QNu(s)\bigr)\,ds \biggr)\,d \tau-u'(0)t_{2}e^{-t_{2}} \\ &\qquad - e^{-t_{1}} \int_{0}^{-t_{1}} \varphi_{q} \biggl( \varphi_{p}\bigl(u'(0)\bigr)- \int _{0}^{\tau} \lambda\bigl(g \bigl(s,u(s),u'(s)\bigr) -QNu(s)\bigr)\,ds \biggr)\,d\tau+ u'(0)t_{1}e^{t_{1}} \biggr\vert \\ &\quad\leq \bigl\vert e^{t_{2}}-e^{-t_{1}} \bigr\vert \int_{0}^{t_{1}} \varphi_{q} \biggl( \varphi _{p}\bigl( \bigl\vert u'(0) \bigr\vert \bigr)+ \int_{0}^{\tau} \lambda \bigl\vert g \bigl(s,u(s),u'(s)\bigr)-QNu(s) \bigr\vert \,ds \biggr)\,d\tau \\ &\qquad + e^{-t_{2}} \int_{t_{1}}^{t_{2}} \varphi_{q} \biggl( \varphi _{p}\bigl( \bigl\vert u'(0) \bigr\vert \bigr)+ \int_{0}^{\tau} \lambda \bigl\vert g \bigl(s,u(s),u'(s)\bigr)-QNu(s) \bigr\vert \,ds \biggr)\,d\tau \\ &\qquad + \bigl\vert t_{1}e^{-t_{1}}-t_{2}e^{-t_{2}} \bigr\vert \bigl\vert u'(0) \bigr\vert \\ &\quad\leq\bigl(e^{t_{2}}-e^{-t_{1}}\bigr)\varphi_{q} \bigl( \varphi_{p}(k) + \Vert \psi_{k} \Vert _{L^{1}} + \Vert QNu \Vert _{L^{1}}\bigr)t_{1} \\ &\qquad+ e^{-t_{2}}\varphi_{q} \bigl( \varphi _{p}(k) + \Vert \psi_{k} \Vert _{L^{1}} + \Vert QNu \Vert _{L^{1}}\bigr) (t_{2} -t_{1}) + \bigl\vert t_{1}e^{-t_{1}}-t_{2}e^{-t_{2}} \bigr\vert r \\&\quad\to 0, \quad\text{as } t_{1} \to t_{2}, \end{aligned}$$
$$\begin{aligned} \begin{aligned}[b] &\bigl\vert e^{-t_{2}}R'(u, \lambda) (t_{2})-e^{-t_{1}}R'(u,\lambda) (t_{1}) \bigr\vert \\ &\quad= \biggl\vert e^{t_{2}}\varphi_{q} \biggl( \varphi_{p}\bigl(u'(0)\bigr)- \int_{0}^{t_{2}} \lambda\bigl(g \bigl(s,u(s),u'(s)\bigr)-QNu(s)\bigr)\,ds \biggr) -u'(0)e^{-t_{2}} \\ & \qquad- e^{-t_{1}}\varphi_{q} \biggl( \varphi_{p} \bigl(u'(0)\bigr)- \int_{0}^{t_{1}} \lambda \bigl(g \bigl(s,u(s),u'(s)\bigr) -QNu(s)\bigr)\,ds \biggr) + u'(0)e^{-t_{1}} \biggr\vert \\ &\quad\leq\bigl(e^{t_{2}}-e^{-t_{1}}\bigr)\varphi_{q} \bigl( \varphi_{p}(k) + \Vert \psi_{k} \Vert _{L^{1}} + \Vert QNu \Vert _{L^{1}}\bigr) + \bigl(e^{-t_{1}}-e^{-t_{2}}\bigr)k \\ &\quad\to0, \quad\text{as } t_{1} \to t_{2}. \end{aligned} \end{aligned}$$
Thus, (2.5) and (2.6) show that \(R(u,\lambda )\overline{\varOmega}\) is equicontinuous on \([0,T]\).
We will now prove that \(R(u,\lambda)\overline{\varOmega}\) is equiconvergent at ∞. Since \(\lim_{t \to +\infty }e^{-t}=0\),
$$\begin{aligned} \lim_{t \to +\infty } e^{-t}R(u,\lambda) (t)= \lim _{t \to +\infty } e^{-t}R'(u,\lambda) (t)=0. \end{aligned}$$
$$\begin{aligned} \begin{aligned}[b] &\Bigl\vert e^{-t}R(u, \lambda) (t)-\lim_{t \to +\infty }e^{-t}R(u,\lambda) (t) \Bigr\vert \\ &\quad= \biggl\vert e^{-t} \int_{0}^{t} \varphi_{q} \biggl( \varphi_{p}\bigl(u'(0)\bigr) - \int _{0}^{\tau} \lambda\bigl(g \bigl(s,u(s),u'(s)\bigr)-QNu(s)\bigr)\,ds \biggr)\,d\tau -te^{-t}u'(0) -0 \biggr\vert \hspace{-24pt} \\ &\quad\leq te^{-t} \varphi_{q} \bigl( \varphi_{p}(k) + \Vert \psi_{k} \Vert _{L^{1}} + \Vert QNu \Vert _{L^{1}}\bigr) + kte^{-t} \\&\quad\to0, \quad\text{uniformly as } t \to +\infty , \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}[b] &\Bigl\vert e^{-t}R'(u, \lambda) (t)-\lim_{t \to +\infty }e^{-t}R'(u, \lambda) (t) \Bigr\vert \\ &\quad= \biggl\vert e^{-t}\varphi_{q} \biggl( \varphi_{p}\bigl(u'(0)\bigr) - \int_{0}^{t} \lambda\bigl(g \bigl(s,u(s),u'(s)\bigr)-QNu(s)\bigr)\,ds \biggr) -e^{-t}u'(0) - 0 \biggr\vert \\ &\quad\leq e^{-t} \varphi_{q} \bigl( \varphi_{p}(k) + \Vert \psi_{k} \Vert _{L^{1}} + \Vert QNu \Vert _{L^{1}}\bigr) + ke^{-t} \\&\quad\to0, \quad\text{uniformly as } t \to +\infty . \end{aligned} \end{aligned}$$
Therefore \(R(u,\lambda)\overline{\varOmega}\) is equiconvergent at +∞. It then follows from Definition 2.4 that \(R(u,\lambda)\) is compact. □
The operator\(N_{\lambda}\)isM-compact.
Since Q is a semi-projector, \(Q(I-Q)N_{\lambda}(\overline{\varOmega })=0\). Hence, \((I-Q)N_{\lambda}(\overline{\varOmega})\subset\ker Q = \operatorname {Im}M\). Conversely, let \(w \in \operatorname {Im}M\), then \(w=w -Qw = (I-Q)w \in (I-Q)Z\). Hence, condition (i) of definition (2.5) is satisfied. It can easily be shown that condition (ii) of Definition 2.5 holds.
Let \(u \in\varSigma_{\lambda}=\{u \in\overline{\varOmega}:Mu = N_{\lambda}u\}\), then \(N_{\lambda}u \in \operatorname {Im}M\). Hence, \(QN_{\lambda }u=0\) and \(R(u,0)(t)=0\). From \((\varphi_{p}(u'(t)))' + g(t, u(t),u'(t))=0\), \(t \in(0,+\infty)\), we have
$$\begin{aligned} R(u,\lambda) (t)&= \int_{0}^{t} \varphi_{q} \biggl( \varphi_{p}\bigl(u'(0)\bigr)- \int _{0}^{\tau} \lambda g \bigl(s,u(s),u'(s) \bigr)\,ds \biggr)\,d\tau- u'(0)t \\ &= \int_{0}^{t} \varphi_{q} \bigl( \varphi_{p}\bigl(u'(0)\bigr)+ \varphi_{p} \bigl(u'(\tau )\bigr)-\varphi_{p}\bigl(u'(0) \bigr) \bigr)\,d\tau- u'(0)t \\ &= u(t) - u(0)-u'(0)t=u(t)-Pu(t)=\bigl[(I-P)u\bigr](t). \end{aligned}$$
Therefore, condition (iii) of definition (2.5) holds.
Let \(u \in\overline{\varOmega}\). Since \(Mu = (\varphi_{p}(u'(t)))'\) we have
$$\begin{aligned} M\bigl[Pu +R(u,\lambda)\bigr](t)&= \big(\varphi_{p}\bigl(\bigl[Pu + R(u,\lambda)\bigr]\big)'(t)\bigr)' \\ &= \biggl(\varphi_{p} \biggl[u(0)+u'(0)t + \int_{0}^{t} \varphi_{q} \biggl( \varphi_{p}\bigl(u'(0)\bigr)- \int_{0}^{\tau} \lambda\bigl(g \bigl(s,u(s),u'(s)\bigr) \\ &\quad - QN(s)\bigr)\,ds \biggr) \,d\tau-u'(0)t \biggr]' \biggr)' \\ &= \biggl(\varphi_{p}\bigl(u'(0)\bigr)- \int_{0}^{\tau} \lambda\bigl(g \bigl(s,u(s),u'(s)\bigr) - QN(s)\bigr)\,ds \biggr)'=(I-Q)N_{\lambda}(t), \end{aligned}$$
that is, condition (iv) of definition (2.5) holds. Hence, \(N_{\lambda}\) is M-compact in Ω̅. □
Existence result
In this section, the conditions for existence of solutions for boundary value problem (1.1) will be stated and proved.
Assumegis a\(L^{[}0,+\infty )\)-Carathéodory function and the following hypotheses hold:
(\(H_{1}\)):
there exist functions\(x_{1}(t), x_{2}(t), x_{3}(t) \in L^{1}[0,+\infty )\)such that, for a.e. \(t \in[0,+\infty )\),
$$ \bigl\vert g\bigl(t,u,u'\bigr) \bigr\vert \leq e^{-t}\bigl(x_{1}(t) \vert u \vert ^{p-1} + x_{2}(t) \bigl\vert u' \bigr\vert ^{p-1}\bigr) + x_{3}(t), $$
for\(u \in \operatorname {dom}M\)there exists a constant\(A_{0} >0\), such that, if\(|u(t)|>A_{0}\)for\(t \in[0,+\infty )\)or\(|u'(t)|>A_{0}\)for\(t \in[0,+\infty ]\), then either
$$ Q_{1}Nu(t) \neq0 \quad\textit{or} \quad Q_{2}Nu(t) \neq0, \quad t \in [0,+\infty ), $$
there exists a constant\(l>0\)such that, for\(|a| >l\)or\(|b|>l\)either
$$ Q_{1}N(a +bt) + Q_{2}N(a +bt) < 0, \quad t \in[0,+\infty ), $$
$$ Q_{1}N(a +bt) + Q_{2}N(a +bt) >0, \quad t \in[0,+\infty ), $$
where\(a, b \in\mathbb{R}\), \(|a| + |b| > l\)and\(t \in[0,+\infty )\).
Then the boundary value problem (1.1) has at least one solution, provided
$$2^{2q-4}\bigl( \Vert x_{2} \Vert _{L^{1}} + 2^{q-2} \Vert x_{1} \Vert _{L^{1}}\bigr) < 1, \quad\textit{for } 1 < p \leq2, $$
$$\varphi_{q}\bigl( \Vert x_{1} \Vert _{L^{1}} + \Vert x_{2} \Vert _{L^{1}}\bigr) < 1, \quad \textit{for } p>2. $$
The following lemmas are also needed to prove our main result.
The set\(\varOmega_{1} = \{ u \in \operatorname {dom}M :Mu = N_{\lambda}u \textit{ for some } \lambda\in(0,1)\}\)is bounded.
Let \(u \in\varOmega_{1}\) then \(N_{\lambda}u \in \operatorname {Im}M= \ker Q\). Hence, \(QN_{\lambda}u = 0\) and \(QNu=0\). It follows from \(H_{2}\) that there exist \(t_{0}, t_{1} \in[0,+\infty )\), such that \(|u(t_{0})| \leq A_{0}\) and \(|u'(t_{1})| \leq A_{0}\). From \(u(t)=u(t_{0}) + \int_{t_{0}}^{t}u'(v)\,dv\), we have
$$\begin{aligned} \bigl\vert u(t) \bigr\vert = \biggl\vert u(t_{0}) - \int_{t_{0}}^{t}u'(s)\,ds \biggr\vert \leq A_{0} + \vert t-t_{0} \vert \bigl\Vert u' \bigr\Vert _{\infty}. \end{aligned}$$
$$ \Vert u \Vert _{\infty} = \sup_{t \to\infty}e^{-t} \bigl\vert u(t) \bigr\vert \leq A_{0} + \bigl\Vert u' \bigr\Vert _{\infty}. $$
Also, from \(Mu = N_{\lambda}u\), we get
$$\varphi_{p}\bigl(u'(t)\bigr)=- \int_{t_{1}}^{t} \lambda g\bigl(s,u(s),u'(s) \bigr)\,ds + \varphi _{p}\bigl(u(t_{1})\bigr). $$
In view of (3.1), we have
$$\begin{aligned} \begin{aligned}[b] \bigl\vert \bigl(u'(t) \bigr) \bigr\vert &\leq\varphi_{q} \biggl(\varphi_{p}(A_{0})+ \int_{0}^{+\infty } \bigl(x_{1}(t) \bigl\vert \varphi_{p}\bigl(u(t)\bigr) \bigr\vert + x_{2}(t) \bigl\vert \varphi_{p}\bigl(u' \bigr) \bigr\vert + x_{3}(t)\bigr)\,dt \biggr) \\ &\leq\varphi_{q} \bigl(\varphi_{p}(A_{0})+ \Vert x_{1} \Vert _{L^{1}}\varphi _{p} \bigl( \Vert u \Vert _{\infty}\bigr) + \Vert x_{2} \Vert _{L^{1}}\varphi_{p}\bigl( \bigl\Vert u' \bigr\Vert _{\infty}\bigr) + \Vert x_{3} \Vert _{L^{1}} \bigr) \\ &\leq\varphi_{q} \bigl(\varphi_{p}(A_{0})+ \Vert x_{1} \Vert _{L^{1}}\varphi _{p} \bigl(A_{0}+ \bigl\Vert u' \bigr\Vert _{\infty}\bigr) + \Vert x_{2} \Vert _{L^{1}} \varphi _{p}\bigl( \bigl\Vert u' \bigr\Vert _{\infty}\bigr) + \Vert x_{3} \Vert _{L^{1}} \bigr). \end{aligned} \end{aligned}$$
If \(1 < p \leq2\), it follows from Lemma 2.1 that
$$ \bigl\Vert u' \bigr\Vert _{\infty} \leq\frac{2^{2q-4}[\varphi_{q}( \Vert x_{3} \Vert _{L^{1}}) + A_{0}(1+2^{q-2} \Vert x_{1} \Vert _{L^{1}}}{1-2^{2q-4}( \Vert x_{2} \Vert _{L^{1}} + 2^{q-2} \Vert x_{1} \Vert _{L^{1}})}. $$
If \(p >2\) then, by Lemma 2.1, we get
$$ \bigl\Vert u' \bigr\Vert _{\infty} \leq\frac{A_{0}(1+ \varphi_{q}( \Vert x_{1} \Vert _{L^{1}}) + \varphi_{q}( \Vert x_{3} \Vert _{L^{1}})}{1-\varphi_{q}( \Vert x_{1} \Vert _{L^{1}} + \Vert x_{2} \Vert _{L^{1}})}. $$
Since \(\Vert u \Vert= \max\{\Vert u \Vert_{\infty}, \Vert u' \Vert _{\infty}\} \leq A_{0} + \Vert u' \Vert_{\infty}\), in view of (3.7) and (3.8), \(\varOmega_{1}\) is bounded. □
If\(\varOmega_{2} =\{u \in\ker M:-\lambda u +(1-\lambda)JQNu=0, \lambda\in[0,1]\}\), \(J: \operatorname {Im}Q \to\ker M\)is a homomorphism, then\(\varOmega_{2}\)is bounded.
For \(a, b \in R\), let \(J: \operatorname {Im}Q \to\ker M\) be defined by
$$ J(a+bt)= \frac{1}{C}\bigl[\delta_{11} \vert a \vert +\delta_{12} \vert b \vert + \bigl(\delta _{21} \vert a \vert + \delta_{22} \vert b \vert \bigr)t)\bigr]e^{-t}. $$
If (3.3) holds, for any \(u(t) = a + bt \in\varOmega_{3}\), from \(-\lambda u + (1-\lambda)JQNu =0\), we obtain
{δ11(−λ|a|+(1−λ)Q1N(a+bt))+δ12(−λ|b|+(1−λ)Q2N(a+bt))=0,δ21(−λ|a|+(1−λ)Q1N(a+bt))+δ22(−λ|b|+(1−λ)Q2N(a+bt))=0.
Since \(C \neq0\),
$$\begin{aligned} \begin{gathered} \lambda \vert a \vert =(1 - \lambda)Q_{1}N(a +bt), \\ \lambda \vert b \vert =(1 - \lambda)Q_{2}N(a +bt). \end{gathered} \end{aligned}$$
From (3.10), when \(\lambda=1\), \(a = b =0\). When \(\lambda=0\),
$$Q_{1}N(a+bt) + Q_{2}N(a+bt)=0, $$
which contradicts (3.3) and (3.4), hence from (\(H_{3}\)), \(|a| \leq l\) and \(|b| \leq l\). For \(\lambda\in(0,1)\), in view of (3.3) and (3.10), we have
$$0\leq\lambda\bigl( \vert a \vert + \vert b \vert \bigr) =(1-\lambda) \bigl[Q_{1}N(a +bt) + Q_{2}N(a+bt)\bigr] < 0, $$
which contradicts \(\lambda(|a|+|b|) \geq0\). Hence, (\(H_{3}\)), \(|a| \leq l\) and \(|b| \leq l\), thus \(\Vert u \Vert\leq2l\). Therefore \(\varOmega _{2}\) is bounded. □
Proof of Theorem 3.1
Since M is quasi-linear, condition (\(A_{1}\)) of Theorem 2.1 holds, Lemma 2.2 proved (\(A_{2}\)), while Lemma 3.1 shows that (\(A_{3}\)) holds.
Let \(\varOmega\supset\varOmega_{1} \cup\varOmega_{2}\) be a nonempty, open and bounded set, \(u \in \operatorname {dom}M \cap\partial\varOmega\), \(H(u,\lambda)=-\lambda u +(1-\lambda)JQNu\), and J be as defined in Lemma 3.2 then \(H(u,\lambda) \neq0\). Therefore by the homotopy property of the Brouwer degree
$$\begin{aligned} \deg\{JQN|_{\overline{\varOmega} \cap\ker M},\varOmega\cap\ker M,0\}&=\deg\bigl\{ H(\cdot, 0), \varOmega\cap\ker M,0\bigr\} \\ &=\deg\bigl\{ H(\cdot,1),\varOmega\cap\ker M,0\bigr\} \\ &=\deg\{-I,\varOmega\cap\ker M,0\} \neq0. \end{aligned}$$
Hence, condition (\(A_{4}\)) of Theorem 2.1 also holds. □
Since all the conditions of Theorem 2.1 are satisfied, the abstract equation \(Mu=Nu\) has at least one solution in \(\overline {\varOmega} \cap \operatorname {dom}M\). Hence, (1.1) has at least one solution.
Consider the following boundary value problem:
$$ \left \{ \textstyle\begin{array}{l} (\varphi_{4}(u'(t)))' + e^{-t-2} \sin t \cdot u^{3}+e^{-t-3}\cos t\cdot u^{\prime3} + \frac{1}{6}e^{-6t}=0, \quad t \in (0,+\infty ), \\ \varphi_{4}(u'(0))=\int_{0}^{+\infty }2e^{-2t}\varphi_{4}(u'(t))\,dt, \qquad \varphi_{4}(u'(+\infty ))= 9\int_{0}^{1/9}\varphi_{4}(u'(t))\,dt. \end{array}\displaystyle \right . $$
Here \(v(t) =2e^{-2t}\), \(p=4\), \(q=\frac{4}{3}\), \(\beta_{1} = 9\), \(\eta _{1} = \frac{1}{9}\), \(x_{1}= e^{-t-2}\sin t\) and \(x_{2}=e^{-t-3}\cos t\). Therefore, \(\sum_{j=1}^{1}\beta_{j} \eta_{j}=1\), \(\int_{0}^{+\infty }v(t)\, dt=1\), \(C \neq0\) and \(\varphi_{q}(\Vert x_{1} \Vert_{L^{1}} + \Vert x_{2} \Vert_{L^{2}})<1\). It can easily be seen that conditions (\(H_{1}\))–(\(H_{3}\)) hold. Hence, (4.1) has at least one solution.
Agarwal, R.P., O'Regan, D.: Infinite interval problems modeling phenomena which arise in the theory of plasma and electrical potential theory. Stud. Appl. Math. 111(3), 339–358 (2003)
MathSciNet Article Google Scholar
Gala, S., Liu, Q., Ragusa, M.A.: A new regularity criterion for the nematic liquid crystal flows. Appl. Anal. 91(9), 1741–1747 (2012)
Gala, S., Ragusa, M.A.: Logarithmically improved regularity criterion for the Boussinesq equations in Besov spaces with negative indices. Appl. Anal. 95(6), 1271–1279 (2016)
López-Somoza, L., Minhós, F.: On multipoint resonant problems on the half-line. Bound. Value Probl. 2019, Article ID 38 (2019). https://doi.org/10.1186/s13661-019-1153-9
Capitanelli, R., Fragapane, S., Vivaldi, M.A.: Regularity results for p-Laplacians in pre-fractal domains. Adv. Nonlinear Anal. 8(1), 1043–1056 (2019)
Jiang, W., Kosmatov, N.: Resonant p-Laplacian problems with functional boundary conditions. Bound. Value Probl. 2018, Article ID 72 (2018). https://doi.org/10.1186/s13661-018-0986-y
Yang, A.J., Ge, W.: Existence of symmetric solutions for a fourth-order multi-point boundary value problem with a p-Laplacian at resonance. J. Appl. Math. Comput. 29, 301–309 (2009)
Lin, X., Du, Z., Meng, F.: A note on a third-order multi-point boundary value problem at resonance. Math. Nachr. 284, 1690–1700 (2011)
Iyase, S.A., Imaga, O.F.: On a singular second-order multipoint boundary value problem at resonance. Int. J. Differ. Equ. (2017). https://doi.org/10.1155/2017/8579065
Iyase, S.A., Imaga, O.F.: Higher order boundary value problems with integral boundary conditions at resonance on the half-line. J. Niger. Math. Soc. 32(2), 168–183 (2019)
Jiang, W., Zhang, Y., Qiu, J.: The existence of solutions for p-Laplacian boundary value problems at resonance on the half-line. Bound. Value Probl. (2009). https://doi.org/10.1186/s13661-015-0439-9
Feng, H., Lian, H., Ge, W.: A symmetric solution of a multipoint boundary value problems with one-dimensional p-Laplacian at resonance. Nonlinear Anal. 69, 3964–3972 (2008)
Yang, A., Miao, C., Ge, W.: Solvability for a second-order nonlocal boundary value problems with a p-Laplacian at resonance on a half-line. Electron. J. Qual. Theory Differ. Equ. 2009, Article ID 19 (2009)
Lin, X., Zhang, Q.: Existence of solution for a p-Laplacian multi-point boundary value problem at resonance Qual. Theory Dyn. Syst. 17, 143–154 (2017). https://doi.org/10.1007/s12346-017-0259-7
Papageorgiou, N.S., Radulescu, V.D.: Qualitative phenomena for some classes of quasilinear elliptic equations with multiple resonance. Appl. Math. Optim. 69(3), 393–430 (2014)
Imaga, O.F., Edeki, S.O., Agboola, O.O.: On the solvability of a resonant p-Laplacian third-order integral m-point boundary value problem. IAENG Int. J. Appl. Math. 50(2), 256–261 (2020)
Jiang, W.: Solvability for p-Laplacian boundary value problem at resonance on the half-line. Bound. Value Probl. 2013, Article ID 207 (2013). https://doi.org/10.1186/1687-2770-2013-207
Ge, W., Ren, J.: An extension of Mawhin's continuation theorem and its application to boundary value problems with a p-Laplacian. Nonlinear Anal. 58, 477–488 (2004)
Ge, W.: Boundary Value Problems for Ordinary Nonlinear Differential Equations. Science Press, Beijing (2007) (in Chinese)
Kosmatov, N.: Multi-point boundary value problems on an unbounded domain at resonance. Nonlinear Anal. 68, 2158–2171 (2008)
The authors acknowledges Covenant University for the support received from them. The authors are also grateful to the referees for their valuable suggestions.
The authors received no specific funding for this research.
Department of Mathematics, Covenant University, Ota, Nigeria
O. F. Imaga & S. A. Iyase
O. F. Imaga
S. A. Iyase
OF conceived the idea. SA supervised the work. All authors discussed and contributed to the final manuscript.
Correspondence to O. F. Imaga.
The authors declare they have no competing interest.
Imaga, O.F., Iyase, S.A. Existence of solution for a resonant p-Laplacian second-order m-point boundary value problem on the half-line with two dimensional kernel. Bound Value Probl 2020, 114 (2020). https://doi.org/10.1186/s13661-020-01415-3
70K30
Coincidence degree
Half-line
Integral boundary value problem
p-Laplacian | CommonCrawl |
Long-term feeding with high plant protein based diets in gilthead seabream (Sparus aurata, L.) leads to changes in the inflammatory and immune related gene expression at intestinal level
Guillem Estruch ORCID: orcid.org/0000-0001-8722-05561,
Maria Carmen Collado2,
Raquel Monge-Ortiz1,
Ana Tomás-Vidal1,
Miguel Jover-Cerdá1,
David S Peñaranda1,
Gaspar Pérez Martínez2 &
Silvia Martínez-Llorens1
In order to ensure sustainability of aquaculture production of carnivourous fish species such as the gilthead seabream (Sparus aurata, L.), the impact of the inclusion of alternative protein sources to fishmeal, including plants, has been assessed. With the aim of evaluating long-term effects of vegetable diets on growth and intestinal status of the on-growing gilthead seabream (initial weight = 129 g), three experimental diets were tested: a strict plant protein-based diet (VM), a fishmeal based diet (FM) and a plant protein-based diet with 15% of marine ingredients (squid and krill meal) alternative to fishmeal (VM+). Intestines were sampled after 154 days. Besides studying growth parameters and survival, the gene expression related to inflammatory response, immune system, epithelia integrity and digestive process was analysed in the foregut and hindgut sections, as well as different histological parameters in the foregut.
There were no differences in growth performance (p = 0.2703) and feed utilization (p = 0.1536), although a greater fish mortality was recorded in the VM group (p = 0.0141). In addition, this group reported a lower expression in genes related to pro-inflammatory response, as Interleukine-1β (il1β, p = 0.0415), Interleukine-6 (il6, p = 0.0347) and cyclooxigenase-2 (cox2, p = 0.0014), immune-related genes as immunoglobulin M (igm, p = 0.0002) or bacterial defence genes as alkaline phosphatase (alp, p = 0.0069). In contrast, the VM+ group yielded similar survival rate to FM (p = 0.0141) and the gene expression patterns indicated a greater induction of the inflammatory and immune markers (il1β, cox2 and igm). However, major histological changes in gut were not detected.
Using plants as the unique source of protein on a long term basis, replacing fishmeal in aqua feeds for gilthead seabream, may have been the reason of a decrease in the level of different pro-inflammatory mediators (il1 β, il6 and cox2) and immune-related molecules (igm and alp), which reflects a possible lack of local immune response at the intestinal mucosa, explaining the higher mortality observed. Krill and squid meal inclusion in vegetable diets, even at low concentrations, provided an improvement in nutrition and survival parameters compared to strictly plant protein based diets as VM, maybe explained by the maintenance of an effective immune response throughout the assay.
Fishmeal replacement in feeds is one of the main challenges in aquaculture farming in order to ensure the sustainability of the production of aquaculture species, especially in carnivorous species [1]. Plant sources have been used as substitutes in order to reduce the use of fishmeal [2] and to develop more economical and environmentally sustainable feeds compared to fishmeal based diets [1, 3].
Tolerance to vegetable products depends on species [4]. In the case of gilthead seabream, although high or total replacements of fishmeal by vegetable meal have been successfully achieved in terms of growth [5, 6], detrimental effects on nutrient digestibility and absorption [7, 8] have also been reported. Moreover, histomorphological gut and liver alterations [4, 9,10,11], immune status disorders [9] or gut microbial imbalances [12] have been described. Thus, the use of certain agricultural by-products seems to ultimately lead to a lower feed conversion efficiency and an increase in both the susceptibility against diseases and bacterial and parasitic infections [13], which may be induced by an immune deficiency status or disruptions on the inflammatory response.
Hence, dietary and nutritional factors have a great influence on growth and immune response of fish [14]. Among other physiological processes, fish gut particularly plays a key role in the digestion and absorption of nutrients, in the immune response to potential pathogenic invasions and in the protection against environmental stressors [15]. The intestinal status in response to dietary changes has been widely assessed in fish, including gilthead seabream [16,17,18,19,20,21]. In particular, the impact of lowfishmeal diets on the intestinal physiology of different species has been assessed in different stages of the growing phase [22, 23].
A wide set of physiological parameters can be evaluated by using different techniques. Gene expression approaches allow to analyse different genes involved in different processes [24] including digestion (digestive enzymes, nutrient transporters), epithelial structure, inflammatory processes (cytokines and other proinflammatory mediators), and innate and adaptive immune response (mucins, genes codifying for antibodies), obtaining a snapshot of the whole response that can indeed provide hints and new insights to dietary impact on the intestinal status. On the other hand, histological assessment of the different gut layers can provide some valuable information on possible inflammatory reactions, as well as morphological adaptations to face with the dietary modifications [25].
In addition to detrimental effects associated to anti-nutritional factors [26, 27], whose impact depends on the tolerance of different species, fishmeal substitutions by great proportions of vegetable meals in fish diet could result in amino acid imbalances and palatability problems [1, 27], which could have an influence in the feed intake and negatively affect the fish performance [28]. In order to achieve the minimum requirements, diets with high fishmeal substitution usually need a supplement with synthetic amino acids that increases the price of the diet and could have different adverse effects in nutrient utilisation [29]. Nevertheless, the addition of complementary ingredients such as marine by-products, as opposed or in combination with the amino acid supplementation, seems to be more effective in order to achieve an ideal amino acid profile when alternative vegetable-based diets are used [28]. Indeed, marine by-products, including squid meal or krill meal, are regarded as a high quality protein source, since they show a balanced amino acid profile and contain a considerable amount of free amino acids [28]. Furthermore, these marine ingredients yield several profits, such as acting as feed-attractant that improves feed intake or offsetting some of the deficiencies observed with high plant protein diets for marine carnivorous fish [28, 30, 31].
This work focuses on the impact of a complete replacement of fishmeal during the on-growing period on the intestine of gilthead seabream through the gene expression study of a broad set of genes related to inflammatory response, immune system, gut epithelia integrity, digestive enzymes and peptide transporters. In addition, the effect of the inclusion of marine by-products (squid and krill meal) in seabream plant based diets as a source of marine protein was also assessed in terms of growth parameters and gene expression. The study was supplemented with histological analysis of the foregut, aiming to understand the possible effects in relation to nutrient absorption and inflammatory processes at the morphological level.
The experimental protocol was reviewed and approved by the Committee of Ethics and Animal Welfare of the Universitat Politècnica de València, following the Spanish Royal Decree 53/2013 and the European Directive 2010/63/UE on the protection of animals used for scientific purposes.
Fish were weighed individually every month during the feed assay, using clove oil with an 87% of eugenol (Guinama ®, Valencia, Spain) as an anaesthetic (1 mg/100 mL of water) to minimize their suffering.
At the end of the growth assay, fish were euthanized by decapitation, after fish were anesthetized with clove oil dissolved in water (1 mg/100 mL of water), thus minimizing their suffering.
Design of the experiment
Rearing system, fish and growth assay
The experiment was conducted at the Universitat Politècnica de València in a recirculating saltwater system (75 m3 capacity) with a rotary mechanical filter and a 6 m3 capacity gravity biofilter. Nine cylindrical fiberglass tanks with a capacity of 1750 L were used, and water temperature, salinity, dissolved oxygen and pH were as follows: 22.0 ± 0.52 °C, 30 ± 1.7 g/L, 6.5 ± 0.49 mg/L, 7.5–8.5. Water parameters were daily measured. All tanks had similar lighting conditions, with a natural photoperiod (from November to March, average of hours of light: 11 h).
The seabreams were provided by the fish farm PISCIMAR, in Burriana (Castelló, Spain). The feed was given by hand twice a day (at 9:00 and 17:00 h) up to an apparent satiation with a standard commercial (48% crude protein, 23% ether extract, 11% crude ash, 2% crude fibre and 14% nitrogen free extract) diet during the two-month acclimation period to laboratory conditions. The weekly feeding regimen consisted of six days of feeding and one day of fasting. Growth assay started with fish with an average weight of 129 ± 19 g.
Seabream were randomly distributed into 9 fiberglass tanks (twenty fish per tank), and three different experimental diets (a vegetable diet, VM; a fishmeal-based diet, FM and a vegetable diet with marine ingredients, VM+) were randomly assigned to three of them (n = 3). Feeding parameters remained the same as during the acclimatation period. The experiment finished when the fish achieved a commercial size, (average weight ~ 350 g), and fish were sacrifice afterwards, 154 days after the beginning of the assay.
Fish weight (g) and survival rate (%) were assessed monthly. Final weight (g) (FW), specific growth rate (% / day) (SGR), feed intake (g/ 100 g fish · day) (FI), feed conversion ratio (FCR), and survival (%) (S) were determined when the experiment was completed. The SGR, the FI and the FCR were obtained taking into account the reported monthly biomass of dead fish.
Diets were prepared as pellets by cooking-extrusion with a semi-industrial twin-screw extruder (CLEXTRAL BC-45, Firminy, St Etienne, France); located at Universitat Politècnica de València. The processing conditions were as follows: 0.63 g screw speed, 110 °C and 30–40 atm.
Three isonitrogenous and isoenergetic diets were formulated using commercial ingredients, whose proximal composition was previously analysed according to AOAC (Association of Official Agricultural Chemists) procedures. in the FM diet, the protein was provided by fishmeal, although wheat meal was incorporated as a source of carbohydrates. Synthetic amino acids were not included. The VM diet was based on a mixture of vegetable meals as a protein source and included synthetic amino acids in order to accomplish the minimum requirements of essential amino acids [32]. Finally, VM+ contained a mixture of vegetable meals similar to the VM diet one, but squid meal and krill meal were added to the feed at 10 and 5% level, respectively, reducing the concentration of free amino acid supplementation. These meals were obtained from different companies as by-products: squid meal was provided by Max Nollert (Utrecht, Netherlands) and krill meal by Ludan Renewable Energy (Valencia, Spain).
Amino acids of raw materials and experimental diets were analysed, prior to diet formulation, through a Waters HPLC system (Waters 474, Waters, Milford, MA, USA) consisting of two pumps (Model 515, Waters), an auto sampler (Model 717, Waters), a fluorescence detector (Model 474, Waters) and a temperature control module. Aminobutyric acid was added as an internal standard pattern before hydrolysation. The amino acids were derivatised with AQC (6-aminoquinolyl-N-hydroxysuccinimidyl carbamate). Methionine and cysteine were determined separately as methionine sulphone and cysteic acid after oxidation with performic acid. Amino acids were separated with a C-18 reverse-phase column Waters Acc. Tag (150 mm × 3.9 mm). Proximate composition and essential amino acids of different ingredients are shown in Table 1. The ingredients used, the proximate composition and the essential amino acids of the experimental feeds are included in Table 2.
Table 1 Proximal composition and essential amino acid profile of the different aqua feed ingredients
Table 2 Price, ingredients,proximal composition and essential amino acid profile of diets tested in the growth assay
A digestibility experiment was performed after the growth assay, using five randomly selected fish per experimental group and digestibility tanks of 250 l of capacity (one per experimental group). Apparent digestibility coefficient of the crude protein were obtained, according to the Guelph System Protocol [33], by the Chromium Oxide determination method. After two days of fasting, digestibility assay started and lasted 14 days. Fish were fed to satiation once a day (9:00 h) with the same experimental diets containing chromium oxide (50 g kg− 1) as an innert marker and uneaten food was then removed from the columns (15:00). Wet faeces were collected from decantation columns just before the next morning feeding and then dried at 60 °C for 48 h prior to analysis. After acid digestion, an atomic absorption spectrometer (Perkin Elmer 3300, Perkin Elmer, Boston, MA, USA) was used for Chromium oxide determination in duplicate in diets and faeces. Apparent digestibility coefficient (ADC) of the crude protein (CP) was calculated as follows (Eq. 1):
$$ {ADC}_N\ \left(\%\right)=100\cdotp \left(1-\left(\frac{\% marker\ in\ diet\cdotp \% CP\ in\ faeces}{\% marker\ in\ faeces\cdotp \% CP\ in\ diet}\right)\right) $$
Economic assessment
The Economic Conversion Rate (ECR) and the Economical Profit Index (EPI) [2] were calculated for each experimental group using Eqs. (2) and (3), respectively. The currency type for economic evaluations was the euro (€). The price of the diets was obtained from the individual prices of the different ingredients. Gilthead seabream sale price was 5.37 € Kg fish− 1, based on prices of the Spanish Wholesale market on January 2017. With the aim of showing the impact of the fish mortality on economic profit in on-growing phase, biomass of dead fish was considered and therefore was not included in the total final biomass, and the initial number of fish was used to standarize when the EPI was determined.
$$ \mathrm{ECR}\ \left(\text{\EUR} \cdot \mathrm{kg}\ {\mathrm{fish}}^{-1}\right)=\mathrm{FCR}\ \left(\mathrm{kg}\ \mathrm{diet}\cdot \mathrm{kg}\ {\mathrm{fish}}^{-1}\right)\cdot \mathrm{Price}\ \mathrm{of}\ \mathrm{diet}\ \left(\text{\EUR} \cdot \mathrm{kg}\ {\mathrm{diet}}^{-1}\right) $$
$$ \mathrm{EPI}\ \left(\text{\EUR} \cdot {\mathrm{fish}}^{-1}\right)=\frac{\mathrm{Final}\ \mathrm{biomass}\left(\mathrm{kg}\ \mathrm{fish}\right)\cdot \mathrm{Sale}\ \mathrm{price}\ \left(\text{\EUR} \cdot \mathrm{kg}\ {\mathrm{fish}}^{-1}\right)-\Delta \mathrm{biomass}\ \left(\mathrm{kg}\ \mathrm{fish}\right)\cdot \mathrm{ECR}\ \left(\text{\EUR} \cdot \mathrm{kg}\ {\mathrm{fish}}^{-1}\right)}{\mathrm{Initial}\ \mathrm{number}\ \mathrm{of}\ \mathrm{fish}} $$
In order to assess gene expression and histological changes throughout the intestinal tract, intestinal samples from three fish per tank were sampled at the end of the growth assay after one day of fasting (40 h after the last feed). Based on the separation on sections proposed in previous researches [34], three different sections were considered but only pieces of foregut (FG) and hindgut (HG) were collected and stored in RNA later(Ambion Inc., Huntingdon, UK) at 4 °C overnight and then at − 20 °C until RNA extraction. Pieces of FG section (two fish per tank, n = 6) were stored in phosphate buffered formalin (4%, pH 7.4) for the histological assessment.
RNA extraction and cDNA step
Total RNA was extracted from FG and HG tissues by traditional phenol/chloroform extraction, using TRIzol Reagent (Invitrogen, Spain), and then purified and treated with DNase I using NucleoSpin® RNA Clean-up XS kit (Macherey-Nagel, Düren, Germany), according to guide instructions. Total RNA concentration, quality and integrity were evaluated using a NanoDrop 2000C Spectrophotometer (Fisher Scientific SL, Spain) and samples were stored at − 80 °C until complementary DNA (cDNA) synthesis.
cDNAwas synthetized from 1 μg of total RNA input using the qScript cDNA Synthesis Kit (Quanta BioScience), according to the manufacturer's instructions, using the Applied Biosystems 2720 Thermal Cycler. The cycling conditions were 22 °C for 5 min, 42 °C for 30 min, and 85 °C for 5 min. Total RNA samples were stored at − 80 °C until gene expression was analysed.
Measurement of gene expression by SYBR green assay real time quantitative RT-PCR (qPCR)
Reference and target genes
Four candidate reference genes (ef1α, gapdh, rps18, βact; Table 3) were tested to be used as housekeeping genes in the gene expression assay. The stability of these genes was determined using six cDNA pooled samples, obtained each one from combine equally volumes of cDNA samples from the same section in a given experimental group. Ribosomal protein s18 (rps18) and β-actin (βact) were selected as reference genes for the normalization of gene expression based on the stability of its expression in the cDNA pools and the cDNA specificity in the amplification, confirmed by melting curve analysis [see Additional file 1].
Table 3 Primer sequences of candidate genes (reference and target genes) in the RT-qPCR assay
Expression stability of reference genes in individualized samples was determined using the BestKeeper program [35], which reports a standard deviation (SD[±Cq]) lower than 1 for both genes (0.54 for rps18 and 0.68 for βact, p < 0.05) and Cq arithmetic means of 20.19 ± 1.46 and 17.96 ± 1.6 for rps18 and βact, respectively. The BestKeeper's calculated variations in the reference genes are based on the arithmetic mean of the Cq values.
Eighteen candidate target genes (Table 3) were previously tested by RT-qPCR. The proinflammatory cytokines genes il1β, il6 and il8, and other proinflammatory molecules, as tnfα, casp1 AND cox2 were included due to their relevance as inflammation markers [16, 20]. Genes encoding different mucins (imuc, muc2, muc2L, muc13 and muc19), which contribute to protect the intestine epithelium against a broad spectrum of damages [19], and specific antibodies (igm) were also chosen to assess the response of the innate and adaptive immunity, respectively. A tight junction protein, such as ocl, and an essential component of microtubules such as tub [16] were included in the expression pretesting due to their involvement in the maintenance of the epithelial gut integrity. Regarding the selected genes encoding digestive enzymes and nutrient transporters, αamy and tryp are digestive enzymes responsible for hydrolysis of carbohydrates and proteins, respectively, and pept1 is a peptide transporter at the brush border membrane of the enterocytes with an important role in the intestinal absorption [36]. Finally, the gene expression of the alp, responsible of removing the phosphate groups of many different molecules [37], was also determined.
This preliminary gene expression test was performed using the cDNA pooled samples used in the reference gene evaluation [see Additional file 2]. Target genes for the further individualized assesment were selected based on their function, potential fold-change differences between diets and intestine segments (significant differences cannot be determined by an statistical analysis since n = 1), gene expression level and nonspecific amplifications. Later on, relative gene expression of the nine selected genes (il1β, il6, cox2, igm, imuc, ocl, pept1, tryp, alp) was determined at the FG and at the HG in nine fish per dietary treatment.
RT-PCR assay conditions
All qPCR assays and expression analyses were performed using the Applied Biosystems 7500 Real-Time PCR with SYBR® Green PCR Master Mix (ThermoFisher Scientific, Waltham, Massachusetts, USA). The total volume for every PCR reaction was 10 μL, performed from diluted (1:50) cDNA template (1 μL), forward and reverse primers (10 μM, 1 μL), SYBR® Green PCR Master Mix (5 μL) and nuclease-free water up to 10 μL.
After an initial Taq activation of polymerase at 95 °C for 10 min, 42 cycles of PCR were performed with the following cycling conditions: 95 °C for 10 s and 60 °C for 20 s in all genes, except for alp (with annealing and extension step at 55 °C). In order to evaluate assay specificity, a melting curve analysis was directly performed after PCR by slowly increasing the temperature (1 °C / min) from 60 to 95 °C, with a continuous registration of changes in fluorescent emission intensity.
The analysis of the results was carried out using the 2-ΔΔCt method [24]. The target gene expression quantification was expressed relative to the expression of the two reference genes (rps18 and βact). A cDNA pool from all the samples was included in each run and acted as a calibrator, and a non-template control for each primer pair, in which cDNA was replaced by water, was run on all plates. Reference and target genes in all samples were run in duplicate PCR reactions.
Histological analysis
Fragments of FG fixed in formalin were routinely dehydrated in ethanol, equilibrated in UltraClear (Bio-Optica Milano s. p. a., Milan, Italy), and embedded in paraffin according to standard histological techniques. Transverse sections were cut with a thickness of 5 μm with a microtome Shandom Hypercut (four sections per paraffin block were obtained) and dyed with the haematoxylin-eosine staining method. A total of 72 FG sections, obtained from 18 different paraffin blocks (n = 6), were analysed under the light microscope (Nikon, Phase Contrast Dry JAPAN 0.90), focusing on possible inflammatory changes and other disorders.
A combination of different criteria reported by several authors [7, 9, 38,39,40] was used to measure the following parameters at FG sections: serous layer (SL), muscular layer (ML), submucosa layer (SML), villi length (VL), villi thickness (VT) and lamina propria thickness (LP), and number of goblet cells per villus (GC). Six measurements per section in each parameter were performed and average means were obtained for each sample (n = 6). Moreover, a continuous scoring system (Fig. 1), ranging from 1 to 4, was used to assess the supranuclear vacuolization on the epithelia (V), the position of the nuclei of the enterocytes (EN) and the lymphocytic infiltration of the epithelial layer (EI), the lamina propria (LPI) and the submucosa (SMI) in each sample (n = 6).
Evaluation and scoring system used to assess histological parameters of gilthead seabream foregut. a Measurements performed in a foregut histological section (20×). b Detail of villi with a certain grade of infiltration of the lamina propia and the epithelia. Enterocytes nuclei were displaced in some cases. Epithelial vacuolization can also be observed in a normal grade (40×). c Enterocytes showed aligned nuclei in a basal position. Villi presented a low grade of infiltration of their lamina propia and of the epithelia, and low vacuolization. A certain grade of infiltration in the submucosa layer can be observed (20×). SL, ML, SML, VL, VT and number of GC were measured six times per section, and averages were obtained for each section (six sections per group, n = 6). V, EN, EI, LPI and SMI were assessed in each section (n = 6) using the following scoring system: V, normal (1) to hypervacuolated (4); EN, basal (1) to apical (4); EI, low (1) to markedly increased (4); LPI, low (1) to markedly increased (4); SMI, low (1) to markedly increased (4). SL, serous layer; ML, muscular layer; SML, submucosa layer; VL, villi length; VT, villi thickness; LP, lamina propria; GC, goblet cells; V, supranuclear absorptive vacuoles; EN, enterocytes nuclei; EI, epithelial infiltration; LPI, lamina propria infiltration; SMI, submucosa infiltration
Statistical data analyses were carried out with Statgraphics © Centurion XVI software (Statistical Graphics Corp., Rockville, MO, USA).
Differences in fish weight and survival between dietary groups were monthly evaluated by simple analysis of variance, considering the tank as the experimental unit. At the end of the growth trial, economic indeces (ECR and EPI) and livestock data (FW, SGR, FCR, FI and S) were subjected to simple variance analysis. Each group in the calculation represented the combined group of fish per single tank (triplicate tanks per treatment). Student Newman-Keuls test was used to assess specific differences among dietary groups at the 0.05 significance level. Descriptive statistics are shown as the mean ± pooled standard error of the mean (SEM).
Relative gene expression data was statistically analysed by two-way analysis of variance using Newman-Keuls test. Differences in expression were considered statistically significant when p < 0.05. Data was expressed with the mean and the standard error for each gut section and experimental group. Differences in the gene expression between sections within each group, between experimental groups, and between same sections in different dietary groups were determined.
Finally, histological measurements in foregut were showed as the mean ± standard error of the mean and it was analysed through an analysis of variance (ANOVA), with a Newman-Keuls test for the comparison of the means and a level of significance set at p < 0.05. Principal Component Analysis was used to analyse the histological scored parameters of gut (V, EN, EI, LPI and SMI). Statistical differences between experimental groups were estimated by ANOVA using the first and second Principal Components of the Principal Component Analysis, with a Newman-Keuls test (p < 0.05).
Economic indices
Statistically differences were determined in the ECR between the groups FM and VM (p = 0.0473), whilst the EPI was greater in the groups FM and VM+ (p = 0.0167) (Table 4). Differences in the ECR can be explained by the greater cost of the FM feed, while the lower number of fish at the end of the growth assay in the different tanks assigned to the VM treatment led to a lower EPI (p = 0.0167) in this dietary group.
Table 4 Growth and economic indices of seabream fed experimental diets at the end of the experiment
Growth assay and growth indices
Differences were observed in the average weight of fish after 112 days from the beginning of the growth assay (p = 0.0042), registering greater weight in those fish fed FM and VM+ than in fish fed VM (Fig. 2), although no significant differences were observed in subsequent sampling points and at the end of the feeding trial (Table 4). Survival rate of fish fed VM began to decrease after 112 days (p = 0.0332) of the experiment in comparison to the rates observed in the other two groups (FM and VM+). Survival rate continued decreasing at VM group as the growth trial progressed, but no disease signs were reported in dead fish. No differences were observed in the growth parameters, which are shown in Table 4.
Average weight (g) and survival rate (%) evolution of gilthead seabream along the assay period. Average weight mean and standard error (bars) and survival rate (line) of each experimental group were displayed in different colours (Black: VM; Grey: FM; White: VM+). Different superscripts on the bars indicate significant statistical differences in the average weight during the growth trial (p < 0.05). Data are means of triplicate groups (n = 20). Asterisks indicate the existence of significant differences in the survival rate along the assay at p < 0.05
Inflammation and immune system genes
The diet was determined as a significant factor affecting the expression of il1β, il6, cox2 and igm (Table 5). Fish fed VM and FM reported lower expression levels of il1β (Fig. 3a), cox2 (Fig. 3c) and IgM (Fig. 3d) in comparison to VM+ group, and a lower expression of il6 (Fig. 3b) was observed in the VM group. IgM and i-muc relative expression were affected by the section (Table 5): igm (Fig. 3d) had a higher expression in FG than in HG, specially in the group VM+, and i-muc (Fig. 3e) reported a remarkably higher expression in the HG.
Table 5 p-values* determined for diet, intestinal section and the interaction between both factors on the gene expression assay
Relative gene expression in the intestine of gilthead seabream fed different experimental diets. a Interleukine-1β (il1β); b Interleukine-6 (il6); c Cyclooxigenase-2 (cox2); d Intestinal Mucin (imuc); e Immunoglobulin M (igm). f Occludin (ocl); g Alkaline Phosphatase (alp); h Trypsin (tryp); i Peptide Transporter 1 (pept1). Bars represent relative gene expression (mean + standard error, n = 9), for each group, in the foregut (FG, black bars) and the hindgut (HG, grey bars). Superscript letters on the bars indicate differences between experimental groups in each section, at p < 0.05. Asterisks indicate differences between intestinal sections in each experimental group, at p < 0.05. Capital letters at the top of the graph indicate differences between experimental groups, regardless the intestinal section (n = 18, p < 0.05), when interaction between factors (diet and section) is not significative (Table 5)
Structural, enzyme and nutrient transport genes
Expression of ocl, alp and pept1 was influenced by the diet (Table 5). The VM group showed a lower expression of ocl (Fig. 3f) and alp (Fig. 3g) in comparison to the other two groups. Additionally, this group showed a lower expression of pept1 (Fig. 3i) in comparison to FM, but a greater expression compared to VM+.The relative gene expression of tryp (Fig. 3h) showed a large individual variation and no differences were found at diet (p = 0.4677) or section level (p = 0.2036). Finally, the expression of pept1 was also affected by the section (Table 5), being overexpressed in the FG compared to HG in all experimental groups (Fig. 3i).
Fish fed VM exhibited thinner villi and lamina propria than the fish fed the FM diet (Table 6). No differences were determined in the thickness of the three layers of the intestinal wall, nor in the length of the villi and the thickness of the lamina propria. The number of GC was increased in many of the fish fed the vegetable diets, especially for the fish fed VM+, although no significant differences were determined between dietary groups.
Table 6 Dietary effect on the histomorphology of the foregut of gilthead seabream
Assessment by scoring of different parameters of the gut (Fig. 4) revealed differences on the number of supranuclear absorptive vacuoles in the epithelial layer (V), the displacement of the enterocytes nuclei to apical positions (EN), and the degree of inflammatory cells infiltration in the submucosa layer (SMI). In these three assessed parameters, related with the inflammatory status,higher values were reported in the foregut sections belonging to VM and VM+ groups. Dispersion graph (Fig. 4) based on the First and Second component values obtained from the Principal Component Analysis, showed evident differences among the sections belonging to FM group and the sections from groups fed with plant-based diets. First Component of the Principal Component Analysis explained the 53,7% of the variability and was related with the degree of inflammation. In this sense, an ANOVA taking this First Component as a variable confirmed the existence of significant differences (p = 0.0063) between FM sections and sections of the groups of fish fed vegetable diets (VM and VM+).
Histological assessment of foregut sections of gilthead seabream fed different experimental diets, according to Fig. 1. Frequency bar charts showing differences in a supranuclear absorptive vacuolization (V), b enterocytes nuclei (EN), c enterocytes infiltration (EI), d lamina propria infiltration (LPI) and e submucosa infiltration (SMI). f Dispersion graph representing values of the first and second components for each foregut section assessed, obtained from the Principal component analysis of histological foregut scores according to diet. Only sections evaluated in all parameters were included in the Principal component analysis (n = 5 for VM and FM, n = 6 for VM+)
Summarizing, the VM group registered greater mortality and lower expression of il6, ocl, alp and pepT1 at intestinal level, while the VM+ group registered higher expression of il1β, cox2 and igm and lower expression of pept1. At histological level, both dietary groups (VM and VM+) reported thinner villi in the foregut compared to the FM group, an apical displacement of the enterocytes nuclei and higher vacuolization and cellular infiltration in the submucosa.
Zootecnical and economical parameters
Based on the evolution of mean weight and survival rates, the impact of the different feeds on the growth and survival can be observed from 112 days of the growth assay. However, although survival rates of fish fed the VM diet decreased from this time until the end of the trial, no significant differences in terms of mean weight were registered at the 140 and 154 days. Dead fish found in the VM tanks in the final stage of the assay were mainly the smallest fish in these tanks, which could explain the disappearance of significant differences in the mean weight at the end of the trial. Variability in the different experimental groups prevents differences in growth indices,, specially on the FCR. The less growth performance and greater mortality reported in the VM group are manifested in the economic indices. In this sense, FM and VM+ diets showed a similar efficiency under a economical point of view.
Intestinal status
Fish perfomance, including growth and survival, could be compromised by alterations in the intestinal homeostasis [11].
Fishmeal replacement by different vegetable sources has been associated with occurrence of gut inflammation in different species [41, 42]. Previous research have reported the up-regulation of the expression of different inflammatory markers [22, 23, 43, 44], higher grade of cell infiltration in the submucosa and changes in the expression of genes related with several processes, including antioxidant defences, cell differentiation, epithelial permeability, immunity and mucus production [22, 43] in response to moderate and high levels of plant protein sources inclusion.
In the present work, the group VM+, in which fishmeal was totally replaced by plant sources and squid and krill meal were included at 15% level, reported the up-regulation of pro-inflammatory markers (il1β and cox2) and igm compared to the FM group. The increase of gene expression in relation to inflammatory mediators has been linked to the regulation of the inflammation [20] and the activation of the innate immunity in response to infection [45], and it has been observed as a common response against low fishmeal based diets in several species [23]. Although IgT has been recently suggested as the main inmunoglobulin in the mucosal responses in gilthead seabream [46], IgM plays a key role in the gut mucosal immune reactions against pathogens or environmental stress, and also in the triggering of the humoral response [18, 47]. Additionally, high levels of IgM in the gut mucosa of fish fed with plant sources based diets have been reported [48]. Thus, the up-regulation of these genes could reflect that fish fed VM+ were developing an inflammatory process at the intestinal mucosa level, and are able to maintain an active local immune system after the growth trial.
In contrast, this up-regulation is not observed in the VM group, which showed a lower expression of different pro-inflammatory markers and other genes related with the immune defence (igm, alp) and the regulation of epithelial permeability (ocl), even lower compared to the FM group. Occludin has been suggested as a key protein in the epithelial integrity maintenance and in the regulation of permeability and other properties of the epithelial barrier [49], being a marker of integrity of the tight junction between the enterocytes, and its underexpression could suggest deficiencies in the regulation of the gut inflammatory response [16, 20]. Importance and physiologic function of alkaline phosphatase in digestion and a possible dietary regulation of its expression remain unclear, but it has been described as a gut mucosal defense factor, which seems to be implicated in the mucosal defence through the dephosphorylation of the lipopolysaccharides from the endotoxins of gram-negative bacteria [50]. Microbial lipopolysaccharides upregulates alp and its activity reduces toxicity of lipopolysaccharides [37], preventing from excessive inflammation in response to commensal microbes and helping to maintain the balance and integrity of the intestinal epithelial barrier [51].
The down-regulation of the expression of the genes could reflect that fish fed the VM diet were not triggering an inflammatory response at the end of the growth trial, as well as certain grade of immune mechanism suppression at local level, maybe evidencing an stress response. This depressed status could explain the higher mortality reported in this group and it could be linked with microbial imbalances that have been described in response to total fishmeal replacement in gilthead seabream [12].
In this sense, inclusion of great amounts of plant protein sources in aqua feeds for carnivorous species can be considered as a chronic stress factor, triggering a reponse by the host [52], which redirects more energy and resources to face with the stressor [53]. After long periods, immune mechanisms and other pathways that demand a continuous energy supply can be affected, leading to depressive or suppresive effects [52], leading to a chronic stress status. The suppresion of inflammatory and immune mechanisms in response to long term feeding high plant protein diets has been observed in previous research in different species [54, 55], including the gilthead seabream [9], and a differential response was also observed in different intestinal sections [22]..
Exposition to antinutrients included into the vegetable-based diets (VM+, and specially, in VM) throughout the growth assay could initially determine a prolongued inflammatory reaction in both experimental groups, demanding an additional energy expenditure that fish fed VM are not able tot sustain. Therefore, differences in the inflammatory and immune status of the gut between the VM and VM+ group at the end of the growth assay might be explained by dietary composition.The VM diet only includes vegetable meals, and synthetic amino acids were added in order to comply minimum amino acid requirements [32], while in the VM+ diet squid and krill meal —which have higher quality protein than vegetable meals and could improve essential amino acid profile in terms of bioavailability— were included at 10% and 5% levels, respectively, and the amount of synthetic amino acids was lower. This inclusion of marine by-products at 15% level could favour the maintenance of an active gut proinflammatory response along the experiment, while the VM diet could be a deficient diet from a nutritional point of view and fish could be unable to meet the energy requirements to sustain the inflammatory response during all the growth trial. Chitin, which is present in the krill meal at 4%, could increase the activity of the seabream immune system [56]. Composition in fiber, non-starch polysaccharides and fatty acids was very similar in both experimental diets and did not seem to be the reason of the observed differences.
The higher expression of pept1 at the FG of fish confirms that this is the main production site and the intestinal section in which most of the absorption of small peptides takes place in gilthead seabream [36]. The downregulation of the peptide transporter in the anterior intestine of fish fed VM, and especially of fish fed VM+, could be related to a greater presence of non-starch polysaccharides, saponins or other antinutrients in the vegetable based diets, which could alter the gut integrity and reduce the gastrointestinal passage of the food [57], and also to a lower digestibility of vegetable protein, which possibly contributes to a lower small peptide transport.
Finally, some possible minor inflammatory signs were observed at histological level in the present work in fish fed with both plant protein based diets (VM and VM+), which could suggest that fish fed VM could develop an inflammatory reaction at certain point of the growth assay, before a possible suppression of inflammatory and immune mechanisms. Modifications include a higher grade of vacuolization in the epithelia and an increase of cell infiltration in the submucosa layer. Presence of supranuclear absorptive vacuoles in the epithelial layer is normal, but their excessive accumulation could be related to changes in the function of enterocytes [58], and it is often accompanied with evident signs of inflammation, as immune cell infiltration, as it has been observed in previous studies in response to different experimental diets in different species and in different segments of the gut [4, 7, 59,60,61,62]. Moreover, villi with a great number of GC were observed in the gut of fish fed diets containing vegetable meals, especially on the VM+ group, which were not observed in the foregut of fish from the FM group., However, no statistical differences were determined, because villi with a reduced number of GC were observed in all experimental groups.. The increase in the number of GC has been noticed in rainbow trout [59], likewise in seabream fed with vegetable-based diet [6, 11], suggesting a possible alteration of secretory processes. GC secreted a mucus gel that covered the epithelium of the intestinal tract [63], so that the thicker mucus layer observed in fish fed vegetable based diets during the sampling process is consistent with these findings, although no differences were reported between experimental groups in the imuc expression in the HG, were it is constitutively expressed accordin to previous research [19]. However, no enteritis features in the FG were found, which is in accordance to previous studies [7]. In this sense, tolerance to antinutrients, which may be the cause of enteritis [59], seems to depend on species [4], and gilthead seabream seems to tolerate high levels of plant sources in diets without intestinal structural damage [9, 64], and only moderate changes, without pathological signs, have been observed in most research works [4, 7, 9, 25, 43, 44, 64]. In this sense, a higher degree of cellularity and the widening of the lamina propria -described as signs of inflammation- of fish fed vegetable diets, were not noticed in the present experiment, but similar observations were also made [6, 10, 11] in feeding trials with high levels of fishmeal replacement, so this point must be clarified.
Finally, thinner villi observed in the FG of fish fed with VM can affect the nutrients absorption capacity, although impact on growth may be more related with the allocation of energy to face with an prolongued inflammatory status than with histomoprhological changes. However, a similar effect with great amounts of fishmeal replacement by plant sources has been observed [10] and further investigation should be also performed on this issue to explain that response.
Total replacement of fishmeal by vegetable protein sources in diets for the on-growing of gilthead seabream had a negative impact on long-term fish survival under the experimental conditions, maybe caused by a lack of gut mucosal immune response derived from a lingering poor nutritional status. The inclusion of squid and krill meal in vegetable-based diets seemed to produce a long-term inflammation response in the gut, but no negative effects on fish survival were reported. However, development of vegetable-based diets that do not cause gut inflammatory reactions is needed in order to ensure, not only growth and survival, but also health status and welfare of fish.
alp :
ANOVA:
casp1 :
Caspase 1
cox2 :
Economic Conversion Rate
ef1α :
Elongation Factor 1α
EI:
Lymphocytic infiltration of the epithelilal layer
EPI:
Economical Profit Index
FCR:
FG:
Foregut
FI:
Feed intake
FM:
Fishmeal-based diet
FW:
Final weight
gapdh :
Glyceraldehide 3-phosphate dehydrogenase
GC:
Goblet cells
HG:
Hindgut
igm :
Immunoglobulin M
il1β :
il6 :
imuc :
Intestinal Mucin
Lamina propria thickness
LPI:
Lymphocytic infiltration of the lamina propria
ML:
Muscular layer
muc13 :
Mucin 13
muc2 :
Mucin 2
muc2L :
Mucin 2-like
Position of the nuclei of the enterocytes
ocl :
Occludin
pept1 :
Peptide Transporter 1
rps18 :
Ribosomal protein S18
SGR:
Specific growth rate
Serous layer
SMI:
Lymphocytic infiltration of the submucosa
SML:
Submucosa layer
tnfα :
tryp :
tub :
V:
Supranuclear vacuolization
VL:
Villi length
VM + :
Vegetable diet with marine ingredients
VM:
Vegetable diet
VT:
Villi thickness
αamy :
α-Amylase
βact :
β-Actin
Hardy RW. Utilization of plant proteins in fish diets: effects of global demand and supplies of fishmeal. Aquac Res. 2010;41:770–6.
Martínez-Llorens S, Moñino AV, Vidal AT, Salvador VJM, Pla Torres M, Jover Cerdá M, et al. Soybean meal as a protein source in gilthead sea bream (Sparus aurata L.) diets: effects on growth and nutrient utilization. Aquac Res. 2007;38(1):82–90.
Tacon AGJ, Metian M. Global overview on the use of fish meal and fish oil in industrially compounded aquafeeds: trends and future prospects. Aquaculture. 2008;285:146–58.
Bonaldo A, Roem AJ, Fagioli P, Pecchini A, Cipollini I, Gatta PP. Influence of dietary levels of soybean meal on the performance and gut histology of gilthead sea bream (Sparus aurata L.) and European sea bass (Dicentrarchus labrax L.). Aquac Res. 2008;39(9):970–8.
Kissil G, Lupatsch I. Successful replacement of fishmeal by plant proteins in diets for the gilthead seabream, Sparus Aurata L. Isr J Aquac – Bamidgeh. 2004;56(3):188–99.
Monge-Ortíz R, Martínez-Llorens S, Márquez L, Moyano FJ, Jover-Cerdá M, Tomás-Vidal A. Potential use of high levels of vegetal proteins in diets for market-sized gilthead sea bream (Sparus aurata). Arch Anim Nutr. 2016;70(2):155–72.
Santigosa E, Sánchez J, Médale F, Kaushik S, Pérez-Sánchez J, Gallardo MA. Modifications of digestive enzymes in trout (Oncorhynchus mykiss) and sea bream (Sparus aurata) in response to dietary fish meal replacement by plant protein sources. Aquaculture. 2008;282:68–74.
Santigosa E, García-Meilán I, Valentin JM, Pérez-Sánchez J, Médale F, Kaushik S, et al. Modifications of intestinal nutrient absorption in response to dietary fish meal replacement by plant protein sources in sea bream (Sparus aurata) and rainbow trout (Onchorynchus mykiss). Aquaculture. 2011;317:146–54.
Sitjá-Bobadilla A, Peña-Llopis S, Gómez-Requeni P, Médale F, Kaushik S, Pérez-Sánchez J. Effect of fish meal replacement by plant protein sources on non-specific defence mechanisms and oxidative stress in gilthead sea bream (Sparus aurata). Aquaculture. 2005;249:387–400.
Martínez-Llorens S, Baeza-Ariño R, Nogales-Mérida S, Jover-Cerdá M, Tomás-Vidal A. Carob seed germ meal as a partial substitute in gilthead sea bream (Sparus aurata) diets: amino acid retention, digestibility, gut and liver histology. Aquaculture. 2012;338-341:124–33.
Baeza-Ariño R, Martínez-Llorens S, Nogales-Mérida S, Jover-Cerda M, Tomás-Vidal A. Study of liver and gut alterations in sea bream, Sparus aurata L., fed a mixture of vegetable protein concentrates. Aquac Res. 2014;47(2):460–71.
Estruch G, Collado MC, Peñaranda DS, Tomás Vidal A, Jover Cerdá M, Pérez Martínez G, et al. Impact of fishmeal replacement in diets for gilthead sea bream (Sparus aurata) on the gastrointestinal microbiota determined by pyrosequencing the 16S rRNA gene. PLoS One. 2015;10(8):e0136389. https://doi.org/10.1371/journal.pone.0136389.
Fekete SG, Kellems RO. Interrelationship of feeding with immunity and parasitic infection: a review. Vet Med. 2007;52(4):131–43.
Kiron V. Fish immune system and its nutritional modulation for preventive health care. Anim Feed Sci Technol. 2012;173(1–2):111–33.
Minghetti M, Drieschner C, Bramaz N, Schug H, Schirmer K. A fish intestinal epithelial barrier model established from the rainbow trout (Oncorhynchus mykiss) cell line, RTgutGC. Cell Biol Toxicol. 2017;33:539–55.
Cerezuela R, Meseguer J, Esteban MÁ. Effects of dietary inulin, Bacillus subtilis and microalgae on intestinal gene expression in gilthead seabream (Sparus aurata L.). Fish Shellfish Immunol. 2013;34(3):843–8.
Couto A, Kortner TM, Penn M, Bakke AM, Krogdahl O-TA, et al. Effects of dietary soy saponins and phytosterols on gilthead sea bream (Sparus aurata) during the on-growing period. Anim Feed Sci Technol. 2014;198:203–14.
Estensoro I, Calduch-Giner JA, Kaushik S, Pérez-Sánchez J, Sitjá-Bobadilla A. Modulation of the IgM gene expression and IgM immunoreactive cell distribution by the nutritional background in gilthead sea bream (Sparus aurata) challenged with Enteromyxum leei (Myxozoa). Fish Shellfish Immunol. 2012;33(2):401–10.
Pérez-Sánchez J, Estensoro I, Redondo MJ, Calduch-Giner JA, Kaushik S, Sitjà-Bobadilla A. Mucins as diagnostic and prognostic biomarkers in a fish-parasite model: transcriptional and functional analysis. PLoS One. 2013;8(6):e65457.
Reyes-Becerril M, Guardiola F, Rojas M, Ascencio-Valle F, Esteban MÁ. Dietary administration of microalgae Navicula sp. affects immune status and gene expression of gilthead seabream (Sparus aurata). Fish Shellfish Immunol. 2013;35(3):883–9.
Pérez-Sánchez J, Benedito-Palos L, Estensoro I, Petropoulos Y, Calduch-Giner JA, Browdy CL, et al. Effects of dietary NEXT ENHANCE ® 150 on growth performance and expression of immune and intestinal integrity related genes in gilthead sea bream (Sparus aurata L.). Fish Shellfish Immunol. 2015;44:117–28.
Estensoro I, Ballester-Lozano G, Benedito-Palos L, Grammes F, Martos-Sitcha JA, Mydland L-T, et al. Dietary butyrate helps to restore the intestinal status of a marine teleost (Sparus aurata) fed extreme diets low in fish meal and fish oil. PLoS One. 2016;11(11):1–21.
Torrecillas S, Caballero MJ, Mompel D, Montero D, Zamorano MJ, Robaina L, et al. Disease resistance and response against Vibrio anguillarum intestinal infection in European seabass (Dicentrarchus labrax) fed low fish meal and fish oil diets. Fish Shellfish Immunol. 2017;67:302–11.
Schmittgen TD, Livak KJ. Analyzing real-time PCR data by the comparative C T method. Nat Protoc. 2008;3(6):1101–8.
Omnes MH, Silva FCP, Moriceau J, Aguirre P, Kaushik S, Gatesoupe F-J. Influence of lupin and rapeseed meals on the integrity of digestive tract and organs in gilthead seabream (Sparus aurata L.) and goldfish (Carassius auratus L.) juveniles. Aquac Nutr. 2015;21:223–33.
Francis G, Makkar HPS, Becker K. Antinutritional factors present in plant-derived alternate fish feed ingredients and their effects in fish. Aquaculture. 2001;199:197–227.
Gatlin DM III, Barrows FT, Brown P, Dabrowski K, Gaylord TG, Hardy RW, et al. Expanding the utilization of sustainable plant products in aquafeeds: a review. Aquac Res. 2007;38:551–79.
Kader MA, Bulbul M, Koshio S, Ishikawa M, Yokoyama S, Nguyen BT, et al. Effect of complete replacement of fishmeal by dehulled soybean meal with crude attractants supplementation in diets for red sea bream, Pagrus major. Aquaculture. 2012;350-353:109–16.
Gómez-Requeni P, Mingarro M, Calduch-Giner JA, Médale F, Martin SAM, Houlihan DF, et al. Protein growth performance, amino acid utilisation and somatotropic axis responsiveness to fish meal replacement by plant protein sources in gilthead sea bream (Sparus aurata). Aquaculture. 2004;232(1–4):493–510.
Kader MA, Koshio S, Ishikawa M, Yokoyama S, Bulbul M. Supplemental effects of some crude ingredients in improving nutritive values of low fishmeal diets for red sea bream, Pagrus major. Aquaculture. 2010;308(3–4):136–44.
Mai K, Li H, Ai Q, Duan Q, Xu W, Zhang C, et al. Effects of dietary squid viscera meal on growth and cadmium accumulation in tissues of Japanese seabass, Lateolabrax japonicus (Cuvier 1828). Aquac Res. 2006;37(11):1063–9.
Peres H, Oliva-Teles A. The optimum dietary essential amino acid profile for gilthead seabream (Sparus aurata) juveniles. Aquaculture. 2009;296(1–2):81–6.
Cho CY, Slinger SJ, Bayley HS. Bioenergetics of salmonid fishes: energy intake, expenditure and productivity. Comp Biochem Physiol Part B. 1982;73(1):25–41.
Venou B, Alexis MN, Fountoulaki E, Haralabous J. Effects of extrusion and inclusion level of soybean meal on diet digestibility , performance and nutrient utilization of gilthead sea bream ( Sparus aurata ). Aquaculture. 2006;261:343–56.
Pfaffl MW, Tichopad A, Prgomet C, Neuvians TP. Determination of stable housekeeping genes, differentially regulated target genes and sample integrity: BestKeeper-excel-based tool using pair-wise correlations. Biotechnol Lett. 2004;26:509–15.
Terova G, Robaina L, Izquierdo M, Cattaneo A, Molinari S, Bernardini G, et al. PepT1 mRNA expression levels in sea bream (Sparus aurata) fed different plant protein sources. Springerplus. 2013;2:17.
Bates JM, Akerlund J, Mittge E, Guillemin K. Intestinal alkaline phosphatase detoxifies lipopolysaccharide and prevents inflammation in zebrafish in response to the gut microbiota. Cell Host Microbe. 2007;2(6):371–82.
Adamidou S, Nengas I, Henry M, Grigorakis K, Rigos G, Nikolopoulou D, et al. Growth, feed utilization, health and organoleptic characteristics of European seabass (Dicentrarchus labrax) fed extruded diets including low and high levels of three different legumes. Aquaculture. 2009;293(3–4):263–71.
Daprà F, Gai F, Costanzo MT, Maricchiolo G, Micale V, Sicuro B, et al. Rice protein-concentrate meal as a potential dietary ingredient in practical diets for blackspot seabream Pagellus bogaraveo: a histological and enzymatic investigation. J Fish Biol. 2009;74(4):773–89.
Overland M, Sorensen M, Storebakken T, Penn M, Krogdahl A, Skrede A. Pea protein concentrate substituting fish meal or soybean meal in diets for Atlantic salmon (Salmo salar)-effect on growth performance, nutrient digestibility, carcass composition, gut health, and physical feed quality. Aquaculture. 2009;288(3–4):305–11.
Penn MH, Bendiksen EA, Campbell P, Krogdahl AS. High level of dietary pea protein concentrate induces enteropathy in Atlantic salmon (Salmo salar L.). Aquaculture. 2011;310(3–4):267–73.
Hedrera MI, Galdames JA, Jimenez-Reyes MF, Reyes AE, Avendaño-Herrera R, Romero J, et al. Soybean meal induces intestinal inflammation in zebrafish larvae. PLoS One. 2013;8(7):1–10.
Kokou F, Sarropoulou E, Cotou E, Rigos G, Henry M, Alexis M. Effects of fish meal replacement by a soybean protein on growth, histology, selected immune and oxidative status markers of Gilthead Sea bream, Sparus aurata. J World Aquac Soc. 2015;46(2):115–28.
Kokou F, Sarropoulou E, Cotou E, Kentouri M, Alexis M, Rigos G. Effects of graded dietary levels of soy protein concentrate supplemented with methionine and phosphate on the immune and antioxidant responses of gilthead sea bream (Sparus aurata L.). Fish Shellfish Immunol. 2017;64:111–21.
Calduch-Giner JA, Sitjà-Bobadilla A, Davey GC, Cairns MT, Kaushik S, Pérez-Sánchez J. Dietary vegetable oils do not alter the intestine transcriptome of gilthead sea bream (Sparus aurata), but modulate the transcriptomic response to infection with Enteromyxum leei. BMC Genomics. 2012;13(1):470.
Piazzon MC, Galindo-Villegas J, Pereiro P, Estensoro I, Calduch-Giner JA, Gómez-Casado E, et al. Differential modulation of IgT and IgM upon parasitic, bacterial, viral, and dietary challenges in a perciform fish. Front Immunol. 2016;7. Article 637. https://doi.org/10.3389/fimmu.2016.00637.
Salinas I, Zhang Y, Sunyer JO. Mucosal immunoglobulins and B cells of teleost fish. Dev Comp Immunol. 2011;35(12):1346–65.
Krogdahl A, Bakke-McKellep AM, Roed KH, Baeverfjord G. Feeding Atlantic salmon Salmo salar L. soybean products: effects on disease resistance (furunculosis), and lysozyme and IgM levels in the intestinal mucosa. Aquac Nutr. 2000;6:77–84.
Chasiotis H, Effendi JC, Kelly SP. Occludin expression in goldfish held in ion-poor water. J Comp Physiol B Biochem Syst Environ Physiol. 2009;179(2):145–54.
Chen KT, Malo MS, Beasley-Topliffe LK, Poelstra K, Millan JL, Mostafa G, et al. A role for intestinal alkaline phosphatase in the maintenance of local gut immunity. Dig Dis Sci. 2011;56(4):1020–7.
Vaishnava S, Hooper LV. Alkaline phosphatase: keeping the peace at the gut epithelial surface. Cell Host Microbe. 2007;2(6):365–7.
Tort L. Stress and immune modulation in fish. Dev Comp Immunol [internet]. Elsevier Ltd. 2011;35(12):1366–75.
Martin SAM, Król E. Nutrigenomics and immune function in fish: new insights from omics technologies. Dev Comp Immunol. 2017;75:86–98.
Burrells C, Williams PD, Southgate PJ, Crampton VO. Immunological , physiological and pathological responses of rainbow trout (Oncorhynchus mykiss) to increasing dietary concentrations of soybean proteins. Vet Immunol Immunopathol. 1999;72:277–88.
Sahlmann C, Sutherland BJG, Kortner TM, Koop BF, Krogdahl Å, Bakke AM. Early response of gene expression in the distal intestine of Atlantic salmon (Salmo salar L.) during the development of soybean meal induced enteritis. Fish Shellfish Immunol. 2013;34(2):599–609.
Esteban MÁ, Cuesta A, Ortuño J, Meseguer J. Immunomodulatory effects of dietary intake of chitin on gilthead seabream ( Sparus aurata L .) innate immune system. Fish Shellfish Immunol. 2001;11:303–15.
Storebakken T, Kvien IS, Shearer KD, Grisdale-Helland B, Helland SJ. Estimation of gastrointestinal evacuation rate in Atlantic salmon (Salmo salar) using inert markers and collection of faeces by sieving: evacuation of diets with fish meal, soybean meal or bacterial meal. Aquaculture. 1999;172(3–4):291–9.
Olsen RE, Myklebust R, Ringø E, Mayhew TM. The influences of dietary linseed oil and saturated fatty acids on caecal enterocytes in Arctic char (Salvelinus alpinus L.): a quantitative ultrastructural study. Fish Physiol Biochem. 2000;22(3):207–16.
Heikkinen J, Vielma J, Kemiläinen O, Tiirola M, Eskelinen P, Kiuru T, et al. Effects of soybean meal based diet on growth performance, gut histopathology and intestinal microbiota of juvenile rainbow trout (Oncorhynchus mykiss). Aquaculture. 2006;261(1):259–68.
Krogdahl A, Bakke-McKellep AM, Baeverfjord G. Effects of graded levels of standard soybean meal on intestinal structure, mucosal enzyme activities, and pancreatic response in Atlantic salmon (Salmo salar L.). Aquac Nutr. 2003;9:361–71.
Cerezuela R, Fumanal M, Tapia-Paniagua ST, Meseguer J, Moriñigo MA, Esteban MA. Changes in intestinal morphology and microbiota caused by dietary administration of inulin and Bacillus subtilis in gilthead sea bream (Sparus aurata L.) specimens. Fish Shellfish Immunol. 2013;34(5):1063–70.
Cerezuela R, Fumanal M, Tapia-Paniagua ST, Meseguer J, Moriñigo MÁ, Esteban MÁ. Histological alterations and microbial ecology of the intestine in gilthead seabream (Sparus aurata L.) fed dietary probiotics and microalgae. Cell Tissue Res. 2012;350(3):477–89.
Deplancke B, Gaskins HR. Microbial modulation of innate defense: goblet cells and the intestinal mucus layer. Am J Clin Nutr. 2001;73(suppl):1131S–41S.
Kokou F, Rigos G, Henry M, Kentouri M, Alexis M. Growth performance, feed utilization and non-specific immune response of gilthead sea bream (Sparus aurata L.) fed graded levels of a bioprocessed soybean meal. Aquaculture. 2012;364-365:74–81.
The first author was supported by a contract-grant (Contrato Pre-doctoral para la Formación de Profesorado Universitario) from Subprogramas de Formación y Movilidad within the Programa Estatal de Promoción del Talento y su Empleabilidad of the Ministerio de Educación, Cultura y Deporte of Spain.
The research has been partially funded by Vicerrectorat d'Investigació, Innovació i Transferència of the Universitat Politècnica de València, which belongs to the project Aquaculture feed without fishmeal (SP20120603). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.
The datasets during the current study are available from the corresponding authors on reasonable request.
Aquaculture and Biodiversity Research Group, Institute of Science and Animal Technology, (ICTA), Universitat Politècnica de València, Camino de Vera s/n, 46022, Valencia, Spain
Guillem Estruch, Raquel Monge-Ortiz, Ana Tomás-Vidal, Miguel Jover-Cerdá, David S Peñaranda & Silvia Martínez-Llorens
Institute of Agrochemistry and Food Technology, Department of Biotechnology, Spanish National Research Council (IATA-CSIC), Av. Agustin Escardino 7, 46980, Paterna, Spain
Maria Carmen Collado & Gaspar Pérez Martínez
Guillem Estruch
Maria Carmen Collado
Raquel Monge-Ortiz
Ana Tomás-Vidal
Miguel Jover-Cerdá
David S Peñaranda
Gaspar Pérez Martínez
Silvia Martínez-Llorens
MJC, ATV, GPM and SML designed the assay. GE, MCC and RMO carried out the experiments. GE analysed the data. GE, DSP and SML did the manuscript and discussed the results. All authors read and approved the final manuscript.
Correspondence to Guillem Estruch.
Cq values reported in cDNA pooled samples when evaluating candidate reference genes. It Includes Cq determined for different candidate reference genes in six different cDNA pooled samples, and the average and standard desviation. (XLSX 9 kb)
Relative gene expression of candidate target genes in cDNA pooled samples. A) Interleukine-1β (il1β), Interleukin-6 (il6) and Interleukine-8 (il8); B) Tumor Necrosis Factor–α (tnfα), Caspase 1 (casp1), Cyclooxigenase-2 (cox2); C) Mucin 2 (muc2), Mucin 2-like (muc2L), Mucin 13 (muc13); D) Intestinal Mucin (imuc), Mucin 18 (muc18), Immunoglobulin M (igm); E) Occludin (ocl) and Tubuline (tub); F) α-Amylase (αamy) and Alkaline Phosphatase (alp); G) Trypsin (tryp) and Peptide Transporter 1 (pept1). Different genes are represented with different colours. Bars represent relative gene expression of cDNA pools (one per section and treatment), in the foregut (FG) and the hindgut (HG). cDNA pool of the foregut of fish fed VM was used as a calibrator. (TIF 169 kb)
Estruch, G., Collado, M.C., Monge-Ortiz, R. et al. Long-term feeding with high plant protein based diets in gilthead seabream (Sparus aurata, L.) leads to changes in the inflammatory and immune related gene expression at intestinal level. BMC Vet Res 14, 302 (2018). https://doi.org/10.1186/s12917-018-1626-6
Gilthead seabream
Vegetable meal
Squid meal
Krill meal | CommonCrawl |
Loop Invariants as Tautologies
Would it be correct to characterize loop invariants as a type of tautology? I ask since the invariant must basically always be true, before the loop starts, before each iteration and after the loop terminates. I realize that there is the possibility that the invariant could become false during the body of the loop. But since inside the loop "doesn't count" is it fair to characterize the invariant as a tautology?
algorithm-analysis logic correctness-proof loop-invariants program-correctness
Raphael♦
Robert S. BarnesRobert S. Barnes
A Tautology is a formula (in a certain logic) that is true under every model of that logic. That is, it is equivalent to the formula "$True$".
A loop invariant, however, is a certain claim that is usually true under some models, and false under others (a model in this case is an algorithm). Then, you prove that the invariant is true under your specific model.
If you add axioms to your logic that forces then only model to be the one of your specific program, then indeed this would be a tautology. But such a process (adding axioms and proving that something is a tautology), is what is more commonly known as "proof". (For clarification: even if you add enough axioms, you may not be able to prove your claim, even if it's a tautology, in case of incomplete systems).
For example, consider a loop that increases a variable $i$ by $1$. An invariant of the loop may be that if before the loop $i>0$, then after the loop $i>0$. Indeed, this loop satisfies it. But it is not a tautology, since we can come up with other loops that do not satisfy it.
ShaullShaull
$\begingroup$ But since loop invariants are used typically in the context of proving the correctness of a specific loop / model then wouldn't it be correct, in that context, to call it a tautology? That seems to be what you're saying anyways in your third paragraph. $\endgroup$ – Robert S. Barnes Mar 12 '13 at 8:20
$\begingroup$ @RobertS.Barnes: If the loop doesn't satisfy the invariant, then it won't be true, thus it cannot be a tautology. $\endgroup$ – Dave Clarke Mar 12 '13 at 8:28
$\begingroup$ Mathematically, after axiomatically forcing the specific loop - it's ok to say it. But it is really not standard (or useful). It won't save you any work, or make anything clearer. $\endgroup$ – Shaull Mar 12 '13 at 8:30
$\begingroup$ OK, I personally find it useful to be able to think about it in these terms - to each his own :-) $\endgroup$ – Robert S. Barnes Mar 12 '13 at 8:53
$\begingroup$ @RobertS.Barnes Because the inv is not always true, which is the defining property of a tautology. Say you have a logic $T$ and assumptions $G$, then $P$ is a tautology iff $T, G \vdash P$ regardless of the assumptions in $G$. That is not true for non-trivial loop invariants, which will depend on program-specific assumptions. Even if you consider only the loop body it is still not a tautology, because an invariant need not hold throughout the body. You can of course call it a tautology, but that only causes confusion since it is not in line with the common understanding of tautologies. $\endgroup$ – Malte Schwerhoff Mar 12 '13 at 11:23
The word tautology is a technical word. The following is a tautology of classical propositional logic.
$\vdash p \lor \neg p$
When interpreted over the natural numbers, the following is a theorem.
$ (\mathbb{N},<)\vdash \forall x. \exists y. x < y$
But we do not say it is a tautology in the strict logical sense of the word because there are structures where this is not true.
Considering $S = \{a,b\}$, $<$ defined as $\{(a,b)\}$, we have $(S,<) \not \vdash \forall x. \exists y. x < y$
Similarly, if you think of a loop $P$ as implicitly defining the axioms of a theory, then a loop invariant $I$ as satisfying
$P \vdash \text{Every execution satisfies } I $
Thus, in exactly the same way as the existence of successors is a theorem of arithmetic, a loop invariant is a theorem of a logical theory defined by the program. A loop invariant is not a tautology in the standard mathematical sense of the word tautology. A tautology in this context would satisfy
$ \vdash \text{Every execution satisfies } I $
From which we can conclude that every tautology is a loop invariant, but not every loop invariant is a tautology.
Vijay DVijay D
I am guessing that, by "tautology," you mean a property that is true in all states. (I have seen some Lecturers use the term in that way, e.g., $x > 1 \Longrightarrow x > 0$, which is true in all states no matter what $x$ is, might be called a "tautology". The technical definition of "tautology" in logic is more narrow, but I will continue to use your terminology.)
A loop invariant is only true at a particular program point in the loop. It is true for every state encountered at that point, but it might be false for states encountered at other program points (inside the loop as well as outside the loop). So, clearly, it is not a "tautology" in the sense I stated above.
However, there is an interesting proof rule formulated in Reynolds's extension of Hoare Logic. If, in a particular piece of code, there are no operations that affect the truth/falsity of an assertion, and we know that the assertion is true at the beginning of the code, then we can pretend that the assertion is a "tautology" in the middle of that code.
A good example of this is a binary search procedure for an array $A$. Before the procedure starts, the pre-condition states that the array is sorted. Inside the binary search procedure, we don't do anything to modify the array. So, it will continue to be sorted throughout the procedure. Reynolds's rule says that, for the duration of the procedure, we can pretend that "$A$ is sorted" is a "tautology". This is a useful trick to use. Without it we would need to add "$A$ is sorted" in every assertion in the middle of the procedure, and we can see that it is quite pointless to keep repeating this silly condition because we are never modifying the array. Reynolds's rule allows to avoid the silliness.
For interesting applications of this rule, see the Chapter 5 of Reynolds's Craft of Programming.
Uday ReddyUday Reddy
Not the answer you're looking for? Browse other questions tagged algorithm-analysis logic correctness-proof loop-invariants program-correctness or ask your own question.
Loop invariants?
Proof of linear search?
loop invariant proof
Proof of Program Correctness - Loop Invariants
Understanding Log(n) Loop Invariant
Loop invariant for a division algorithm
Proving correctness of an iterative Fibonacci algorithm
Loop invariant for
Finding a strong loop invariant
Iterative Fibonacci algorithm correctness proof, finding loop invariants | CommonCrawl |
Uspekhi Matematicheskikh Nauk
Uspekhi Mat. Nauk:
Uspekhi Mat. Nauk, 2017, Volume 72, Issue 2(434), Pages 67–146 (Mi umn9763)
This article is cited in 11 scientific papers (total in 11 papers)
The theory of filtrations of subalgebras, standardness, and independence
A. M. Vershikabc
a St. Petersburg Department of the Steklov Mathematical Institute
b St. Petersburg State University
c Institute for Information Transmission Problems
Abstract: This survey is devoted to the combinatorial and metric theory of filtrations: decreasing sequences of $\sigma$-algebras in measure spaces or decreasing sequences of subalgebras of certain algebras. One of the key notions, that of standardness, plays the role of a generalization of the notion of the independence of a sequence of random variables. Questions are discussed on the possibility of classifying filtrations, on their invariants, and on various connections with problems in algebra, dynamics, and combinatorics.
Bibliography: 101 titles.
Keywords: filtrations, $\sigma$-algebras, independence, standardness, graded graphs, central measures.
Funding Agency Grant Number
Russian Science Foundation 14-11-00581
Partially supported by the Russian Science Foundation (grant no. 14-11-00581).
DOI: https://doi.org/10.4213/rm9763
Full text: PDF file (1196 kB)
Russian Mathematical Surveys, 2017, 72:2, 257–333
UDC: 517.518
MSC: 05A05, 37A60, 37M99, 28A06, 60A10
Revised: 15.02.2017
Citation: A. M. Vershik, "The theory of filtrations of subalgebras, standardness, and independence", Uspekhi Mat. Nauk, 72:2(434) (2017), 67–146; Russian Math. Surveys, 72:2 (2017), 257–333
\Bibitem{Ver17}
\by A.~M.~Vershik
\paper The theory of filtrations of subalgebras, standardness, and independence
\jour Uspekhi Mat. Nauk
\issue 2(434)
\pages 67--146
\mathnet{http://mi.mathnet.ru/umn9763}
\crossref{https://doi.org/10.4213/rm9763}
\adsnasa{http://adsabs.harvard.edu/cgi-bin/bib_query?2017RuMaS..72..257V}
\jour Russian Math. Surveys
\scopus{https://www.scopus.com/record/display.url?origin=inward&eid=2-s2.0-85026667082}
http://mi.mathnet.ru/eng/umn9763
https://doi.org/10.4213/rm9763
http://mi.mathnet.ru/eng/umn/v72/i2/p67
V. M. Buchstaber, N. Yu. Erokhovets, M. Masuda, T. E. Panov, S. Park, "Cohomological rigidity of manifolds defined by 3-dimensional polytopes", Russian Math. Surveys, 72:2 (2017), 199–256
A. M. Vershik, "Duality and free measures in vector spaces; spectral theory and the actions of non locally compact groups", J. Math. Sci. (N. Y.), 238:4 (2019), 390–405
A. M. Vershik, A. V. Malyutin, "The Absolute of Finitely Generated Groups: II. The Laplacian and Degenerate Parts", Funct. Anal. Appl., 52:3 (2018), 163–177
A. M. Vershik, P. B. Zatitskii, "Combinatorial Invariants of Metric Filtrations and Automorphisms; the Universal Adic Graph", Funct. Anal. Appl., 52:4 (2018), 258–269
A. M. Vershik, A. V. Malyutin, "The absolute of finitely generated groups: I. Commutative (semi)groups", Eur. J. Math., 4:4 (2018), 1476–1490
P. E. Naryshkin, "A remark on the isomorphism between the Bernoulli scheme and the Plancherel measure", J. Math. Sci. (N. Y.), 240:5 (2019), 567–571
A. M. Vershik, P. B. Zatitskii, "On a universal Borel adic space", J. Math. Sci. (N. Y.), 240:5 (2019), 515–524
A. M. Vershik, "Asimptotika razbieniya kuba na simpleksy Veilya i kodirovanie skhemy Bernulli", Funkts. analiz i ego pril., 53:2 (2019), 11–31
A. M. Vershik, "The problem of combinatorial encoding of a continuous dynamics and the notion of transfer of paths in graphs", Teoriya predstavlenii, dinamicheskie sistemy, kombinatornye i algoritmicheskie metody. XXX, Zap. nauchn. sem. POMI, 481, POMI, SPb., 2019, 12–28
A. M. Vershik, "Kombinatornoe kodirovanie skhem Bernulli i asimptotika tablits Yunga", Funkts. analiz i ego pril., 54:2 (2020), 3–24
A. M. Vershik, N. V. Tsilevich, "Ergodicity and Totality of Partitions Associated with the RSK Correspondence", Funct. Anal. Appl., 55:1 (2021), 26–33
Full text: 86
First page: 43 | CommonCrawl |
Conflict-free rerouting scheme through flow splitting for virtual networks using switches
Vianney Kengne Tchendji1,
Yannick Florian Yankam1 &
Jean Frédéric Myoupo2
The weaknesses of the Internet led to the creation of a new network paradigm – network virtualization. Virtualization is a very successful technique for sharing and reusing resources, which results in higher efficiency. Despite its advantages, including flexibility in network architecture, virtualization imposes many challenges, such as physical resource allocation to virtual devices. An efficient allocation strategy for these resources can ensure good Quality of Service (QoS) in virtual networks, whether in node or link failure events. This paper presents a conflict-free rerouting scheme with efficient additional capacity usage for link and node failure resilience in a virtual network using switches. Combining an IP Fast Rerouting approach and flow-splitting strategy, this scheme provides short reaction time, stable performance and low complexity because the rerouting calculation and configuration are performed in advance. We show that rerouting by traffic splitting based on the entering arc and destination is sufficient to address all link-failure situations in the network, assuming that the network is two-link connected. After modelling the dimensioning problem as an Integer Linear Programme, we demonstrate through practical implementation of our rerouting scheme on different networks that the scheme can substantially minimize the additional capacity draw on the substrate network. A solution using multiple virtual planes is also provided to solve several conflict problems in the case of simultaneous multiple link failures.
Since its creation, the Internet has brought innovations and success to industry, economic and research fields; however, deployment of any new, radically different technology and architecture is becoming highly difficult, a situation that cloud computing can mitigate. That effect is what we call Internet ossification [1]. To fend off ossification, studies have proposed rethinking the architecture of the actual Internet [1]. However, network virtualization is the most promising approach to addressing current limitations of the Internet and supporting new requirements [2,3,4,5]. Its principle is to implement multiple virtual routers on each physical machine and to interconnect them through substrate network architecture. This implementation allows virtual networks to have different logical topologies from that substrate network, and each of them behaves as a true network in which it is possible to implement different routing protocols and services. As in the substrate network, failures could occur in virtual networks; in this case, rerouting mechanisms can be implemented to forward traffic by using available resources in the virtual network or additional ones taken from the physical network. This additional resource could cause dysfunctional risks inside another virtual plane.
The link and node failure problem has been investigated for a long time in the framework of physical networks but not in virtualized networks because of the new requirements brought by this technology [6] (e.g., architecture flexibility, mobility, and isolation). Our restoration scheme is pre-calculated. Therefore, the rerouting paths for all link failure cases are determined and recorded inside the controller in advance. When a failure occurs, the nodes apply the pre-calculated rerouting paths directly. Multiple methods based on the IPFRR (IP Fast Reroute) strategy have been proposed for transient failures. However, they have the following limitations:
For rerouting schemes using a single path and additional capacity [7, 8], the limits of the physical resources are quickly reached, which paralyses the network.
For loop-free alternative mechanism-based methods [9], we are not certain of rerouting traffic to all destinations; doing so only helps to reduce the number of lost packets in an IP network.
Not-via addressing [10] and tunnelling [11] mechanisms require encapsulation and de-encapsulation of packets, whereas in a multiple routing configuration mechanism [12], the packets must carry configuration information. With the appearance of optic networks, methods that modify the packets are not recommended but can instead be used to optimize the usage of resources in network virtualization.
Multipath rerouting [13] using spare capacity in the network can induce a capacity saving of up to 11% in randomly generated networks, but lack of spare capacity due to the existence of multiple virtual planes on a substrate network can undermine this result in network virtualization.
Additionally, [14,15,16] propose methods to find back-up paths that permit rerouting traffic in the case of link failure but not in the context of network virtualization. Proposed schemes also based on IPFRR use two port types: primary and back-up ports. The traffic will change from the primary port to a back-up port only when there is a failure on the primary port or the traffic comes from the primary port. This packet forwarding strategy uses a bridge to reconnect sub-graphs coming from a failure but does not consider conflicts, which can disturb traffic. In contrast to these works, our rerouting scheme uses not just one but multiple bridges, and it avoids conflict on rerouting paths. Our strategy extends the results in [7] and can minimize the dimensioning cost of the network. Recently, a restoration scheme was proposed in [7, 8] to handle both node and link failure problems by replacing all or parts of a network with switches managed by an external controller. The authors summarized the previous works [17,18,19,20] and noted their drawbacks. They derived a rerouting scheme negating these drawbacks. Their rerouting model uses only one path to reroute traffic and to minimize the additional capacity used by a virtual network; it cannot solve the multiple-link-failure problem. Moreover, their work in [7, 8] has drawbacks:
Network modelling using only one rerouting path cannot significantly minimize additional capacity, which means that physical resource limits are quickly reached and the network is paralysed for a moment. Because the physical resource limits are quickly reached in [7, 8], the number of failures handled is also reduced.
Based on their rerouting model, the authors in [7, 8] claim that a conflict-free rerouting scheme for the multiple-link-failure situation cannot be constructed even when the network remains connected.
The work in [21] presents a virtual network embedding model that allows a virtual link to map multiple substrate paths. This model can help to build a rerouting strategy using multiple planes.
We initially propose a new scheme to solve the link and node failure problems in a network of switches that avoids conflict on rerouting paths in contrast to protocols in [6, 7]. The new rerouting scheme we present here clearly uses not just one but multiple bridges, and it avoids conflict on rerouting paths. Our strategy uses a flow-splitting technique [22,23,24,25], extends the results in [7, 8] and can minimize the dimensioning cost of the network. Flow splitting is a method for restoring traffic from a failed link using multiple rerouting paths in the case of insufficient residual capacity. In this first contribution, we consider only one virtual plane. We propose a rerouting scheme that ensures that for any link or node failure, the traffic will be rerouted until it reaches the destination through an alternative path.
Given that there generally are no rerouting solutions that avoid conflicts in all network configurations, our second contribution is to avoid these situations. To reach this goal we use multiple planes to provide a rerouting strategy that avoids conflicts that could not be solved by using only one plane. Therefore, we introduce here a new scheme to solve the link and node failure problems in a network of switches. This new scheme negates the drawbacks of [7, 8] by showing that a conflict-free rerouting scheme for a multiple-link-failure situation can be constructed even when the network remains connected. The rerouting scheme described in this paper uses filters in switches to determine the next hop for the incoming flow. We provide each virtual network a specific filter. Then, the controller sets the flow path by programming the switches in the form of quadruplets (S, N, O, F) in which S is the source port (node), N the current node, O the output ports because flow can be split and forwarded through different paths to the same destination depending upon the spare capacity needed, and F the filter, which indicates the destination.
More precisely, for an incoming flow from a neighbour and a given destination, the scheme will assign the potential output ports. In the case of failure, only the upstream node must react by directing the disturbed traffic to one or many of its neighbours. The traffic is routed according to the filter programmed in each node of the network. Traffic can be split anytime at the level of each node if needed. Hence, the proposed scheme needs only a local reaction, making its implementation particularly easy in distributed environments. This local reaction helps the network operate normally and can solve the problem of transient failures. A transient failure is a failure whose duration is short, less than 10 min, whereas the duration of a persistent failure is longer [7]. When the failure is determined to be persistent, the controller can recalculate the routing table for all nodes in the network. To avoid the rerouted traffic of a failure causing disturbances to another part of network, additional capacity is added to all arcs. Because this additional capacity is added in the physical network and is exploited by several virtual layers, it is necessary to minimize it. The mathematical model that we propose in this paper can calculate the rerouting paths and optimize the total additional capacity injected into the network.
To the best of our knowledge, presently our work is the only one that shows how to effectively solve simultaneous multilink failures using flow splitting methods, thus providing an improvement in QoS of computer networks.
The rest of this paper is organized as follows. The next section presents motivations for traffic splitting. Section 3 provides a full description of our restoration scheme for a link failure configuration. Our mathematical model is described in Section 4. Section 5 presents numerical results of implementations. Section 6 studies the application of our rerouting scheme to the single-node-failure problem. Section 7 extends our work to simultaneous multiple-link-failure situations by providing a solution for some conflict problems. Finally, Section 8 ends the paper.
Motivation for traffic splitting
Improvements of computer networks Qos
Virtual networks use physical network resources to achieve their needs. In the case of link or node failure, spare capacity available in safe links is commonly used to restore traffic. However, this spare capacity might not be sufficient to carry the entering traffic; this situation represents a lack of spare capacity and is the main motivation for using flow splitting in the network. The idea is to split an original flow into multiple parts such that they can be forwarded easily through the network. This method induces numerous potential advantages as:
Reduction of transit delay and packet loss rate, because the flows are more able to reach their destination nodes, thus improving the network QoS;
improvement of the packets routing delays: since the original flow is split into several lighter-sized streams, they can be transported more quickly to the destination;
improvement of load balancing distribution [26], leading to prevent or decrease congestion risk across the network. This is a well-known benefit of flow splitting in computer networks.
extension of the lifetime of a network by allowing more flexible and efficient resources allocation;
better economy of the substrate network resources supporting virtual networks. Since virtual networks are built on a physical network infrastructure, it is necessary to avoid an abuse of these resources with the risk of causing the hosted virtual networks malfunctions.
We illustrate this last point by an example. Figure 1 shows a network with 6 nodes and 8 links. The numbers carried by each arc represent spare capacity available on the links. There is only one path (of traffic) 6–5–2-1 going through link (5, 2). Other paths are not shown for clarity. Let the traffic of this path be 8 units, and let all links be of equal length. In link restoration, a faulty link is replaced by one alternative path.
Example network
Assume that faulty link (5, 2) is replaced by path 5–3-1. Because links (5, 3) and (3, 1) have only 5 and 7 spare capacity each, both links need, respectively, 3 and 1 more spare capacity to make the restoration possible. This example proves that the use of traffic splitting in the link restoration scheme can result in lower spare capacity requirements but in the context of a virtual network, it could be very important to reduce additional capacities added to the links to avoid disturbances due to the rerouting scheme.
Now, let us consider the traffic-splitting version of link restoration (Fig. 2). There are two alternative paths, 6–5–3-1 and 6–3-1, for link (5–2), and each alternative carries 4 units of traffic. (As in the model [22], the general design principles presented in this paper are valid for any unit of bandwidth capacity for virtual networks.) With this second method, there is no need for more spare capacity.
Illustration of traffic splitting
The packets re-ordering problem
Beyond its advantages, flow splitting also brings some difficulties, the best known of which are the following:
Avoiding the packet re-ordering. When a stream is subdivided into the network, the different parts must be reassembled without losing no part (a potential TCP performance problem [27, 28]), making the use of flow splitting strategy very delicate. So, we should try to make reordering rare. Therefore avoiding this reordering until the packets reached the destination is crucial. However to overcome this problems some approaches were elaborated. Some strategies separate traffic at the level of flows. This approach removes the problem of reordering but at the cost of a restriction in the granularity with which we can split traffic [29]. Another one operates on bursts of packets (flowlets) carefully chosen to avoid reordering, but allowing a finer granularity of load balancing [30]. Some other algorithms [31, 32] minimize or eliminate reordering in some situations. But, some reordering problem should occur, and probably often enough to affect performance of IP networks;
the problem of reassembling packet segments inside the destination node. Due to various reasons, such as multipath routing, route fluttering, and retransmissions, packets belonging to the same flow may arrive out of order at a destination. The problem is how to know which packet comes before or after another one when we want to rebuild the original flow packets [27,28,29]. Some algorithms based on packet numbering in [29] can be used to at least minimize reordering.
In this work, flow splitting is implemented by building little flow of packets from an original one. To face reordering challenge, we use the numbering packets each time the flows are split, because this method does not modify significantly the packets headers.
New multipath link failure restoration scheme
In this section, we present our rerouting scheme that addresses the case of a link failure. As presented in Section 2, traffic splitting occurs because spare capacity is lacking in the network, but implementation of that approach in our rerouting scheme helps to solve many other problems:
Safeguarding of network resources by minimizing spare and additional capacity usage to manage more traffic
Possible rerouting, however, is impossible to do with only one path, as shown in [7].
A routing tree called a nominal routing tree is associated with a given destination; this tree is constructed using the shortest path tree criterion. We assume that the routing is provided. In the case of a failure of an arc or edge (both arcs are then concerned) and a lack of spare capacity in links, we reroute the traffic through one or more alternative paths. When there is only one path used, our rerouting scheme is similar to [7]. According to the routing scheme, for two independent failures, if two rerouting paths to a given destination have a common arc, they must merge after this arc. This requirement holds for both nominal and rerouting paths. If two paths do not satisfy this requirement, we say that they are in conflict. Any routing scheme satisfying this requirement is said to be without conflict. In this strategy, only the extremity failed link nodes will know about the failure. The upstream nodes initiate the traffic diversion, whereas all other nodes in the network apply the filter for each incoming flow without any difference between disturbed and non-disturbed flows. Because the disturbed traffic is rerouted on multiple alternative paths and should satisfy the non-conflict requirement, the cost in terms of resources and of computational time is expected to be higher compared with conventional schemes using single path rerouting.
We illustrate this rerouting scheme in Figs. 3 and 4.
Network with spare capacities on links
Network with spare and additional capacities on links
These figures represent a network with 6 nodes and 8 links.
The original graph with thespare capacity of each link is shown in Fig. 3.
Assume that the source node is node 6 and that the destination node is node 1. We also suppose that the failure link 4–1 generates a flow weight of 15 units to reroute. Figure 4 represents each link's weight and the additional capacity required on each arc if rerouting paths use this arc. If we reroute using only one alternative path, similar to [7], the path 4–6–3-1 will be selected as the rerouting path (see Fig. 5), which consumes T = 13 + 9 + 8 = 30 units of additional capacity.
Pham's single path rerouting scheme
When applying our rerouting scheme (Fig. 6), part 4–6 of the previous rerouting path will be preserved. From node 2, traffic can be split into two parts because there are two arcs leaving node 6 to node 1. Our splitting criterion is the amount of spare capacity available on links. Thus, a node will send some flow on the link that offers the greatest spare capacity, and the remaining flow will be sent on the other link. Therefore, 6 units of traffic will be sent on arc (6 → 3) (which is a bridge), offering a spare capacity of 6, and 9 will be sent on arc (6 → 5) (another bridge), which offers a spare capacity of 4. In this example, we use an additional capacity of 5 on arc (6 → 5) in addition to its existing spare capacity. The rerouting paths are 4–6–3-1 using 13 + 0 + 0 = 13 additional capacity, and 4–6–5-2-1 using 13 + 5 + 1 + 6 = 25 additional capacity. Arc (4 → 6) belongs to the two rerouting paths and involves a total additional capacity usage of 13. Therefore, the total additional capacity needed for these two paths is 25, i.e., a total additional capacity savings of 17% relative to [7]. This analysis shows that our rerouting scheme can substantially minimize the additional capacity needed on links to reroute the traffic of a failed link. Because of the configuration of filters, traffic will be rerouted until it reaches the destination without any conflict.
Our rerouting scheme
Mathematical formulation
Description of our model
This section provides a rerouting mathematical model based on the following assumptions:
The graph is assumed oriented and symmetric.
There are at least two disjoint arc paths between any two nodes of the graph.
There is only one link failure at a time.
To resolve the question of the existence of a rerouting solution without conflict, we have the following theorem:
Theorem 1. For any destination d, there is a rerouting plan without conflict using one or many alternative paths.
The formal mathematical proof can be found in the Appendix.
As in [7], a similar mathematical formulation can be provided for our case, but we add in the equations the number of rerouting paths. Consider the following notation:
Rtd: set of arcs of the routing tree to destination d
Ac ab : additional capacity assigned to arc (a, b)
\( {Tr}_n^d \): total traffic for d that passes through node n. n is the node that detects the failure. In fact, the failure is characterized by a source n and a destination d because we use the routing tree for nominal routing. For a destination d, the failed arc is the one routing the traffic going to d and coming from n by nominal routing.
F i : indicates fictive nodes used to divert traffic in the case of failure. We introduce the fictive nodes F i that will be used for all failures. For a given failure (n, d), the traffic to d will be rerouted by i paths from F i to d starting with arc (F i , n), i = 1, 2...
\( {\mathrm{SRed}}_{\mathrm{n}}^{\mathrm{d}} \): \( {\mathrm{SRed}}_{\mathrm{n}}^{\mathrm{d}} \) sub-tree of sink n. Recall that in the case of failure, the tree is divided into two parts, the isolated part, that is the Red part, and the Blue part. Alternative paths will reroute traffic from the Red part to the Blue part.
\( {\mathrm{SBlue}}_{\mathrm{n}}^{\mathrm{d}} \): sub-tree of sink d, with \( {Rt}^d-{SRed}_{\mathrm{n}}^{\mathrm{d}}{\mathrm{SRed}}_{\mathrm{n}}^{\mathrm{d}}{\mathrm{SRed}}_{\mathrm{n}}^{\mathrm{d}} \)
\( {\mathrm{y}}_{\mathrm{efgh}}^{\mathrm{dn}}{\mathrm{y}}_{\mathrm{efgh}}^{\mathrm{dn}} \): this binary variable indicates whether the eth alternative path to destination d for a given failure contains arcs (f, g) and (g, h); node n is the node that detects the failure.
\( {\mathrm{x}}_{\mathrm{efgh}}^{\mathrm{d}} \): \( {\mathrm{x}}_{\mathrm{efgh}}^{\mathrm{d}} \) this binary variable indicates the rerouting scheme to destination d. It takes value 1 if there is a failure in which the eth alternative path to destination d contains arcs (f, g) and (g, h). Therefore, the variable takes value 1 if there are n and e where \( {y}_{efgh}^{dn}{\mathrm{y}}_{\mathrm{efgh}}^{\mathrm{dn}} \) is equal to 1.
\( {\upalpha}_{\mathrm{eab}}^{\mathrm{dn}} \): \( {\upalpha}_{\mathrm{eab}}^{\mathrm{dn}} \) this binary coefficient equals 1 if arc (a, b) belongs to one of the paths in the nominal routing from n to d except the failed arc.
Quadruplet: All quadruplets (e, f, g, h) where e is the number of an alternative path in a rerouting scheme, f, g, h are nodes of the graph; f can be the fictive node, and (f, g) and (g, h) are two adjacent arcs, with. f ≠ h.
Arc: All arcs of the graph
L: Set of links
N: Set of nodes
The objective is to minimize the sum of additional capacity allocated to each arc; our objective function is provided by (1):
$$ \mathit{\min}\sum \limits_{\left(a,b\right)\in Arc}{Ac}_{ab} $$
$$ \sum \limits_{{\left(n,{h}_s\right)}_{\notin {Rt}^d},h\ neighbor\ of\ n,\kern0.5em h={h}_1}^{h={h}_s}{y}_{eF_0{nh}_s}^{dn}=1,n\in N,d\in N,s=1,2,\dots, e=1,2,\dots $$
$$ \sum \limits_{h_s\in neighbor\ of\ g,\kern0.5em h={h}_1}^{h={h}_s}{x}_{efgh}^d\le 1,\left(f,g\right)\in Arc,d\in N,s=1,2,\dots, e=1,2,\dots $$
$$ {y}_{efgh}^{dn}=0,d\in N,n\in N,j\in {SRed}_n^d,\left(f,g\right)\in {Rt}^d,\left(e,f,g,h\right)\in Quadruplet,e=1,2,\dots $$
$$ \sum \limits_{f\in N\mid \left(e,f,{g}_s,{h}_s\right)\in Quadruplet}^{h={h}_s}{y}_{ef{g}_s{h}_s}^{dn}\le 1,d\in N,n\in N,\left({g}_s,{h}_s\right)\in Arc,{g}_s\in {SRed}_n^d,{h}_s\in {SRed}_n^d,e=1,2,\dots $$
$$ \sum \limits_{f_s\in neighbor\ of\ f}^{h={h}_s}{y}_{e{f}_s fg}^{dn}=\sum \limits_{h_s\in neighbor\ of\ g}^{h={h}_s}{y}_{efg{h}_s}^{dn},d\in N,\left(f,g\right)\in Arc,f\ne {f}_s,f\ne n,g\in N-d,s=1,2,\dots, e=1,2,\dots $$
$$ \sum \limits_{e=1}^{e=k}\sum \limits_{s=1}^{s=k}{y}_{eng{h}_s}^{dn}={y}_{e{F}_i ng}^{dn},d\in N,n\in N,\left(n,g\right)\in Arc,k=1,2,\dots, e=1,2,\dots $$
$$ {y}_{egh{h}_1}^{dn}-{y}_{efgh}^{dn}\ge 0,h\in {SBlue}_n^d-d,\left({h}_1,h\right)\in {Rt}^d,\left(g,h\right)\in {Rt}^d,\forall d\in N,\forall \left(e,f,g,h\right)\in Quadruplet,e=1,2,\dots $$
$$ \sum \limits_{n\in N}\sum \limits_{e=1}^{e=k}{y}_{engh}^{dn}\ge {x}_{efgh}^{dn}\ge \frac{\sum_{n\in N}{y}_{engh}^{dn}}{cardinal(N)},\left(e,f,g,h\right)\in Quadruplet,n\in N,k=1,2,\dots, e=1,2,\dots $$
$$ \sum \limits_{d\in N\mid \left(n,m\right)\in {Rt}^d}\sum \limits_{f\in neighbor\ of\ g,f\ne h}\sum \limits_{e=1}^{e=k}{y}_{efgh}^{dn}.{Tr}_n^d+\sum \limits_{d\in N\mid \left(m,n\right)\in {Rt}^d}\sum \limits_{f\in neighbor\ of\ g,f\ne h}\sum \limits_{e=1}^{e=k}{y}_{efgh}^{dn}.{Tr}_m^d\le {Ac}_{gh}+\sum \limits_{d\in N\mid \left(n,m\right)\in {Rt}^d}{\alpha}_{egh}^{dn}.{Tr}_n^d+\sum \limits_{d\in N\mid \left(m,n\right)\in {Rt}^d}{\alpha}_{egh}^{dm}.{Tr}_n^d,\left(g,h\right)\in Arc,g\ne n,g\ne m,\left(n,m\right)\in L,k=2,3,\dots, e=1,2,\dots $$
$$ \sum \limits_{d\in N\mid \left(n,m\right)\in {Rt}^d}\sum \limits_{i=1}^{i=k}{y}_{e{F}_i nh}^{dn}.{Tr}_n^d\le {Ac}_{nh}+\sum \limits_{d\in N\mid \left(n,m\right)\in {Rt}^d}{\alpha}_{nh}^{dn}.{Tr}_n^d,\left(m,h\right)\in Arc,h\ne n,\left(m,n\right)\in L,e=1,2,\dots $$
$$ \sum \limits_{d\in N\mid \left(m,n\right)\in {Rt}^d}\sum \limits_{i=1}^{i=k}{y}_{e{F}_i mh}^{dn}.{Tr}_m^d\le {Ac}_{nh}+\sum \limits_{d\in N\mid \left(n,m\right)\in {Rt}^d}{\alpha}_{mh}^{dm}.{Tr}_n^d,\left(m,h\right)\in Arc,h\ne n,\left(m,n\right)\in L,e=1,2,\dots $$
$$ {x}_{efgh}^d\in \left\{0,1\right\},\forall \left(e,f,g,h\right)\in Quadruplet,\forall d\in N,e=1,2,\dots $$
$$ {y}_{efgh}^d\in \left\{0,1\right\},\forall \left(e,f,g,h\right)\in Quadruplet,\forall d\in N,\forall n\in N,e=1,2,\dots $$
The objective function will allow us to evaluate the ratio between the additional capacity and the installed capacity.
Equation (2) is a constraint implying that there exist multiple paths resulting from flow splitting, which go from n to d for disturbed traffic.
Equation (3) ensures that there is no conflict in the rerouting, i.e., the incoming flows to node n to destination d must follow the same rerouting paths. If we use arcs (i, k1), (i, k2), (i, k3), ... for rerouting to destination d, there is at most one output (k s , j s ) for each.
To avoid loops and conflict problems, the alternative paths should not contain any arc of nominal routing in the red part of the network. Equation (4) ensures that condition.
Constraint (5) ensures that there will be no loop in the network. For a given destination and a given failure, an alternative path could contain a loop if the flows go from a node with a larger index number to another one with a smaller index. This constraint prohibits this type of problem.
Equations (6), (7), and (8) are the flow constraints for the continuity of the alternatives paths. Equation (6) is the constraint of flow conservation. Referring to (7), the total amount of entering traffic in n is equal to the total outgoing traffic of g; because of flow splitting being used for a given failure and a destination, we could have multiple incoming streams and possibly multiple outgoing streams.
Equation (8) ensures that in the blue part, if a path uses an arc of the nominal routing tree, it must continue until destination d.
Equation (9) is a constraint for the relationship between two rerouting paths that avoids a conflict (see the definitions of variables x and y). Because x and y are binary variables, with the same quadruplet (e, f, g, h) and same destination d, we can deduce from (9) that (x) will take the maximum value of (y). We use the sum of failures divided by the cardinal to reduce the number of constraints.
Equations (10), (11), and (12) are the capacity constraints. For each failure of edge (n, m), the constraint in (10) assumes rerouted paths for arcs (n, m) and (m, n), and only trees that contain the arc failure are involved. They also consider the released bandwidth on the initial routing paths. Equations (11) and (12) are special cases of (9) for the nodes that detect the failure, node n and node m. Finally, (13) and (14) indicate that the variables take binary values.
Convexity of our model
The objective function of our model has the general form:
$$ \underset{s.t.\kern0.5em x\in C}{\min }f(x) $$
where C is the set Arc and f is a function over C giving the additional capacity needed for a chosen arc. The problem described by Eq. (15) is convex in the set C and the function f is convex.
Under the existence assumption of at least two paths between any pair of nodes of the graph and considering each arc of the set Arc as a segment, then the set C is convex as an intersection of convex subsets. According to [33], if the function f is affine, it is convex and the problems described by general Eq. (15) are usually stated convex problems with an implicit convexity. This implicit convexity is because there are more explicit formulations of convex problems such as convex optimization problems in functional form, which are convex problems of the form:
$$ \underset{s.t.{g}_i(x)\le 0,i=1,2,\dots m,{h}_j(x)=0,j=1,2,\dots, p\ }{\min }f(x) $$
Where f, g 1 ,…., g m : ℝn → ℝ are convex functions and h 1 ,…., hp.: ℝn → ℝ are affine functions. Each constraint of our model can be written under any of the forms of constraints of Eq. (16). This proves that our model is implicitly convex.
Implementation and simulation results
Based on the comparative study in [34], the OMNet++ network simulator has many advantages: Unlike NS-2 and NS-3, OMNet++ has extensive graphical user interface (GUI) and intelligence support, provide good computation times. The flexibility of the NED language used for describing the network architecture is appropriate to meet the great topology flexibility requirements of network virtualization. OMNet++ is also able to carry out large scale network, which is an important feature for our simulations. That is why the experiments have been conducted in the simulation environment OMNet++ running on a computer with the following configuration: Core i5 2.40 GHz, 4.00 GB RAM, 12 MB cache. We applied our model to 4 networks: network1 (5 nodes and 7 links), network2 (10 nodes and 18 links), network3 (20 nodes and 31 links), and network4 (60 nodes and 81 links). These four networks satisfy the assumptions cited above. They contain a set of nodes with high degree for the estimation of the impact of multilink failures adjacent to the same node on those networks. It therefore shows the robustness of our strategy. The restoration rate of our rerouting scheme without conflict between rerouting paths is shown in Table 1. The simulations have been done on non-simultaneous multiple link failures for each tested network.
Table 1 Restoration rate without conflicts
The data provided in Table 1 show that our rerouting scheme supplies rerouting solutions for almost all link failure situations considered. If the conflict constraint is neglected, we can find solutions for more failure configurations (see Table 2), but the variation is small (approximately 2%). In other words, conflict constraint does not significantly affect the number of failures handled.
Table 2 Restoration rate neglecting conflicts
Figure 7 graphically compares our rerouting scheme with that of [7]. This figure shows that the restoration rate gap between both methods increases with network size. This phenomenon is observed because the potential conflicts in a small network are not numerous, which means that cases of unsolvable conflicts are also not numerous.
Comparative graph of two versions of our rerouting scheme: with conflict constraint and without
We also perform an additional capacity consumption test for the previous four networks; the results are shown in Table 3. This table consists of five columns. Descriptions of the rightmost three columns are as follows: "Unused CA" represents the additional capacity available in the network; "Our used CA" represents the total of additional capacity used by our strategy for link-failure handling; and "Used by X" represents additional capacity used in the network by method [7]. The result units are expressed in seconds.
Table 3 Comparison of our rerouting scheme with [7] concerning additional capacity used
The results in Table 3 show that our rerouting scheme based on a traffic-splitting strategy uses less additional capacity than does the method presented in [7]. This difference is very important when the network size increases. Figure 8 provides us a better illustration of this difference. This figure shows that the additional capacity used for flow restoration increases with network size and network connectivity.
Additional capacity used by different strategy
Node failure problem
We speak of node failure when some flow can no longer go through a given node in the network. This situation can be caused by overflow traffic in this node or a physical failure of the given node. Because of the two-link connectivity included in our hypothesis, a node failure leads directly to the outage of at least two or several links; in other words, node failure can be treated as a simultaneous multiple link failure. In this case, failure will be detected by all nodes connected to the failed node. Figure 9 presents the failure of node number 4.
Failure of node number 4
When node 4 fails, flows coming from nodes 1 and 7 must be rerouted. The failure of node 4 implies a simultaneous failure of links (1–4), (7–4) and (4–6). Therefore, we must reroute the two flows (1–4) and (7–4) to destination 6 without conflict. To solve this type of failure, two solutions are possible:
First solution resort to the controller
In the case of node failure, each switch that detects the failure sends a specific message called packet-in message to the controller that sets the rerouting order for the link failures related to these nodes. The idea of this rerouting approach is to solve these link failures as cascading failures. This order can be built on a node's label criteria. The nodes are labelled in a decreasing order as we approach the destination node. We could handle the failure detected by the node of a smaller label before another one with a larger label. Once the resolution order is fixed, the controller updates the routing tables of involved nodes as described in [7] by using another specific message called packet-out message. After this update, our rerouting scheme can be used to solve each link failure. The reaction time of this solution is too long, due to pro-activity; therefore, the principles of IPFRR are not satisfied with this solution.
Second solution: No resort to the controller
Each link failure because of a node failure is handled locally and instantly by each node that identifies a link failure. The incurred risk in this strategy is the looping problem during flow rerouting; however, assuming our constraint imposing traffic from nodes with a lower label on another one with a higher label, cycles can be avoid. Our rerouting scheme for a simple link failure can be used to fix simple node failure situations. Recall that we speak about simple node failure when only one node fails at a time. Our rerouting strategy for simple node failure problems uses this approach. The following illustrates our rerouting scheme for a simple node failure with an example. Fig. 10 shows the nominal routing tree of an example network, and Fig. 11 presents link failures (dotted links) resulting from node 3's failure. Fig. 12 presents a cyclic problem resulting from a node failure, and Fig. 13 illustrates the efficiency of our solution to solve this cycle problem.
Shortest path from each node to destination 1
Node 3's failure
Cycle example in two flows rerouting
Our rerouting strategy solving the cycle problem
For each destination, we determine the nominal routing tree from each node towards this destination (see Fig. 10). The failure of node 3 generates simultaneous failures of links (5–3), (6–3) and (3–1) (dotted links). Nodes 5 and 6 will detect the breakdowns of links (5–3) and (6–3), i.e., we have two flows to reroute. These failures split the graph into two parts: the blue part and the red part (see Fig. 11).
Using our rerouting scheme, the flow coming from link (5–3) could be rerouted through arcs (5 → 2), (5 → 6) and (5 → 8). The flow of link (6–3) could follow arcs (6 → 8), (6 → 4) and (6 → 5). However, arc (5 → 6) can lead to node 3 through arc (6 → 3) or keep the rerouted flow in cycle 5–6–8-5 (see Fig. 12). Thus, arc (6 → 3) will be excluded from the list of potential paths for rerouting the flow coming from link (5–3) or node 3. If we consider the criteria related to management of the cycles, arc (5 → 6) will be considered in rerouting the paths of link (5–3), which will not be true of arc (6 → 5) (see Fig. 13). Similarly, for rerouting link (6–3), arc (5 → 6) could also be used rather than (6 → 5).
Consequently, the possible rerouting paths will be 5–2-1, 5–8–9-7-4-1 and 5–6–4-1 for flow from the failure of link (5–3); concerning the flow from the failure of link (6–3), the possible rerouting paths could be 6–8–9-7-4-1 and 6–4-1. We can conclude that local reaction required by IPFRR strategy can also be preserved when addressing simple node failure situations through our rerouting scheme.
To achieve the local connectivity recovery, there is a filter similar to an agent, running inside each switch (example of OpenFlow switches) used in network architecture like ours. This agent detects the port states and acts as needed. For classical switches, there are control mechanisms provided to check that ports status.
Multiple link failures studied in the case of simple node failure involve links adjacent to that node, but we also have cases of simultaneous multiple link failures not adjacent to the same node.
Simultaneous multilink failures
We speak about simultaneous multiple link failures when several links fail at the same time. The case considered in this section concerns non-adjacent links to the same single node. In this case, there are multiple nodes, each of which detects a link failure as in the simple node failure case. This type of failure can also be handled using either of two methods:
First method: Treat only one link failure at a time
In this approach, despite many link failures occurring at the same time, they are handled as non-simultaneous link failures; therefore, failures are treated sequentially. This method is used in [7], in which a rerouting scheme is proposed to solve the problem for the case of two links failing simultaneously. As stated in Section 7 about the node failure problem, the limit of this strategy is its slowness in rerouting.
Second method: Treat all link failures at the same time
This approach is similar to the second one presented in Section 7 for the node failure problem, and it enables all nodes that detect a failure to initiate the rerouting process. Our rerouting scheme for node failure can also be used here. When several link failures occur simultaneously during the rerouting process, we can use flow splitting each time to find spare capacity lacking in the network.
Consider the example network of Fig. 14 to illustrate our rerouting strategy for the case of simultaneous and non-adjacent two-link failures.
An example network of simultaneous and non-adjacent two link failures
The nominal routing tree is shown, and the destination node is labelled 1.
Figure 15 shows two link failures named p1 and p2 occurring at the same time.
Two simultaneous and non-adjacent link failure examples
The failures p1 and p2 create the red parts R1 and R2. p1 is detected by the node labelled 4, and p2 is detected by the node labelled 9. The rerouting scheme of p1 can be through bridges (8–3) and (11–7) leading to the paths 4–5–8-3-1 and 4–5–8-11-7-2-1 (see Fig. 16). Flows can be split at node numbers 8, 11, 2 and 3. Concerning rerouting of p2, link (12–13) can be considered a bridge that connects the red part R1 to R2 in addition to (8–3) and (11–7). After the link failure pair (p1, p2), if another occurs (pair (p3, p4) for example), the rerouting will be done based on the previous one to avoid conflict.
Our rerouting scheme for two simultaneous and non-adjacent link failures
However, concerning this simultaneous multiple link failure, there are several conflict configurations that require particular attention as shown in Fig. 17. For the configuration example shown in Fig. 17, [7] affirms that the conflict problem illustrated is insoluble. Indeed, the rerouting scheme provided in [7] uses only one path for rerouting, with management of conflict between the paths similar to our strategy. We prove that this potential unsolvable conflict claimed by [7] can be solved by using multiple planes. The principle is to cross from one plane to another when there is a risk of unsolvable conflict when using a single plane.
Unsolvable conflict when using only one plane
Consider the configuration example given by [7] in Fig. 17, in which the authors claim that there is no rerouting solution. Two simultaneous link failures situations are considered: first, we have simultaneous link failures (A-D) and (C-D). Second, we have simultaneous link failures (E-C) and (I-D).
According to [7], when (A-D) fails, the only available rerouting path is A-B-H-G-F-E-C-D because if we choose path H-G-K, there would be a conflict at node G. When (C-D) fails, the traffic that comes from failure (A-D) will be rerouted by C. To reroute the traffic of failure (C-D), node C must transfer the traffic back to G; then, there are two possibilities: use G-H-B-A-D as the rerouting path, or transfer the traffic through link (G-K). We cannot use G-H-B-A because the traffic would be transferred indefinitely between A and C. Therefore, we must use F-G-K as the rerouting path in this situation.
Concerning the second situation in which the two links (E-C) and (I-D) fail at the same time, when (I-D) fails, using the same reasoning as in the previous case, the only available rerouting path is I-J-O-K-G-F-E-C-D, according to [7]. Because we used F-G-K in the previous case of link (E-C) 's failure, we must also transfer the traffic through F-G-K for this case to avoid conflict. Because both failures occur at the same time, the traffic will be transferred indeterminately between C and I; therefore, the traffic cannot be rerouted in this situation. That property is why [7] affirms that there is no rerouting scheme without conflict for destination D in this configuration.
Now, consider our rerouting scheme using multiple planes. For the same configuration example above, our rerouting solution is shown in Fig. 18. In this figure, virtual nodes E and C are hosted by physical node EC; virtual nodes O and G are lodged by physical node OG. The network topology with unsolvable conflict is located in a virtual plane.
Solution through multiple planes
Let us transpose the topology of Fig. 17 into the physical plane as illustrated by Fig. 18.
Assuming that the nodes which detect failures are nodes E and I in the case of simultaneous link (E-C) 's and (I-D) 's failures and considering the topology's heterogeneity in the virtual networks, node E detects that a traffic redirection through path F-G-K-O-J will be deviated on node I and cause a cyclic problem. To solve that problem, we use another plan for traffic coming from both link failures. We will choose paths E-EC-D'-D and I-I'-D'-D for link (E-C) 's and (I-D)'s failure restoration. Thus, our approach can solve unsolvable conflicts presented in [7] by making use of multiple planes.
Our aim in this paper was to propose a rerouting approach to handle the single link node failure and simultaneous multiple link failure problems in a network of switches in the context of network virtualization. We proposed a conflict-free rerouting scheme that can ensure that, whatever the case of link or node failure, traffic will be rerouted to the destination. The proposed method is based on local reaction of nodes placed at the extremities of the failed link, whereas the other nodes need not know about the failure or take any particular action. Thus, the implementation is particularly easy. The flow splitting strategy used when there is insufficient spare capacity on links helps to reduce additional capacity added to the network. We proved that there exists a restoration scheme without conflict in the network and provide a mathematical model that permits calculation of the rerouting scheme with optimization of the sum of additional capacities needed for one virtual plane. We also proposed a rerouting solution using several planes to solve cases of potentially unsolvable conflicts when we use only one plane. Further work will address congestion management into the nodes implied in the rerouting and routing table updates without disturbing the network.
Chowdhury NM, Boutaba R. Network virtualization: state of the art and research challenges. IEEE Commun Mag. 2009;7:20–6.
Alkmim GP, Batista DM, da Fonseca NLS. Mapping virtual networks onto substrate networks. J Internet Serv Appl. 2013;4:3. https://doi.org/10.1186/1869-0238-4-3.
Bays LR, Oliveira RR, Barcellos MP, Gaspary LP, Mauro Madeira ER. Virtual network security: threats, countermeasures, and challenges. J Internet Serv Appl. 2015;6:1. https://doi.org/10.1186/s13174-014-0015-z.
Cheng X, Su S, Zhang Z, Shuang K, Yang F, Luo Y, Wang J. Virtual network embedding through topology awareness and optimization. Comput Netw. 2012;56:1797–813.
Cheng X, Su S, Zhang Z, Wang H, Yang F, Luo Y, Wang J. Virtual network embedding through topology-aware node ranking. ACM Comput Commun Rev. 2011;41:38–47.
Fernandes NC, Moreira MD, Moraes IM, Ferraz LH, Couto RS, Carvalho HE, Campista ME, Costa LH, Duarte OC. Virtual networks: isolation, performance, and trends. Ann Telecommun. 2011;66:339–55.
Pham TS, Lattmann J, Lutton JL, Valeyre L, Carlier J, Nace D. A restoration scheme for virtual networks using switches: International Workshop on Reliable Networks Design and Modeling. USA: IEEE Press; 2012. p. 800–5.
Pham TS. Autonomous management of quality of service in virtual networks: PhD Thesis. Compiegne: the university of Technology of Compiegne; 2004.
Atlas AK, Zinin A (2008) Basic specification for IP fast-reroute: loopfree alternates. https://tools.ietf.org/pdf/rfc5286.pdf. Accessed 20 July 2017.
Bryant S, Shand M, Previdi S (2011) IP fast reroute using not-via addresses. https://www.ietf.org/proceedings/62/slides/rtgwg-3.pdf. Accessed 10 July 2017.
Ho K-H, Wang N, Pavlou G, Botsiaris C. Optimizing post-failure network performance for IP fast reroute using tunnels. In: Proceedings of the 5th international ICST conference on heterogeneous networking for quality, reliability, security and robustness, article no 44. Hong Kong: ACM digital Library; 2008.
Kvalbein A, Hansen A, Cicic T, Gjessing S, Lysne O. Fast IP network recovery using multiple routing configurations, vol. 2006: Proceedings IEEE INFOCOM; 2006. https://doi.org/10.1109/INFOCOM.2006.227.
Zalesky A, LeVu H, Zukerman M. Reducing spare capacity through traffic splitting. IEEE Commun Lett. 2004;8:594–6. https://doi.org/10.1109/LCOMM.2004.833800.
Wang J, Nelakuditi S. IP fast reroute with failure inferencing. In: ACM proceedings of the 2007 SIGCOMM workshop on internet network management. Kyoto: ACM Digital Library; 2007. p. 268–73.
Kang X, Chao HJ. IP fast rerouting for single-link/node failure recovery. In: BROADNETS 2007, fourth international conference on broadband communications: Networks and Systems. USA: IEEE Press; 2007. p. 142–51.
Kang X, Chao HJ. IP fast reroute for double-link failure recovery. In: Proceedings of the 28th IEEE conference on global telecommunications. Piscataway: GLOBECOM; 2009. p. 1035–42.
Sgambelluri A, Giorgetti A, Cugini F, Paolucci F, Castoldi P. Openflow based segment protection in ethernet networks. J Opt Commun Netw. 2013;5:1066–75. https://doi.org/10.1364/JOCN.5.001066.
Staessens D, Colle D, Pickavet M, Demeester P. A demonstration of automafic bootstrapping of resilient openFrow networks. In Poceedings of IFIP/IEEE International Symposium on Integrated Network Managemenr (IM 2013). Ghent: IEEE Xplore Digital Library; 2013. pp. 1066–7.
Sharma S, Staessens D, Colle D, Pickavet M, Demeester P. OpenFlow: meeting carrier- grade recovery requirements. Comput Commun. 2013;36:656–65. https://doi.org/10.1016/j.comcom.2012.09.011.
Kamamura S, Shimazaki D, Hiramatsu A, Nakazato H. Autonomous IP fast rerouting with compressed backup flow entries using OpenFlow. IEICE Tans Inf Sys, pp. 2013;96:l84–192.
Yu M, Yi Y, Rexford J, Chiang M. Rethinking virtual network embedding: substrate support for path splitting and migration. ACM SIGCOMM Comput Commun Rev. 2008;38(2):17–29.
Veerasamy J, Venkatesan S, Shah JC. Effect of traffic splitting on link and path restoration planning. In: The global telecommunications conference, 1994 IEEE GLOBECOM, vol. 3: Communications: The Global Bridge. USA: IEEE Press; 1994. p. 1867–71.
Fischer S, Kammenhuber N, Feldmann A. REPLEX: dynamic traffic engineering based on wardrop routing policies. In: Proceedings of the of the ACM CoNEXT'06. Lisboa: ACM Digital Library; 2006. p. 1–12.
OpenFlow multipath proposal. http://openflowswitch.org/wk/index.php/Multipath_Proposal. Accessed 26 Oct 2015.
Cao Z, Wang Z, Zegura E. Performance of hashing-based schemes for internet load balancing. In: Proceedings of INFOCOM'00, vol. 1. Israel: Tel Aviv; 2000. p. 332–41.
Prabhavat S, Nishiyama H, Ansari N, Kato N. On the performance analysis of traffic splitting on load imbalancing and packet reordering of bursty traffic. In: IEEE international Conference on Network infrastructure and digital content, IC-NIDC, USA: IEEE Press. 2009, p. 236–40
Bennett JC, Partridge C, Shectman N. Packet reordering is not pathological network behavior. IEEE/ACM Trans Networking. 1999;7(6):789–98.
Laor M, Gendel L. The effect of packet reordering in a backbone link on application throughput. IEEE Netw. 2002;16(5):28–36.
Leung KC, Li VO, Yang D. An overview of packet reordering in transmission control protocol (TCP): problems, solutions, and challenges. IEEE Trans Parallel Distrib Syst. 2007;18(4):522–35.
Kandula S, Katabi D, Sinha S, Berger A. Dynamic load balancing without packet reordering. ACM SIGCOMM Comput Commun Rev. 2007;37(2):51–62.
Adiseshu H, Parulkar G, Varghese G. A reliable and scalable striping protocol. ACM SIGCOMM Comput Commun Rev. 1996;26(4):131–41.
Partridge C, Milliken W. Method and apparatus for byte-by-byte multiplexing of data over parallel communications links: Patent number: 6160819, Assigned to GTE Internetworking Incorporated, December 2000, Cambridge, Massachusetts, USA.
Boyd S, Vandenberghe L. Convex optimization, vol. 34. UK: Cambridge university press; 2004.
Khana AR, Bilalb SM, Othmana M. A performance comparison of network simulators for wireless networks. USA: Cornell University, Library. 2013;arXiv:1307.4129.
We thank the anonymous reviewers whose valuable comments and suggestions have significantly improved the presentation and the readability of this work.
Department of Mathematics and Computer Science, University of Dschang, Dschang, Cameroon
Vianney Kengne Tchendji & Yannick Florian Yankam
Computer Science Lab-Mis, University of Picardie Jules Verne, Amiens, France
Jean Frédéric Myoupo
Vianney Kengne Tchendji
Yannick Florian Yankam
JFM suggested this work. VKT and YFY carried out analysis and performed the experiments. VKT and YFY wrote the first draft of this work and worked for the revised version. JFM revised the first draft and worked on the revised version. In addition, all authors read and approved the work.
Correspondence to Jean Frédéric Myoupo.
Authors' information
VKT and YFY are with the department of Mathematics and Computer Science of the University of Dschang, Cameroon. JFM is with the Computer Science Lab. MIS of the university of Picardie Jules Verne, Amiens, France.
The following is a formal proof of theorem 1. stated in Section 5. This proof is similar to the one presented in [7], with the difference that we are showing the existence of multiple paths rather than one path as in [7].
Consider a routing tree Rt to a destination d as shown in Fig. 19; then d is the sink of Rt. The dotted links represent the possible existence of nodes between nodes connected through that link. Let (p1, q1) be an arc of Rt. We assume that this arc fails and that we must find a rerouting scheme without conflict. We note that (q1, p1) does not belong to Rt, which means that, for a given destination, the link failure problem can be treated as an arc failure. Without loss of generality in this proof, we will consider the problem of arc failure (p1, q1). p1 is the sink of a sub-tree Rt1 whose nodes are coloured in red. The other vertices in the tree are coloured in blue. We assume that all vertices are part of the tree Rt and that this assumption is true for any destination d.
Multipath existence proof
Based on [7], there exists a rerouting path connecting both the red part and the blue part. Inside the red part of sub-tree (p1, q1), under the assumption of two-link connectivity, there are at least two paths going from any node of that red part to an arc connecting both the red and blue parts. Thus, if traffic splitting is performed on any of those nodes, it will be possible to reroute the flow through at least two different paths until it reaches destination d. In other words, two paths μ and ν exist from p1 that visit some vertices of Rt1 and connect a vertex of Rt1, which is red, to a blue vertex. This connection can be done through arcs (i, j) and (k, l) offering sufficient spare or adequate additional capacity. These arcs act as a bridge between the red and blue part. For a red vertex of Rt1 affected by the failure, the descended traffic will first follow paths μ and ν inside the red part to p1. It will then follow the original routing tree from p1 to i or k and use arc (i, j) or (k, l) as a bridge to reach destination d through blue vertices located in the blue part of the routing tree. Due to dimensioning issues, traffic can also be rerouted into the blue part over at least two alternative paths until destination d. According to the rerouting choices, the rerouting paths associated with the link (p1, q1) failure in the red and the blue parts are without conflict.
Consider the n–1 arc failures of the tree; we choose different arcs (i, j) to connect the red part to the blue part. First, we prove the existence of a bridge connecting both parts; second, we demonstrate the absence of conflict between different paths. The arcs of the tree are numbered in decreasing order as we approach sink d. We choose arcs (i, j) and (k, l) in successive order of increasing numbers. Let (pr, qr) be an arc under consideration. We assume that we have chosen arc pairs ((i1, j1), (k1, l1)), ((i2, j2), (k1, l1)), ... ((ir − 1, jr − 1), (kr − 1, lr − 1)) for rerouting. pr is the root of tree Rt r . We consider two cases. The first case is with arc (ps, qs) as the failure, and s < < r. In this case, arcs (i s , j s ) and (k s , l s ) have their extremities i s and k s inside tree Rt r, and their extremities j s and l s are out of trees Rt s and Rt r , which are included in Rt s (see Fig. 16). Then, we choose arc (i s , j s ) as arc (i, j) and (k s , l s ) as arc (k, l) for tree Rt r . In the second case, there is no rerouting arc with this property. We choose any arcs (i r , j r ) and (k r , l r ) that connect Rt r to its complement. Each of these cases offers at least two path possibilities in the red part of the network.
Let us prove the absence of conflict in our rerouting scheme. By recurrence of the number of rerouted arcs, let us assume that we have already rerouted n − 1 arcs in the tree. Each rerouted arc has generated at least two rerouting paths. By the recurrence hypothesis, there is no conflict for the first n–1 reroutings. We must verify that the nth rerouting also has no conflict with the first n–1 reroutings.
There is no conflict by construction concerning the rerouting of the outside part of tree Rt r , which is the part in common with the classical routing. Although this nth rerouting uses the same arc as the previous ones do, in this part, the rerouting will follow the same path until destination d; therefore, there is no conflict. A conflict could occur if the splitting strategy is located in the blue part of the network. We verify no conflict exists for the part in which it goes in the opposite direction of the tree, which verifies that in the two cases cited above, there is no conflict. In the first case, when we have chosen arcs (i s , j s ) and (k s , l s ) in tree Rt r , there was no conflict in that part of the tree because Rt r would use the same arcs (i s , j s ) and (k s , l s ) as the passing bridges between its red part and its blue part (see Fig. 20). In the second case, the part climbing up the tree has nothing in common with the other rerouting arcs. In this case, there would exist (i s , j s ) and (k s , l s ). Therefore, there is no conflict in this case. We can conclude that the property remains true to the order n. The absence of conflict in our rerouting scheme can be proved by reoccurrence.
Recurrence hypothesis
Tchendji, V.K., Yankam, Y.F. & Myoupo, J.F. Conflict-free rerouting scheme through flow splitting for virtual networks using switches. J Internet Serv Appl 9, 13 (2018). https://doi.org/10.1186/s13174-018-0085-4
Restoration scheme
Spare capacity
Traffic splitting | CommonCrawl |
Learning Microbial Community Structures with Supervised and Unsupervised Non-negative Matrix Factorization
Yun Cai1,
Hong Gu1 and
Toby Kenney1Email author
Accepted: 2 August 2017
Learning the structure of microbial communities is critical in understanding the different community structures and functions of microbes in distinct individuals. We view microbial communities as consisting of many subcommunities which are formed by certain groups of microbes functionally dependent on each other. The focus of this paper is on methods for extracting the subcommunities from the data, in particular Non-Negative Matrix Factorization (NMF). Our methods can be applied to both OTU data and functional metagenomic data. We apply the existing unsupervised NMF method and also develop a new supervised NMF method for extracting interpretable information from classification problems.
The relevance of the subcommunities identified by NMF is demonstrated by their excellent performance for classification. Through three data examples, we demonstrate how to interpret the features identified by NMF to draw meaningful biological conclusions and discover hitherto unidentified patterns in the data.
Comparing whole metagenomes of various mammals, (Muegge et al., Science 332:970–974, 2011), the biosynthesis of macrolides pathway is found in hindgut-fermenting herbivores, but not carnivores. This is consistent with results in veterinary science that macrolides should not be given to non-ruminant herbivores. For time series microbiome data from various body sites (Caporaso et al., Genome Biol 12:50, 2011), a shift in the microbial communities is identified for one individual. The shift occurs at around the same time in the tongue and gut microbiomes, indicating that the shift is a genuine biological trait, rather than an artefact of the method. For whole metagenome data from IBD patients and healthy controls (Qin et al., Nature 464:59–65, 2010), we identify differences in a number of pathways (some known, others new).
NMF is a powerful tool for identifying the key features of microbial communities. These identified features can not only be used to perform difficult classification problems with a high degree of accuracy, they are also very interpretable and can lead to important biological insights into the structure of the communities. In addition, NMF is a dimension-reduction method (similar to PCA) in that it reduces the extremely complex microbial data into a low-dimensional representation, allowing a number of analyses to be performed more easily—for example, searching for temporal patterns in the microbiome. When we are interested in the differences between the structures of two groups of communities, supervised NMF provides a better way to do this, while retaining all the advantages of NMF—e.g. interpretability and a simple biological intuition.
Microbial communities
Subcommunities
Non-negative Matrix factorization
Microbes affect human physiology and global nutrient cycling, through the action of microbial communities [1–3]. A microbial community usually consists of hundreds or even thousands of different microorganisms [4, 5] which survive through the interaction with each other and environments and form metabolically integrated communities [6]. Although in some cases the abundance of a single species can have a big effect on the overall state of the community, for example some species of pathogens are believed to single-handedly cause illnesses, in many cases, differences between different types of microbial communities (for example, the communities in the guts of healthy and IBD people) are attributable to the overall structure of the community. It is therefore critical to devise models which take into account this overall structure.
Next generation sequencing has generated a large amount of microbial metagenomics data for the study of microbial diversity of different environments. These data consist of either marker-gene data (counts of OTUs) or functional metagenomic data, i.e. counts of reaction-coding enzymes. The OTUs or gene counts will be referred to as variables, and a sample will be also referred to as an observation in this paper. Considering the difficulty of collecting data and the large number of variables, the data always consist of hundreds or even thousands of variables but only a few observations, which means p≫n (p is the number of variables and n is the number of observations). In addition, many species are only observed in a few samples; thus, the data are highly sparse [7, 8]. This makes it challenging to apply classical statistical analysis methods.
Exploratory data analysis, such as principal component analysis (PCA) [9], on the original data matrix, is not appropriate for count data and has largely been replaced by clustering analysis or principal coordinates analysis based on UniFrac [10]. The UniFrac distance measures the abundance difference between two samples, incorporating phylogenetic tree information between the organisms. Although UniFrac is widely used, it has some drawbacks. One is that it does not address the heterogeneity between samples due to the different sequencing depths for different samples. Subsampling techniques are sometimes used to attempt to remedy this problem, but these do not fully resolve the problem and involve throwing away a large amount of information in the data and so are not recommended [11]. UniFrac based methods are only applicable to OTU data, not whole metagenome sequence data. Furthermore, UniFrac is an ad hoc method in that it is not based on a probablistic model and thus does not provide as much insight as an explicit statistical model-based approach.
Early work on the probabilistic modeling of microbial metagenomics data by [12] has represented the data as multinomial samples from an underlying multinomial distribution which in turn is generated from one of several Dirichlet mixture components. The hyperparameters of each of the Dirichlet mixture components have been assumed to follow a Gamma prior. This Bayesian probability framework seemed to be reasonable, though some assumptions such as choice of prior are arbitrary; however, the analysis results of the two examples based on this probability framework in [12] are not totally convincing in that the clustering results of lean and obese samples do not really show clustering patterns, and the method underperforms existing methods at classification. Another Bayesian probabilistic framework models the contaminated sample as a mixture of several known microbial community sources [13]. Bayesian Inference of Microbial Communities (BioMiCo) [14] is a more recent Bayesian hierarchical mixed-membership model. BioMiCo takes OTU abundances as input and models each sample as a two-level hierarchy of mixtures of multinomial distributions which are constrained by Dirichlet priors. This model identifies clusters of OTUs which it calls assemblages and then infers the mixing of assemblages within samples. Unlike the Gamma prior used in [12], the Dirichlet priors are used to control the sparsity of mixing probabilities for both levels of the multinomial distributions which results in more interpretable assemblages and a more parsimonious model.
The above probability frameworks have been mainly applied to the marker-gene data, but could easily be applied to whole metagenomic data as well. Another hierarchical Bayesian framework, BiomeNet [15], has been developed to specifically model the structure of metagenomic data.
A common theme in these Bayesian probability modelling frameworks is that each sample is modeled as a mixture of several typical "types". These typical types are mostly inferred from data by computational methods. The Bayesian framework provides a natural vehicle for fitting complicated models, but the resulting models are generally not easy to interpret because of the hierarchical structure, and the computation usually takes a very long time.
In order to provide an effective exploratory data analysis method that is suitable for both marker-gene and functional metagenomic data and is based on a probability model that can capture the subcommunity structure information and can address the issues of heterogeneity among samples, we explore the application of Non-negative Matrix Factorization (NMF) to microbiome data in a likelihood framework. NMF has been widely applied in many areas, such as image and natural language processing, and also has found many applications in computational biology [16]. More recently, it was applied in the ocean microbes data to investigate the biogeography of microbial function and its correlation to environmental distance [17]. It has also been applied to metabolic profile matrices [18]. This application is similar to the unsupervised NMF we used here. They focused on functional gene reads aggregated into pathways in that paper, rather than direct reads or OTU data. It also seems that they used NMF on the proportion data, rather than the original counts. This is theoretically not correct, as using the original counts allows the estimate to account for the fact that samples with greater sequencing depth give a more accurate estimate of the proportions. Conceptually, similar to the above Bayesian modeling frameworks, NMF models each sample as a mixture of different types. These types represent the structure of subcommunities. Instead of using a multi-level hierarchical structure as in BioMiCo [14] and BiomeNet [15], NMF uses one level of subcommunities as building blocks which makes the connection between the sample microbiome composition and the OTUs or reaction-coding enzymes more direct; this will provide better interpretability for the analysis results. In addition, NMF is a natural method to use for dimension reduction and feature selection in microbiome data. The commonly used unsupervised learning methods such as PCA and vector quantization (VQ) for reducing dimension and picking up the main features of the data usually result in linear combinations with negative coefficients which are hard to interpret naturally in this context. We want to find the main features (subcommunities) of the data and at the same time keep all the elements in these features non-negative. The features extracted by NMF are somewhat different from those identified by PCA or other variable selection methods; they are points in the high dimensional space which form a convex hull to envelop the observed points. Thus, they can involve much more than a single variable or a few variables. As demonstrated by Lee and Seung [19], NMF also tends to identify sparse features, and thus, each sample is expressed as a non-negative linear combination of a few sparse points (types), which further facilitates the interpretation of the results.
Like PCA or BiomeNet, NMF is an unsupervised method. Although NMF can extract the main features from the data, it cannot guarantee that these features are the best discriminant features to distinguish different classes. For example, if two classes are described by similar features, NMF will extract an average of these features to fit both classes, rather than separate features for the two classes.
For the purpose of identifying differences between different types of communities, we develop a supervised version of NMF in this paper. In cases where a single variable (or a small number of variables) is the main discriminant feature, this is often readily apparent from the types identified. In other cases, where the main differences are based on smaller-scale community-wide structure differences, NMF is able to identify these. In the real-data examples, we study some examples where the main differences between the classes come from a small number of key variables, and other examples where the main differences seem to arise more in the structure of the whole community. In these latter cases, the features extracted by NMF represent subcommunities of microbes that act as building blocks for the whole community.
There are many off-the-shelf supervised learning methods that can perform a classification directly on such data (a review is given by [8]). Since typically p≫n (the number of predictor variables is substantially larger than the number of data points), we need to choose methods designed for the p≫n case. Directly applying these classification methods often results in quite good classification. With some classification methods (for example random forest and the elastic net), variable selection is also possible. However, the selected variables are often difficult to map back to some discriminating community level features between classes, particularly if the true discriminating feature is not a single variable. Some classification methods (such as support vector machine, boosted trees or Neural network) can construct a very good classifier for such data, but without any possibility of interpretation and thus cannot provide any insight for the underlying community structure. BioMiCo [14] builds a classifier on the discriminant assemblages of the OTUs to predict the class labels with these assemblages showing the subcommunity structures. The model complexity of BioMiCo is controlled by the number of assemblages and the Dirichlet priors which are both pre-specified. These pre-specified parameters in principle can be adapted to the data through cross-validation on the training data, but running these Bayesian models needs a long time for each run which hurts the wide applicability of BioMiCo to different data.
Since we are interested in the community level features or systematic differences between different classes, we first use NMF to identify features from each class and then we build a classifier based on the weights distribution of each sample on the combined features from different classes. The features selected by this method will describe the original data well and also contain classification information. We can measure how well the features identified relate to the differences between different types of communities by looking at the prediction error of classifiers. As mentioned above, the purpose of NMF is to provide insights into the structural differences between different types of microbial communities, rather than to produce the most accurate classification possible. Classification is however a good measure to gauge the extent to which the subcommunities identified have important biological roles in the overall community structure.
Supervised NMF has similar model structures to BioMiCo but is fast to compute, and the only tuning parameters are the number of features that are extracted from different classes. Unlike BioMiCo which controls the sparsity of variables within features by the Dirichlet priors, the sparsity of NMF is decided by the number of features. With fewer features used in the model, each feature tends to be less sparse and conversely more features mean each feature is more sparse.
We will first give a review of NMF and its application to metagenomic data under the Poisson likelihood framework. We then describe the idea of supervised NMF based on unsupervised NMF, with the computation of the weight matrix over the combined features, followed by the method used to choose the tuning parameters for the supervised NMF. The details of the prediction method are given in the next subsection.
The NMF model
Non-negative matrix factorization [19] is a dimension reduction method for non-negative data. The idea is to represent each data point as a linear combination of non-negative features which are also computed from the data. Given a non-negative p×n matrix X, we approximate X by TW, where T is a non-negative p×k matrix, referred to as the type (feature) matrix, and W is a non-negative k×n weight matrix. Each column of X is approximated by a non-negative linear combination of the types (columns of T). Here, k is the number of types or features which determines the complexity of the model; thus, it is a tuning parameter in this context. Usually, k is chosen such that (p+n)k≪np, so that we reduce the dimension significantly.
In our analysis, X is the microbes data with counts of OTUs or genes. Specifically, X ij is the number of times the ith OTU or gene is observed in the jth sample. Thus, each feature (column) in T describes a subcommunity and each column in W contains the linear coefficients for the corresponding sample (column) in X. The whole community in a sample is thus approximated by a mixture of the subcommunities. For count data, such as our X, we model each element as an independent Poisson observation given its mean in the matrix TW. Note that because the Poisson mean varies between samples, the proportions of each OTU will exhibit the sort of overdispersion commonly seen. The idea is that there is a latent proportion of OTUs given as a weighted mean of the types, but the observation is a Poisson sample with this mean. We might argue that the sequencing procedure actually introduces more variance, so introducing overdispersion to the measurement distribution may have some value in future work. The covariance structure between the variables in X is implicitly given by the patterns in the type matrix T. The columns of the type matrix T are constrained to have the sum 1, and in this context, each column in T can be interpreted as the composition of OTUs or genes for each type. The different sequencing depths for the samples in X are absorbed in the weight matrix W. To compute T and W, we maximize the Poisson log-likelihood of the data [7],
$$L(T,W)=\sum_{i,j}\left(X_{ij}\log(TW)_{ij}-(TW)_{ij}\right), $$
In most literature (e.g. [20]) Euclidean distance is used as a criterion, assuming a Gaussian distribution for the observations instead of the Poisson.
There are a number of algorithms available for fitting NMF, for example [19, 21–24]. A thorough discussion of the algorithms available and their merits can be found in [25]. We used the R package NMF by Renaud [26], which implements the algorithm of Lee and Seung [19]. The choice of algorithm can, in theory, influence the results because the solution to NMF is not always unique, since the criterion depends only on the product TW. Usually, in practice, the non-negativity constraint will ensure that there is a unique solution.
Another challenge in applying NMF is to choose, k, the number of types. Generally, the log-likelihood increases with k increasing. We can plot the log-likelihood values versus k to find the "elbow point" after which the log-likelihood increases more slowly. This means the increase in the number of types will not add as much in modeling the data. Thus, we should choose the k value at the elbow point. In cases where there is no such elbow point, exploring multiple different k values by using our interactive data exploration tool, SimplePlot which is described later in this section, could help to find the k value based on which some meaningful data structure can be shown.
Supervised NMF
For a supervised learning problem, we have observations from different classes. Our objective is usually to find the differences between the structures from the different classes. We will approach this by separately identifying the subcommunities in each class first and then combine them into a single matrix of subcommunities. Each sample now can be expressed as a mixture of all these subcommunities. For example, if data X has g classes,
$$X=\left(X^{(1)},X^{(2)},\cdots,X^{(g)}\right) $$
where X (1),X (2),⋯,X (g) are g classes of observations. From X (i), we can calculate the non-negative type matrix T (i) and weight matrix W (i) (i=1,⋯,g) by NMF. To get the hidden structure of different classes in the whole data, we combine these type matrices together and denote this combined type matrix for the whole data as
$$T=\left(T^{(1)},T^{(2)},\cdots,T^{(g)}\right) $$
It is straightforward that T is non-negative since each T (i) is non-negative. For fixed T, to maximize the Poisson log-likelihood for the whole data X is equivalent to maximizing the Poisson log-likelihood for each sample, because the weight vectors in W associated with different samples are independent. Thus, calculating the weight matrix W can be reduced to performing a non-negative Poisson regression of each sample in X on T. The details of the procedure are given in Appendix 1.
Method for choosing the number of types
The number of types for each class of observations should be chosen to best describe its own class but not to describe other classes or noise. For discrimination purposes, the number of types for each class should be chosen to best separate the classes in combination with the number(s) of types in other classes. The most direct way to choose number of types for all classes is to find the model mis-classification errors on the validation sets for each combination of the numbers of types for different classes. However, the computation burden is heavy in such an effort. Thus, we propose to choose the number of types for each class separately first and try the selected combinations of number of types for different classes if the results are not clear-cut. Full details of the method used to choose the number of types, with some explanation are presented in Appendix 2. The basic method is to fit an NMF model on training folds from one class and compare the deviances on the test fold from that class with the deviances on a fold from each other class, using a Wilcoxon Rank-Sum test, then combining the test statistics for each fold into a single test statistic and estimating the standard deviation from the results for different folds.
In easy cases, the number of types to choose is clear-cut. Often, the number of types will be clear-cut for some classes, but not others. In these cases, we fix the number of types for the easy class(es) and use cross-validated error to choose the number of types for the other class(es).
For fixed T=(T (1),T (2),⋯,T (g)), we apply the non-negative Poisson regression algorithm on training data to calculate the training W and on test data to calculate the test W. After getting the W matrix, we have effectively reduced the dimension from p to k, in the sense that for the fixed T feature matrix, each observation is best approximated by the corresponding k vector in the W matrix. We can use an off-the-shelf supervised learning method to predict the class labels since k<n. Note that the sum of each column in the W matrix is the same as the sum of corresponding column in the X matrix, which means sequencing depth in microbiome data. When we perform a supervised learning, the transpose of the W matrix will be used as input for each observation. Geometrically this corresponds to projecting all the data into the space spanned by the vectors in the T matrix. The entries for different individuals on the same input vector of T are not comparable due to the different sequencing depth for the original data. We normalize the W matrix so that its column sum is 1 before performing a supervised learning method. This makes the entries in each row of W comparable and also makes it possible to show all the data in a plot. The normalization at this step is different from the normalization on the X matrix directly, because different sequencing depths result in heterogeneity in the original observations, and this has to be taken care of in the likelihood calculation and in the estimation of T and W.
We choose a suitable supervised learning method based on the graphical display of NMF results as described below. In the following sections, we most often perform a logistic regression on W. We choose logistic regression because our interactive exploration of the data suggests that a linear classifier is appropriate for this classification, and logistic regression is one of the simplest linear classification methods. The trained logistic regression model can then be used to do prediction on the test W.
Graphical display of NMF results
To properly display the NMF results we need to project down to two dimensions. A software package, SimplePlot, has been developed by one of the authors in this paper. It is available from Toby Kenney's website www.mathstat.dal.ca/~tkenney/SimplePlot/. Using SimplePlot, we can interactively choose a projection. Since the projection of the W-matrix is entirely determined by the projections of the types, the program allows us to manually move the positions of the types (represented by crosses on the figure) around the plane and watch how the relative positions of vectors from the W-matrix change. The advantage of using interactive software is that it is easier to identify non-linear separation if that is more appropriate for a particular dataset.
We apply both unsupervised NMF and supervised NMF on three datasets: whole metagenome sequences from faecal samples from 39 mammals (the mammal dataset) [27]; time sequences of 16S data from a range of body sites across two individuals (the moving picture dataset) [4]; and whole metagenome sequences from IBD patients (some Crohn's disease, some ulcerative colitis) and healthy controls (the Qin dataset) [28]. We gain some biological insight through the biological interpretation of the features and graphical display of the weight matrices from the NMF analysis. NMF is compared with UniFrac [10] and supervised NMF with two commonly used classification methods: Support Vector Machine (SVM) and Random Forest (RF). For the SVM, linear kernel, polynomial kernel and radial basis kernel are used. We use the R package e1071 [29] to apply SVM and the R package randomForest [30] to apply RF. The two tuning parameters for SVM—gamma and cost—are chosen by minimizing the average cross-validation error as the best combination for four values from 10−4 to 0.1 for the gamma and three values from 1 to 3 for the cost. We also compare the moving picture dataset results with BioMiCo [14] and the Qin dataset results with BiomeNet [15].
The mammal data
The mammal dataset [27] contains gut metagenomes extracted from n=39 mammals. The metagenomes include 1239 different types of genes (categorized by EC number). The mammals can be classified into four types: carnivore, foregut fermenting herbivore, hindgut fermenting herbivore and omnivore. There are 21 herbivores, 11 omnivores and 7 carnivores.
Unsupervised NMF results for mammal data
We calculate the log-likelihood for a range of k values and then observe how the log-likelihood changes with the k values. We choose the number of types for the mammal data as nine and apply unsupervised NMF on the data. A snapshot of the projected data on a plane is shown on the left panel of Fig. 1. From the plot, we can see that the carnivores can be totally separated from others and the other three types are mostly separated with a few overlapping points. The dimension of the data is reduced from 1239 to 9 in this analysis.
Left: Unsupervised NMF can totally separate the carnivores (blue) from the other three types of animals. The foregut-fermenting herbivores (red), the hindgut-fermenting herbivores (green) and Omnivores (yellow) are largely separated with a few mixed. Right: supervised NMF for separation of the carnivores from the herbivores. Both training carnivores (dark blue) and testing carnivores (light blue) are easily separated from the herbivores. The model was not trained to separate two types of herbivores, but a good degree of separation is shown for the foregut-fermenting herbivores (dark red for training and light red for testing cases) and the hindgut-fermenting herbivores (dark green for training and light green for testing cases). a A SimplePlot for unsupervised NMF, b A SimplePlot for supervised NMF
Supervised NMF results for mammal data
We first apply supervised NMF on the whole mammal dataset. The supervised NMF did not improve the classification significantly from the unsupervised NMF in this case. This is possibly because the metagenomic composition of omnivores is a mixture of that of herbivores and carnivores. In order to find the most important discriminant features between herbivores and carnivores, we apply supervised NMF only on the carnivores and herbivores from the mammal dataset [27]. So the data we use here contain metagenomic sequencing of fecal samples from 28 mammals: 7 carnivores and 21 herbivores. As the number of observations is small, we perform a sevenfold cross-validation on the whole data. Each time, we use six folds as training data and the remaining observations as test data.
We find two types that suitably describe both classes. Then, we calculate two types on each class using the training data and combine them to get the type matrix T. Fixing the type matrix, we obtain the weight matrices for the training cases and test cases by the non-negative Poisson regression method detailed in the appendix. We fit a logistic regression using the training data weight matrices and perform a prediction on the test data.
The projections of both training and test data in one fold of the sevenfold cross-validation, relative to the positions of four types calculated from the training data, are plotted in the right panel of Fig. 1. It shows that both training carnivores and test carnivores could be well separated from herbivores. Also, from the plot, we can see that although we did not supervise the distinction between the two types of herbivores, there is some reasonable degree of separation between these two classes.
Both the training and test errors are 0 in each fold of the sevenfold cross-validation data split. The prediction errors are all 0 meaning our algorithm could separate the two classes of mammals perfectly. The huge number of variables in the original data could be reduced to four features (two for each class), which means the classes of mammals can be easily determined by four features.
To compare the supervised NMF with support vector machine and random forest, we choose the best tuning parameters for SVM by the same sevenfold cross-validation as in supervised NMF. The best cost value for all kernels is 1. The best gamma value for polynomial kernel is 0.01, for sigmoid kernel 0.001 and for radial basis kernel 0.1. We also compare with Random Forest with the sparse variables removed. (We remove the 50% of the variables with lowest abundance in all samples.) The mean and standard deviation of prediction errors for models with these best tuning parameters on different folds are summarized in Table 1. The table shows that supervised NMF is among the methods which perform perfectly on the mammal data.
Comparison of test errors for support vector machine with linear kernel (SVM l), with polynomial kernel (SVM p), with sigmoid kernel (SVM s), with radial basis kernel (SVM r), RandomForest (RF), RandomForest with sparse variables removed (RFrm) and Supervised NMF
SVM l
SVM p
SVM s
SVM r
RFrm
Left Palm
Right Palm
[0.0461]
The first four rows are the prediction errors on the test data. The last two datasets are cross-validated errors with standard errors given in brackets on the line below. Best prediction for each dataset is presented in italics
Interpretation of the features in the mammal data
We map the features extracted separately from herbivores and carnivores to the metabolic pathways in KEGG. We find that most of the features from herbivores and carnivores involve the same metabolic pathways except that herbivores have more reactions in the biosynthesis of macrolides pathway, shown in Fig. 2. The most significant difference is found in one of the features of herbivores, which corresponds to the feature (cross) in the upper left corner of the right panel of Fig. 1. (This feature has been highlighted in purple on this plot.) Macrolides are a group of drugs belonging to the polyketide class of natural products. Macrolides are not to be used on non-ruminant herbivores: they rapidly produce a reaction causing fatal digestive disturbance [31]. This explains the results that 8 out of 10 herbivores which have highest weight on this feature are non-ruminants. These correspond to the 8 hindgut-fermenting herbivores (green) in Fig. 1 b. This shows that the inferred differences in the microbial communities of mammals relate well to the known different phenotypes for different mammals.
Biosynthesis of 12-, 14- and 16-membered macrolides. Reactions in red ellipses are those appearing in herbivores' features. Reactions in blue rectangles are those appearing in carnivores' features. Figure downloaded from KEGG [44], red ellipses and blue rectangles added by the authors
The moving picture data
The moving picture data [4] is the most detailed investigation of temporal microbiome variation to date. It consists of a long-term sampling series from two human individuals at four body sites: gut, tongue, right and left palm. Person 2 was measured for a longer time than person 1 (336–373 samples from each body site for Person 2 over a period of 443 days, compared to 131–135 samples from each site for Person 1 over a period of 186 days). The total number of variables (different OTUs) across all samples was more than 15,000. After removing all 0's, the total number of different OTUs for the gut data is around 3000, for the tongue data is around 2000, for the left palm data and right palm data are around 13,000. In spite of this extensive sampling, no temporal core microbiome was detected, with only a small subset of OTUs reoccurring at the same body site across all time points [4].
Unsupervised NMF results for gut data in the moving picture data
First, we apply NMF to the gut data. The gut data consists of 131 observations from person 1 and 336 observations from person 2. We find the number of types is 6 based on the plot of log-likelihood values versus number of types. And we see that the data from two individuals can be well separated—see the left panel of Fig. 3. It can be seen that the four types seemed to be used to mainly describe individual 2 and two types are mainly related to individual 1. It also shows that the observations for individual 2 are separated into two groups, the reason for which will become clear later in this paper.
Top row shows results from the gut dataset (with 6 types/coordinates used for the unsupervised methods); second row shows results from the tongue dataset (with 9 types used for the unsupervised methods); third row shows results from the left palm (with 6 types/coordinates); fourth row shows results from the right palm (with 6 types/coordinates). Blue points are from person 1; green points are from person 2. Left: unsupervised NMF; middle: supervised NMF on both training and testing data— darker blue and green points are testing data; right: UniFrac. a Gut NMF. b Gut supervised NMF. c Gut UniFrac. d Tongue NMF. e Tongue supervised NMF. f Tongue UniFrac. g L. Palm NMF. h L. Palm supervised NMF. i L. Palm UniFrac. j R. Palm NMF. k R. Palm supervised NMF. l R. Palm UniFrac
Supervised NMF results for gut data in the moving picture data
As the gut data is time based, we choose the first 70 time points' observations out of 131 observations of person 1 and the first 170 time points out of 336 observations of person 2 as training data. If the system changes slowly, we might expect samples from the same individual separated by only a short time might be more closely related. By choosing this separation into training and test data, we minimize the correlation between training and test data, ensuring that we only test the method's ability to pick up long-term microbial signatures of each individual. A 10-fold cross-validation with the training data split into 10 folds sequentially over time is applied for choosing the number of types and we find two types for each person is the best according to our method. This is an easy classification problem: based on two types for person 1, all deviance values for person 2 are much larger than the deviance values from person 1. A total separation is almost achieved for each fold of the cross-validation for any value of k (k≥2). This is the same based on two types for person 2. Thus, we choose two types for each person. Then, we fit a logistic regression model on the training W matrix and perform a prediction on the test data. The results are shown in the right panel of Fig. 3 and the prediction error is 0 for the test data.
We see that both training and test data are almost perfectly separated between the two individuals which means the distinguishing features of the gut data are included in a matrix consisting of four features. These four features contain sufficient information for classification and will be examined in detail in the interpretation section together with the features computed from the tongue data from these two individuals, because some interesting connections between the tongue data features and gut data features can be detected within individual 2.
Unsupervised NMF results for tongue data in the moving picture data
Next, we apply NMF to the tongue data. For the tongue data, there are 135 observations from person 1 and 373 observations from person 2.
It is not obvious what the appropriate value for the number of types should be by looking at the plot of log-likelihood versus number of types. We try NMF on nine types and ten types. Neither achieves good separation between samples from the two individuals. A SimplePlot for unsupervised NMF for nine types is shown Fig. 3 d as an example. Here, we see that samples from the two individuals are somewhat separated, but there is a lot of mixing: we cannot achieve a great classification from these features.
Although standard NMF works for the animal dataset and the gut dataset, it does not perform as well on the tongue dataset. The reason is that with unsupervised methods, the signal that is identified is not always the signal we are interested in. Using supervised NMF, we will be able to identify the different features for different classes. This allows us to more easily distinguish samples from different classes.
Supervised NMF results for tongue data in the moving picture data
For the tongue data, as above, we choose the first 70 time points' observations out of 135 observations of person 1 and the first 190 time points out of 373 observations of person 2 as training data. The remaining data are test data. We split the training data over time and perform a 10-fold cross-validation on the training data to find the number of types for both individuals. Our method shows that two types are appropriate for person 1, but is not so clear for person 2 (possibly suggesting nine types). Fixing two types for person 1, and comparing cross-validated error, we choose three types for person 2. This results in a test error of 0.04.
For illustration purposes, we show the SimplePlot of both training and test data based on two types for person 1 and three types for person 2 in Fig. 3 f. Through these five features, most of the observations in tongue data could be correctly classified according to which individual they come from.
Unsupervised NMF results for left palm data in the moving picture data
For the left palm data, there are 134 observations from person 1 and 365 observations from person 2. We try NMF for several different numbers of types on the left palm data. None of them achieve good separations between samples from the two individuals. A SimplePlot for unsupervised NMF for six types is shown in Fig. 3 g as an example. Here, we see that samples from the two individuals are somewhat separated with a considerable amount of mixing.
Standard NMF does not perform well on the left palm dataset. Using supervised NMF allows us to more easily distinguish samples from different classes.
Supervised NMF results for left palm data in the moving picture data
For the left palm data, we choose the first 67 time points' observations out of 134 observations of person 1 and the first 183 time points out of 365 observations of person 2 as training data. The remaining data are test data. Using the same procedure as for the tongue data, we find two types for person 1 and three types for person 2 can best separate the two individuals.
We show the SimplePlot of both training and test data based on two types for person 1 and three types for person 2 in Fig. 3 h. Most of the observations in left palm data could be correctly classified with a test error of 0.092.
Unsupervised NMF results for right palm data in the moving picture data
For the right palm data, there are 134 observations from person 1 and 359 observations from person 2. Similar to the results for the left palm, there is not a good separation between samples from the two individuals. A SimplePlot for unsupervised NMF for six types is shown in Fig. 3 j as an example. Here, we see that most samples from the two individuals cannot be separated using these features.
Supervised NMF results for right palm data in the moving picture data
For supervised NMF on right palm data, we choose the first 67 time points from person 1 and the first 180 time points from person 2 as training data. The remaining data are test data. We find two types for each person can best separate the two individuals.
We show the SimplePlot of both training and test data based on two types for each person in Fig. 3 k. Most of the observations in the right palm data could be correctly classified according to which individual they come from with a test error of 0.179.
Comparisons with other methods
These datasets have also been extensively analysed by BioMiCo [14]. To enable comparison with their results, we reran our analysis with individual months as training data. (Rerunning BioMiCo with our splits into training and test data is infeasible due to its excessive running time.)
We train supervised NMF on different months and predict the identity of the two individuals of all other months. The number of types used for each dataset is the same as we mentioned above. Even though a smaller number of samples are used to train the model, we still get very high-classification accuracy. The accuracy is between 98.1 and 99.8% when using the gut dataset and 85.4 and 92.9% when using the tongue dataset. This is almost the same as BioMiCo's accuracy, between 98.6 and 99.3% for the gut dataset and between 85 and 93% for the tongue dataset. However, we also get very high accuracy when using the palm data, between 88.9 and 93% when using left palm dataset and between 77.8 and 83% when using right palm dataset. This is significantly higher than BioMiCo's results (40 to 75%). Palm data are more challenging because human palms are exposed to the external environment. The comparison with BioMiCo concludes that the supervised NMF is not only efficient in terms of computation but also better at finding discriminant features of individuals even with very noisy data.
We also compared supervised NMF with support vector machine, random forest and random forest with sparse variables removed on this moving picture data. We split each body part's data in the same way as that in supervised NMF. A 10-fold cross-validation is applied to the training part to calculate the best tuning parameters. Models with the best tuning parameters then are trained on the whole training data and used to predict the test data. The results are summarized in Table 1. The comparison for moving picture data shows that supervised NMF gives comparable or better classification results than other methods except for the left and right palm dataset. For these datasets, random forest on the most abundant OTUs performed better than NMF. For the left palm data set, random forest on all variables performed better than NMF.
UniFrac is a widely used unsupervised method. To compare the separation of two individuals, we project the samples on principal coordinates of the unweighted UniFrac distance matrix (based on rarefied samples) in the right-hand column of Fig. 3 with the numbers of the principle coordinates equal to the numbers of types we have used for each case, presented using SimplePlot. We can see a clear separation of the two individuals from the gut dataset. Plots of tongue data and palm data show separations to some degree, but not as clear as in our unsupervised NMF plots (left panels in Fig. 3). This shows NMF is an alternative and possibly more useful data exploratory method for such data. In addition, NMF has a natural interpretation in terms of mixtures of communities, but the results from UniFrac are hard to interpret, as they cannot show what causes the grouping effects or where the differences in microbime composition lie.
Interpretation of the results
To examine the main aspects of the features identified, we plot the relative abundance of OTUs for different features in Fig. 4. The feature vectors are of the same dimension as the original observations. A natural side effect of NMF is that the resulting feature vectors are usually sparse. The feature vectors consist of non-negative elements with each vector sum equal to 1. The non-zero values can be interpreted as the percentages of the OTU composition in a particular feature. To get a better illustration, we use a cut-off of 3% for each feature vector in Fig. 4. That is, only those OTUs with above 3% composition in at least one feature are included in the plot.
Outstanding OTUs in features of moving picture data: The light and dark red bars are two features from person 1 and the blue bars are features from person 2. The OTUs from the same class are in the same block which is labeled by their class name and the bars are labeled by the genus of the OTUs. The two unlabeled bars in left palm data are the same OTUs with these unlabeled bars in the right palm plots. They are two different unclassified classes in Cyanobacteria phylum. a Outstanding OTUs in features of gut data. b Outstanding OTUs in features of tongue data. c Outstanding OTUs in features of left palm data. d Outstanding OTUs in features of right palm data
Figure 4 a shows the main OTUs for the gut data. We find only 17 out of more than 3000 OTUs are larger than the cut-off of 3%. Among these major OTUs, the two features within each individual bear some similarities. But the features between two different individuals are quite different. This is reflected by the fact that several of the most common OTUs in individual 1's features are not present in individual 2's features and vice versa. Since each individual's data can be best represented by his/her own two features and their two features are largely different, this partially explains why the classification of two individuals based on the gut data is an easy problem.
Figure 4 b shows the main OTUs for the tongue data. There are only around 20 OTUs in tongue features above the cut-off of 3%. Again the type matrix of the tongue data is highly sparse. Unlike the features of the gut data, the features of the tongue data for these two individuals are more similar. By looking at the compositions of the most dominant OTUs in each feature, we can easily see similarities between person 1's type 1 and person 2's type 1. Also person 1's type 2 is similar to person 2's type 2 for OTUs in the classes Fusobacteria and Gammaproteobacteria and similar to person 2's type 3 for OTUs in the class Bacilli. This suggests that there are similar variation patterns between the two individuals, with the same groups of OTUs increasing or decreasing together. Naturally, the classification for the tongue data is a harder problem.
Figure 4 c shows the main OTUs for the left palm data. Seventeen out of more than twelve thousand OTUs in the left palm features are larger than the cut-off of 3%. Among these major OTUs, the two features within individual 1 have OTUs present and absent together with some variations in their values. The features within individual 2 show a different pattern with each OTU mainly represented by one of the three features. Left palm features within each individual are quite different because the palm's microbial environment is more variable. Features between individuals are also quite different for most of these major OTUs. Several OTUs in individual 2's features are not present in individual 1's features. This may explain why the left palm data can achieve high classification accuracy but lower than the gut data.
Figure 4 d shows the main OTUs for the right palm data. There are only 13 OTUs in right palm features above the cut-off of 3%. The patterns of features within each individual are similar to their left palm data. But features between individuals are more similar except differences in the two unlabeled OTUs. This explains the difficulty in separating two individuals from the right palm data. We also find major OTUs in the right palm features are nearly all present in the left palm features. We do not find the same situation in gut and tongue features. This may be because an individual's left and right hands are usually exposed to the same environment. It could also be caused by contact between the two hands.
In many of the examples, NMF can act like a variable selection method—identifying individual reactions or OTUs which show different abundances in the two groups of samples. However, in the moving picture tongue dataset, we do not obtain such good classification by looking at individual OTUs. Instead, we look more deeply at the community structure identified by NMF. By examining community-level differences, we were able to classify the individuals with a very high degree of accuracy. We now look in more detail at the communities involved, in an attempt to understand why unsupervised NMF was less effective in this case, and why supervised NMF was able to resolve this problem. This also demonstrates more of the range of interpretability offered by NMF. In addition to highlighting individual OTUs or reactions that differ between the two classes, it is able to isolate bacterial subcommunities from which the microbiome is built up and offer insights into the different structures of these communities.
Figure 5 shows the profiles of the types extracted from the two individuals, with graphs of abundance of each genus in that type. For individual 1, we see that type 1 contains higher abundances of Neisseria, Haemophilus, Porphyromonas, Fusobacterium and the unclassified genus from the Pasteurellaceae family, while type 2 includes higher abundance of Streptococcus, Prevotella, Rothia, Actinomyces and Veillonella. This may well be associated with the action of Porphyromonas. One species, Porphyromonas gingivalis, has been shown in [32] to manipulate the host immune system, allowing pathogens to colonise the community. While the OTU from the genus Porphyromonas in this dataset is unclassified at species level, it could have a similar effect to the studied species P. gingivalis. This would seem consistent with type 1 having higher levels of various Proteobacteria and Fusobacteria closely related to known pathogens. When we look at the features for individual 2, we see a similar picture, with types representing varying levels of Porphyromonas. Again, we see with increased Porphyromonas, we have an increase in Neisseria, Haemophilus, Fusobacterium, and the unclassified genus of the Pasteurellaceae family, and a corresponding decrease in Streptococcus, Prevotella, Actinobacteria and Veillonella. Type 2 may show that the effect of Porphyromonas is non-linear with Prevotella actually increasing in abundance with low levels of Porphyromonas.
Major genera for tongue feature matrix. The light and dark red bars are two features from individual 1 and the blue bars are features from individual 2. Each bar is labeled by the name of the genus or family
We also examine the types in the absence of Porphyromonas (type 2 for individual 1 and type 3 for individual 2). For both individuals, we see that these types are dominated by Streptococcus, Prevotella and Veillonella. However, Fig. 4 b shows the differences between these types. We see that individual 2 has more Actinomyces and a different distribution between OTUs within the genus Veillonella. Similarly, there are subtle differences between the types with high abundance of Porphyromonas (type 1 for both individuals). Individual 1's type 1 has higher levels of Streptococcus than individual 2's. This might be partially explained by the use of three types to model individual 2, allowing separate types to model both high and low levels of Streptococcus in cases with high levels of Porphyromonas. However, in Fig. 4 b, we see the presence of higher abundance of a second OTU from the genus Neisseria in individual 2's type 1. This cannot be explained by the different numbers of types used to analyse the two individuals. Supervised NMF is able to identify these subtle differences and use them to identify the individuals, even in situations where the large-scale community structure varies a lot between samples within each individual.
We also consider the idea that the types correspond to communities of microbes. When we look at the type without Porphyromonas, we can see the makings of a community structure, with a number of microbes (such as Prevotella, Streptococcus and Actinobacteria) that metabolise glucose into pyruvate, which is later metabolised into lactate, and other microbes such as Veillonella which metabolise lactate.
Temporal dynamics
To investigate the temporal dynamics of the four body sites' microbiomes, the weight matrices for the gut data and the tongue data are plotted in Fig. 6. When we apply NMF to person 2 with three types for the gut data (see the upper panel of Fig. 6), there is a clear shift at around 2009-08-14. This timepoint is highlighted in Fig. 6. For the gut weight matrix, the dominant weight is initially type 2 and changes to type 3 after this time. For person 2's tongue data, this shift is not very clear when we use only three types. However, with four types, we can identify a more apparent shift in their weight matrix time series plots. For this data one more feature can bring out more details in the variation of the data. In the lower panel of Fig. 6, the weight matrix time series plots for the tongue data relative to these two features show that type 1 is consistently more represented than type 3 in the early part of the study although not always dominant due to the effects of types 2 and 4; type 3 is more represented than type 1 after the changing point (highlighted on the plot). The shift occurs first in the tongue weight matrix and then can be detected about 4 days later in the gut weight matrix. This suggests that some significant change has taken place in person 2's system at around this time and that the change has influenced both the gut and the tongue microbiomes.
Gut and tongue weight matrix time series plot for person two. The top plot shows the gut weight matrix on the second type (red line) and third type (blue line) from NMF with 3 types. The bottom plot shows the tongue weights on the first type (red line) and the third type (blue line) from NMF for tongue data with 4 types
In order to compare the changes which we have identified as taking place in these microbiomes, the distributions of different phyla and classes of OTUs in each feature are presented in Fig. 7. The top features in this plot are the ones that are more represented in the earlier part of the data (i.e. type 2 for the gut data, and type 1 for the tongue data). The bottom features in this plot are those that are more represented in the later part of the data.
Class and phylum proportions in gut and tongue type matrices. The left panels contain two types from person 2's gut data and the right panels are for his tongue data. The top plots present the dominant types at the beginning in the time series plot. The bottom plots present the dominant types after the shift in the time series plot. Similar colours in classes are from the same phylum
Figure 7 shows a similar shift of composition between the two features for both gut and tongue. In both cases, the type which was more represented in the earlier part of the study has a lower proportion of Bacteroidia and a higher proportion of Clostridia. The proportion of Bacteroidia increases and the proportion of Clostridia decreases for both representative features of gut and tongue data in the later part of the study. The consistency of these changes between the two datasets gives further support to our conjecture that this represents a systematic change at this time. The differences between the types are more pronounced in the tongue data. This could be because the tongue is more exposed to external influences, so its microbiome may be more variable. It might also be because we were using four types to model the tongue data and only three for the gut data. Fitting more types gives the types more room to spread out, allowing for more extreme types and amplifying the differences between the fitted types.
We see that the changes shown in Fig. 7 are consistent with the earlier interpretation of the types in Fig. 5. We used four types here to model the microbiome, but we can see in Fig. 7 that the dominant type after the transition includes much higher abundances of Bacteroidetes (including Porphyromonas and Prevotella, which has been associated with Periodontal disease [33]) and Proteobacteria (including Neisseria and Haemophilus) and lower levels of Firmicutes (including Streptococcus and Veillonella) and Actinobacteria (including Rothia and Actinomyces). Note that the types in Fig. 5 are fitted from the training data, which is entirely before the state change in person 2.
Having identified the state change using NMF, we ask whether NMF was a necessary tool for identifying the change. First, we compare a naive examination of the composition of the microbiome by class. Figure 8 shows the smoothed proportion of each class over time in person 2's gut and tongue microbiomes. We see that there are no clear changes in composition at this level, indicating that this is not an obvious change to identify.
Moving average of class proportions in gut and tongue observations
For comparison, we also use UniFrac and PCoA for person 2's gut and tongue data. We see that the first three principal coordinates for the gut data and the first four coordinates for the tongue data do not reveal this change. It is only when we examine the 4th principle coordinate for the gut data and the 5th principle coordinate for the tongue that we are able to detect the changes. The difficulty of finding this explains why this pattern was not found in the many previously published analyses of these data. This is made more difficult by the common practice of examining only the first three principal coordinates. It is possible to find the pattern using UniFrac, if one knows what to look for, but NMF certainly makes the pattern much easier to find.
The Qin data
The Qin dataset [28] contains human gut metagenome samples extracted from 99 healthy people and 25 IBD patients. The data include 2804 different reactions.
Unsupervised NMF results for Qin data
We choose the number of types for the Qin data as six and apply unsupervised NMF on the data. A projection of the data onto a plane is shown on the left panel of Fig. 9. From the plot, we can see that about 19 of the IBD patients can be separated from healthy people. The separation is similar to the results of BiomeNet [15]. The plot shows that two of these features are more related to IBD patients and the other four more related to healthy people. This is consistent with what we find using supervised NMF.
Left: unsupervised NMF based on 6 types. The blue points are from IBD patients and the green ones are from healthy people. Right: supervised NMF on both training and test data. The blue points are training data from patients, and green points are training data from healthy people; the dark blue points are test data from patients, and the dark green points are test data from healthy people. a Unsupervised NMF. b Supervised NMF
Supervised NMF results for Qin data
The sample size of patients is much smaller than the sample size of healthy people. So we perform a classification giving the patients a weight of 4 to balance the class sizes. (For supervised NMF, these weights do not affect the fitted matrices T and W, only the classifier applied to the weight matrix W.) This means that the classifier that assigns all samples to one class will have an accuracy of about 50%. We perform a 10-fold cross-validation on the whole data. Each time, we use nine folds as training data and the remaining observations as test data.
We find two types are enough for patients and four types for healthy people. We perform supervised NMF and fit a logistic regression using the training data weight matrices (with patients given weights of 4) and perform a prediction on the test data. The average of the weighted prediction error over the 10 folds is 0.233 with a standard error of 0.0487.
The projections of both training and test data in one fold of the 10-fold cross-validation are plotted in the right panel of Fig. 9. It shows a quite good separation between these two groups. The classification is not perfect, but is an improvement upon previous methods, such as BiomeNet [15].
The comparisons with support vector machine and random forest methods are summarized in Table 1. The dataset is split to the same 10 folds as supervised NMF. The best parameters are tuned by a 10-fold cross-validation on the whole dataset. The best cost parameter in SVM function is 3 for radial basis kernel and 1 for other three kernels. The best gamma parameter is 10−4 for radial basis kernel, 0.1 for polynomial kernel and 0.001 for sigmoid kernel. No method performed significantly better than supervised NMF.
The six type vectors are highly sparse with each vector sum equal to 1. We use a cut-off of 0.5% for each type to find the distribution of each type over the major reaction groups. Here, each reaction group includes the different reactions that correspond to the same enzyme-coding gene; thus, each category can also be understood as corresponding to one enzyme-coding gene. The type distribution over 17 enzyme-coding genes or reaction groups is shown in Fig. 10. We can observe that the IBD Type 2 is quite different from other types, with large abundance on the fourth and fifth enzyme-coding genes and that both IBD types have weight zero on the second enzyme-coding gene. Each individual's metagenome profile is expressed as a linear combination of these six types; the weight distribution over each type is shown in Fig. 11, where the top part of each bar presents the distribution of the weights for healthy individuals for the corresponding type, and the bottom part of each bar is for the weight distribution of IBD patients with each patient counted as four times to make the results comparable to the healthy individuals. From Fig. 11, we can see the IBD patients mainly have non-zero weights on IBD Type1, IBD Type2, Healthy Type 1 and Healthy Type 2, and healthy individuals mainly have non-zero weights on Healthy Type 1, Healthy Type 2 and Healthy Type 4. It seems that the IBD Type 2 typically represents a group of IBD patients and Healthy Type 2 represents a group of healthy individuals with these two types distributed very differently over the enzyme-coding genes shown in Fig. 10.
Qin data: the distribution of each type over major enzyme-coding genes: IBD Type 2 typically represents a group of IBD patients and Healthy Type 2 represents a group of healthy individuals with these two types distributed very differently over the enzyme-coding genes
The weights distribution over each type for heathy individuals (top for each bar) and IBD patients (bottom for each bar): the IBD patients mainly have non-zero weights on IBD Type1, IBD Type2, Healthy Type 1 and Healthy Type 2, and healthy individuals mainly have non-zero weights on Healthy Type 1, Healthy Type 2 and Healthy Type 4
According to Fig. 10, the first three reaction groups contribute more to healthy types and the fourth and fifth reaction groups contribute more to IBD patients (mainly to IBD Type 2). Reactions in the first group are all in macrolide biosynthesis. Macrolides are protein synthesis inhibitors and can be used as an antibiotics treatment of inflammatory diseases including inflammatory bowel disease [34–36]. The second reaction group is involved in polycyclic aromatic hydrocarbon degradation and the third group is in carotenoid biosynthesis. Polycyclic aromatic hydrocarbons (PAHs) are one family of ubiquitous environmental toxicants. This family has contributed significantly to the development of colorectal cancer (CRC), a disease highly linked to IBD [37]. Carotenoids can enhance the human immune system's effectiveness [38]. As IBD is a kind of autoimmune disease, this could explain why these two compounds are lower in IBD patients' features. The fourth group in IBD Type 2 is involved in ascorbate and aldarate metabolism and the fifth group in amino sugar and nucleotide sugar metabolism, fructose and mannose metabolism, glycolysis and gluconeogenesis and additional pathways. These are concordant with BiomeNet's findings in subnetworks 38, 64 and 73. Comparing our reaction groups with the three subnetworks, we notice that reaction group 4 can be found in subnetwork 64 and group 5 has some overlaps with subnetwork 38 and subnetwork 73. These three subnetworks were discovered to have a larger contribution to IBD samples than healthy ones.
Simulation based on NMF
We perform simulations in this section to evaluate the performance of our proposed method with regard to the number of types selected and prediction accuracy. We use types estimated from the Qin data [28] to do the simulation. We simulate data according to our proposed model. The data follows a Poisson distribution with mean (TW) ij . To generate these data, we first generate the mean TW.
The mean is a linear combination of different features (different columns of T). We fix T to be the features obtained by applying NMF to the two classes in the Qin dataset [28].
We generate the W matrix by generating each entry from a uniform distribution on [ 0,1], then normalizing the column vectors so that the column sums of W are equal to the column sums of the IBD data.
The product TW gives us the mean, and we add four levels of noise to the product TW. The noise is normally distributed with mean 0 and four different standard deviations, to study the effects of different signal-noise ratios (SNR).
$$SNR=+\infty : {sd}_{0}=0 $$
$$SNR=4 : {sd}_{1}={sd}(T)/4 $$
$$SNR=1 : {sd}_{3}={sd}(T) $$
Here, the sd(T) is a vector of standard deviations for each row of T. This is a vector of length p (the number of genes or OTUs) which measures the variability for each gene or OTU across different features in T.
The column of TW plus the noise is the Poisson mean we use in the simulation. Each element of X is generated following an independent Poisson distribution with the mean given by the mean matrix described above.
We simulate data with number of types equal to 2, 5, 10 for class 1 and 3, 6, 9 for class 2. So the number of different combinations is 9 in total. They are 2&3, 2&6, 2&9, 5&3, 5&6, 5&9, 10&3, 10&6, 10&9. Considering the different noise levels, we have 36 scenarios. For each scenario, we simulate 25 replicates. In each replicate, we simulate 200 observations for each class. Then, we separate the data into two parts: the first 200 observations (100 from each class) as the training data and the other 200 as the test data.
We choose the number of types from the training data using a 10-fold cross-validation. After the number of types is chosen, we perform a prediction on the test data using the trained logistic regression model on the training data based on the chosen number of types for each simulated data set.
The NMF, RF and SVM prediction errors are shown in Tables 2, 3 and 4 respectively for different noise levels. We find when the true numbers of types get larger, the NMF prediction errors tend to increase but the RF prediction errors tend to decrease. That may be because we have more accurately estimated the number of types in the cases when the true numbers of types are small. But overall, the prediction errors are quite small for all cases which means our supervised NMF method works well in prediction. NMF performs better in prediction than RF when number of types is small and better than SVM in all scenarios.
NMF mean prediction test errors for 25 data sets with the standard errors for the mean prediction errors (mean/SE)
class 1 ∖class 2
+∞
0.0002/0.001
0.004/0.0066
0.0104/0.0126
The rows are the true number of types for class 2, and the columns are the true number of types for class 1
RF mean prediction test errors for 25 data sets with the standard errors for the mean prediction errors (mean/SE)
SVM mean prediction test errors for 25 data sets with the standard errors for the mean prediction errors (mean/SE)
0.13/0.0167
Table 5 summarizes the results of the number of types chosen. It shows that the algorithm tends to output slightly larger values than the true number of types in most scenarios, but the true numbers of types mostly are within one standard deviation of the mean of the chosen number of types. Note also, the number of types are chosen only by performing the Wilcoxon Rank-Sum test for each class (see the appendix), the results are not modified through optimizing the classification results based on combined types.
Simulation summary of the estimated numbers of types
For example, the first entry 2/3.4 means when the true number of types is 2 for class 1, 3 for class 2 and SNR=+∞, the mean numbers of types our method chooses are 2 for class 1 and 3.4 for class 2
Table 5 also shows that in most replicates, when the noise level becomes higher, the difference between the mean and the true number of types will increase. Nevertheless, these results demonstrate that our method is quite effective in finding the appropriate number of types.
Further simulation results (not shown in this paper) have shown that when we apply NMF with the true number of types on the simulated data, the features computed from the data can match very closely with the true features that were used to generate the data. Applying NMF with the wrong number of features can recover a space with the true features embedded in it. The study of consistency of the NMF method is not a trivial topic and deserves further research.
Simulation with outliers
We designed this simulation to measure how our method performs when the data contain outliers. We perform this simulation based on data generated in the last section. We use the generated data of scenarios 2&3 types, 5&6 types and 10&9 types, with SNR=1. We generate outliers by mislabeling the class of observations in the training data. We run simulations with 5, 10 and 20% of observations in the training data mislabeled. We used the same procedure as in the previous section to calculate the prediction errors. The results in Table 6 show that while RF is more robust in this simulation, NMF still predicts fairly well when there are outliers in the data.
Mean prediction test errors and the associated standard errors (mean/SE) for simulation with outliers
Number of types
Outliers proportion
2&3 types
10&9 types
Simulation with zero inflated weight matrix
In the previous simulations, we generated the weight matrix of the Poisson mean from the uniform distribution. The sparsity of the generated datasets is around 24%, which is less than is typically observed in practice. We therefore use a Dirichlet distribution with all parameters 0.005 for the weights, in order to generate zero-inflated data. This results in a sparsity of around 39%. We follow the same steps from the first section of the simulations, to generate 36 scenarios and 25 replicates in each scenario. The prediction errors and the associated standard errors of NMF, RF and SVM are shown in Tables 7, 8 and 9. The results show that NMF and SVM are robust when the data become more sparse. RF performs worse in this simulation than in the original simulation.
NMF mean prediction test errors and the associated standard errors for simulation with zero-inflated weight matrix
RF mean prediction test errors and the associated standard errors for simulation with zero-inflated weight matrix
SVM mean prediction test errors and the associate standard errors for simulation with zero-inflated weight matrix
0.207/0.01
Simulation based on Dynamic Ecology Models
The interpretability of NMF is based on the assumption that the microbial community can be interpreted as a mixture of subcommunities. In this section, we study the question of whether realistic community dynamics can give rise to this assumption. Current knowledge of the community dynamics of the microbiome is woefully inadequate; with a few available suggested models, none of which fit the data very well. In this section, we simulate community dynamics under a Holling type II model [39], given by
$$\frac{dM_{i}}{dt}=M_{i}\left(r_{i}(1-c_{i}M_{i})+\sum_{j\ne i}\frac{b_{ij}a_{ij}M_{j}}{1+a_{ij}{T_{H}}_{ij}M_{j}}\right) $$
Here, for OTU i, r i is the intrinsic growth rate; c i is the coefficient of negative intraspecific interaction, which is the inverse of the carrying capacity of this OTU in isolation; a ij is attack rate; T H ij is handling time; and b ij is the interaction coefficient between OTUs. When a ij T H ij M j is very small, the 1 term dominates the denominator, so the derivative approximately follows generalised Lotka-Volterra type dynamics for these OTUs; when a ij T H ij M j is large such that it dominates the denominator of the fraction, then the term becomes approximately \(\frac {b_{ij}}{{T_{H}}_{ij}}\), and the influence of OTU j on OTU i is limited by this quantity.
The reason we choose the Holling Type II model, rather than the more commonly used generalised Lotka-Volterra dynamics is that the Holling model seems to have more capacity for overlapping communities to coexist without influencing one-another excessively, because the Holling type II model incorporates a limit on the effect of one OTU on another. This makes intuitive sense when the interaction consists of one OTU providing some metabolite to another OTU. We expect the growth of an OTU to be limited by multiple metabolites, and when one metabolite is used up, increasing the supply of another metabolite would not be expected to have a significant increase on the growth rate. This limit on the effect allows overlapping subcommunities to mix in an approximately linear way. We anticipate that a detailed model based on flux balance equations could be developed which would both model community dynamics more accurately and follow the assumptions behind NMF more closely. However, developing new models for the dynamics of microbial ecology is beyond the scope of this paper.
We use the fixed network structure shown in Fig. 12 for the simulations. We can see that the network used is made up from three overlapping clusters (M1– M10, M9– M18 and M17– M26). The intuition is that for each cluster there is a metabolic subcommunity, representing the stable state of the system when restricted to that cluster, and that the overall community is made up as a mixture of these subcommunities. For each black link in the network in Fig. 12, we simulate the species interaction coefficient b ij as following a uniform distribution between 0 and 0.008. For the blue links in the network, we simulate b ij from a uniform distribution between −0.002 and 0.008, and for the red ones, we simulate b ij from a uniform between −0.08 and 0. We set T H ij around 10−5 by generating \(\frac {1}{{T_{H}}_{ij}}\) from 105×beta(5,1). This scale of T H ij allows the Holling type II dynamics to take effect—if T H ij is much larger, the effect of one OTU on another is limited, so the OTUs become almost independent, losing the subcommunity structure. If T H ij is much smaller, then the interspecific interaction term is approximately linear, so we get gLV dynamics, which are less suited for overlapping clusters. We allow r i and c i to vary between samples in each dataset, with r i simulated from a uniform distribution between 0 and 1, and \(\frac {1}{c_{i}}-1\) simulated from 99×beta(1,2). The idea is that these parameters are related to the suitability of the environment for OTU i, so different samples would have different values. The other parameters are kept fixed for all samples, since these represent the inherent ability of these OTUs to interact, so should not be expected to vary greatly between environments. We simulate 10 values of the parameters b ij for the given network. For each of these simulated values, we simulate one data set with 50 samples, one with 100 samples and one with 200 samples. To construct each sample, we simulate values of r i and c i for each OTU and simulate the dynamics from Eq. 1, using 1,000,000 iterations with a stepsize of 0.001.
Network used for community dynamics simulations. The red nodes represent OTUs from cluster 1, blue nodes are OTUs from cluster 2, green nodes are from cluster 3 and yellow nodes are isolated OTUs not in any subcommunity. The purple and cyan nodes are overlapping OTUs of cluster 1 and cluster 2, or cluster 2 and cluster 3, respectively
For each dataset, we apply NMF with four types. We compare the fitted types with the known subcommunities, both visually and using a formal loss function.
We also calculate the co-occurrence networks [40] of the simulated data and compare the results with NMF. The co-occurrence network is produced by calculating the correlation of each pair of nodes in the simulated data. A null distribution for each pair is generated by permuting the abundance of one of the pair and re-calculating the correlation. The resampling is performed 1000 times, and the distribution is used to calculate p values. The p values are then corrected using Benjamini-Hochberg [41] to control the false discovery rate.
Neither NMF nor co-occurrence networks are designed exactly to identify the network structures or parameters of the Holling model. However, from the network structure in Fig. 12, we see that the network can be reasonably decomposed as containing three large subcommunities (shown in red, blue and green in that figure, with nodes in multiple subcommunities coloured in mixed colours, purple and cyan). Both NMF and co-occurrence networks have some capacity to recover these subcommunities. For NMF, these subcommunities would be recovered as the most abundant OTUs in a type, while for co-occurrence networks, they would arise as connected components in the networks. We can attempt to compare the extent to which the two methods succeed at recovering these subcommunities. This extent is somewhat subjective. For an NMF type, we form clusters of OTUs as the OTUs with abundance above some threshold in that type. For co-occurrence networks, we form clusters as the connected components of the network at a certain significance level. We then choose unions of these clusters to recover the subcommunities used for simulation. To allow comparison, we have defined the following loss function for each true subcommunity to measure how far each such union is from the true subcommunity.
For each OTU in the subcommunity, but not in the union of clusters, the loss is 1.
For each OTU in the union of clusters, but not in the subcommunity, the loss is 1.
For each additional cluster after the first in the union, the loss is 1.
For example, the loss for the red, green and blue subcommunities in Fig. 13 are respectively 1, 1 and 5, and the loss for the red, green and blue subcommunities in Fig. 14 are respectively 6, 7 and 9 (the blue community being best approximated by a singleton connected component). The abundance thresholds in the NMF type and the significance levels in the co-occurrence networks are chosen to minimise the total loss for each subcommunity. We allow different significance levels for different connected components here. Note that the example calculation above was meant to demonstrate how the loss function is calculated for a given set of clusters, based on the single figure, not on clusters with different p values, so the values calculated may not be the actual loss function for that dataset.
NMF features extracted from data simulated under a Holling type II model
Co-occurrence network calculated from data simulated under a Holling type II model
Table 10 shows that NMF most often is able to recover the subcommunities used to simulate the data, especially when sample size is large. Note that the blue subcommunity (M9–M18) has weaker interactions between OTUs, so is less clearly a subcommunity, and is therefore not identified as well as the others. Since the loss function is somewhat ad hoc, Figs. 13 and 14 show typical examples of the recovered types from one simulation with 200 data points, to allow more direct comparisons visually. As we can see, NMF has done a better job in recovering the subcommunities. It is also worth noting that co-occurrence networks tend to create many small clusters, which gives the method an advantage over NMF for the above defined loss function, particularly for subcommunities which are not identified well.
Mean/standard deviation of the optimum loss scores for each true subcommunity under two different methods, with 10 simulations for each sample size
M1−M10
M17−M26
Co-occurrence network
The fact that NMF is able to recover the true subcommunities in the simulated data does not necessarily mean that the subcommunities found by NMF on real data are genuine subcommunities, because the dynamics of the real microbial community could be different from those in this simulation. We have however shown that vaguely realistic models of microbial community dynamics can produce subcommunity structures similar to those modeled by NMF. Given that NMF is able to uncover these structures, we have better justification to support that the true community dynamics might also be well represented in terms of the subcommunities identified by NMF and that these subcommunities have meaningful biological structure.
Simulation for the performance of NMF as a clustering method
We construct the simulation following the method in McMurdie and Holmes simulation A [11]. The real microbial data ocean and feces from the GlobalPatterns dataset are used to obtain two basic sets of multinomial probabilites. We then produce new multinomial probabilites for two classes as linear mixtures of the original two sets. The ratio of these mixtures is determined by the parameter effect size, \(s_{e}\geqslant {1}\). One class mixes the basic sets in the ratio 1:s e , the other mixes them in the ratio s e :1. When s e =1, the classes are identical, so we expect no separation. As s e increases, the difference between the classes becomes larger, so the clustering problem becomes easier. We simulate 200 samples for each class with effect size set to 1.01, 1.05, 1.1, 1.3 and 1.5 and sequencing depth set to 10000. For each value of the effect size, we simulate 30 replicates.
For each simulated data set, we calculate the NMF weight matrix on the original count data using two types and then calculate the Euclidean distance between samples based on the NMF weight matrix. For comparison, we also calculate Bray-Curtis dissimilarity, Euclidean distance, weighted UniFrac and rarefied Unweighted UniFrac on proportional data. We perform clustering analysis using Partitioning Around Medoids (PAM) with the number of clusters fixed as two and measure the performance of the methods by the mis-clustering errors. The results are shown in Table 11.
Mean/standard error of the mis-clustering errors from the 30 replicates
Effect size
Weighted UniFrac
Unweighted UniFrac
Bray-Curtis
Euclidean
From the table, we see that NMF performs generally better than other methods except weighted UniFrac. This simulation was based on phylogenetically very different classes, which gives UniFrac an advantage, and fixed sequencing depths, which nullifies one of UniFrac's limitations. Despite this, the results are mostly comparable, and NMF outperforms other commonly used methods. This shows that the dimension reduction by NMF could help to filter out the noise and retain the major dissimilarity signals of the data.
The NMF analysis can provide a range of interpretable conclusions about the data sets. For metagenomic data, the features extracted can be mapped to metabolic pathways. For OTU data, the features correspond to communities of OTUs and can be studied in terms of the proportion of each phylum, class or genus. In any case, looking at the results of the NMF can reveal important patterns or differences between individuals that are not apparent from the original data. We were able to identify this type of pattern in all three real data sets—the difference in macrolide synthesis pathways for the non-ruminant herbivores; the change in composition of the gut and tongue microbiomes for person 2 in the moving picture data; and the differences in various pathways for the Qin data.
The simulation results show that supervised NMF can recover the right number of types based on which a good classification result can be achieved. Supervised NMF can effectively reduce the dimensionality of the data to a non-negative and most often sparse data matrix, which contains sufficient discriminative information for classification purposes. In addition to the accuracy for classification, these typical features are the community signatures for each class of objects and their interpretation can often uncover important information about the differences between different classes of objects. Simulations of community dynamics under a Holling type II model show that plausible models of community dynamics can lead to the type of additive subcommunity structure assumed by NMF, and that in such a case, NMF is able to identify biologically meaningful types representing the subcommunities.
There are a number of ways the work could be extended in future. The following are some of the most promising and related problems:
Choosing the number of types is still a difficult problem. The method used in this paper can give an answer based on what is needed to make each class different from other classes. However, the non-parametric method has limited efficiency and, as was shown in the simulation, can be quite far from the true values.
NMF fitting does not always have a unique solution. There are a variety of methods in the literature to fix a "best" solution, based on decisions of which aspects of the solution should be penalised. For example, sparsity constraints can be added [42] to make T or W even more sparse. More work is needed to determine which form of penalty is most appropriate for microbiome data. This penalty could be used to incorporate the phylogenetic structure into NMF. There is a strong intuition in the field that the phylogenetic structure should be important in analysing microbiome data, although there is no clear idea of exactly how it should be used. A penalty could be added to encourage closely related OTUs to be included in the same type. By examining the structure of types for unpenalised NMF, we could gain insight into the appropriate form for this penalty.
As yet, there is no goodness-of-fit test for NMF. That means that we are not certain whether the features identified really represent biologically meaningful entities. There is support for this belief from the fact that they allow us to accurately classify samples and also because the features have a biological interpretation which makes sense. However, a formal test to confirm that NMF fits the model well would be a valuable tool. It would also help with the next topic in our future work.
More theoretical work is needed to justify that NMF can recover the true underlying communities. This is complicated by the non-uniqueness of the solution. Once a method for resolving this non-uniqueness is chosen, it should be possible to identify conditions under which it will recover the true subcommunities, given enough data.
Non-negative Poisson regression
Our purpose is to find the non-negative coefficients for a Poisson regression with identity link and without intercept, by maximizing the Poisson log-likelihood. We now focus on the regression of one sample X j =(X 1j ,X 2j ,⋯,X pj ) on T. The resulting coefficients W j =(w 1j ,⋯,w kj ) thus will be either positive or 0, with 0 coefficients corresponding to the variables in T removed from this regression. We aim to find a list of positive coefficients with the corresponding variables, so that adding another variable to the list cannot improve the likelihood and still maintain the non-negative constraint. This is achieved through a backwards-forwards Poisson regression procedure as follows.
We start by recursively fitting a Poisson regression on T and removing the variables corresponding to the negative coefficients in W j =(w 1j ,⋯,w kj ) until all the coefficients are positive. Using the remaining variables, we calculate the log-likelihood value. Then, we test each removed variable by adding it back with a small positive coefficient, if this increases the log-likelihood value, we add this variable back to the remaining variables and repeat the above steps; otherwise, we remove this variable and test the next one.
The algorithm follows these steps:
Fit a Poisson regression with identity link but without intercept on T with the initial value of W j set as the coefficients of linear least square regression of X j on T. Eliminate those variables corresponding to negative coefficients.
If any variables were removed, go back to step 1 until all the coefficients are positive. In the end, the matrix consisting of remaining variables is \(T^{+}_{j}\). Since X, T and W are all non-negative, the resulting \(T^{+}_{j}\) cannot be empty unless X is a zero vector.
Calculate the log-likelihood for \(T^{+}_{j}\).
$$L\left(T^{+}_{j}\right)=\sum_{i=1}^{p}\left(X_{ij}\log\left(T^{+}_{j}W_{j}\right)_{i}-\left(T^{+}_{j}W_{j}\right)_{i}\right), $$
where \(\left (T^{+}_{j}W_{j}\right)_{i}\) denotes the ith element of the vector \(T^{+}_{j}W_{j}\).
Add one variable in the removed pool to \(T^{+}_{j}\), denote the new feature matrix as \({T^{+}_{j}}_{new}\) and calculate the log-likelihood again.
$$\begin{aligned} L\left({T^{+}_{j}}_{new}\right)=&\sum_{i=1}^{p}\left(X_{ij}\log\left({T^{+}_{j}}_{new}{W_{j}}_{new}\right)_{i}\right.\\ &\left.-\left({T^{+}_{j}}_{new}{W_{j}}_{new}\right)_{i}\right), \end{aligned} $$
where W j new =(W j (1−ε),ε), ε is a very small positive number close to 0. For this paper, we use 10−7 as the value of ε.
Compare \(L\left (T^{+}_{j}\right)\) with \(L\left ({T^{+}_{j}}_{new}\right)\), if \(L\left (T^{+}_{j}\right)<L\left ({T^{+}_{j}}_{new}\right)\), use this new \({T^{+}_{j}}_{new}\) composed of \(T^{+}_{j}\) and the new variable to repeat steps 1 to 5. Otherwise, remove this variable and try to add another variable in the removed pool to \(T^{+}_{j}\) and repeat steps 4 to 5, until all removed variables have been tested.
In step 4, we add back one removed variable each time into the positive T matrix and calculate the new log-likelihood value. To decide if this variable should be added back, we do not need to refit the Poisson regression when calculating the new log-likelihood value. As the old coefficient matrix is a local maximization for the log-likelihood function with the remaining variables, the derivative of the log-likelihood at that point should be 0 with respect to all remaining variables. When we add another variable with a small positive coefficient into the system, if we are near to the original maximum, the log-likelihood for the new point will either increase or decrease, depending whether the derivative with respect to the newly added variable is positive or negative. So if we want to see whether a variable could increase the log-likelihood, we can just add a very small weight ε for the new variable, then calculate the new log-likelihood with the new rescaled weight matrix. We need to rescale the W j vector, so that Wj′ new 1=Xj′1, where 1=(1,⋯,1). This is because we assume the data follow the Poisson distribution, so the sum of the observations X j should be equal to the sum of the mean vector TW j . As each column of T has unit sum, \(W^{\prime }_{j}1=W'_{j}T'1=X'_{j}1\).
We compare this new log-likelihood value with the old one. If it decreases, the derivative is negative which means points with positive weight on the new variable will decrease the log-likelihood. Then, the new variable should not be added. If the new one is larger than the old one, add this variable into the positive T matrix and do a Poisson regression on this new positive T matrix again and repeat the above steps until no variable can be added. In this way, we can make sure that each time we decide to add a new variable to the positive T matrix, the likelihood becomes larger. This procedure keeps the log-likelihood function increasing under the constraints that all elements in W j remain non-negative.
To see that the algorithm will converge, a key point is that our algorithm is only dealing with the discrete part of the optimization, and the Poisson regression takes care of the continuous optimization. Since we are optimizing over a finite number of possible sets of positive variables, convergence is guaranteed by the fact that each step increases the likelihood.
In order to choose the best number of types for the first class, we will look at the deviance statistics to see how well the chosen types will fit the first class better than other classes. (Deviance is a measure of fit between data and model, given by the difference in log-likelihood between the current model, and a saturated model. Smaller deviance corresponds to better fit.) Since the types are chosen from the first class, to make the comparison objective, the deviance statistics need to be calculated on a test set of the first class. We obtain one deviance statistic for each data point in the test set. We use cross-validation, so that every data point is in one test set. The deviance statistics are not normally distributed; thus, we will use the Wilcoxon Rank-Sum test [43] based on the deviance statistics to test how well the classes are separated. The idea is to rank the deviance statistics from the test data points. If there is no discrimination between the classes, then the ranks should be distributed randomly between the classes. The Wilcoxon Rank-Sum test computes a statistic which measures how unevenly the ranks are distributed between the classes. This statistic is then standardised so that it (approximately) follows a standard normal distribution under the assumption that the ranks are randomly distributed between classes. We refer to this standardised statistic as a Z-value. We obtain one Z-value for each fold of the cross-validation. Our overall measure of difference is the sum of the Z-values for each fold, divided by \(\sqrt {r}\), where r is the number of folds. (Dividing by \(\sqrt {r}\) ensures that if the model is equally good at fitting the data from the two classes, then this overall measure follows a standard normal distribution.) We have one Z-value from each fold of the cross-validation, so by calculating the standard deviation of these Z-values, we are able to obtain a standard error for our overall statistic. For each class, we will try a sequence of values for the number of types and find the best value to discriminate this class from other classes.
We use a 2-class data case as an example to illustrate the ideas. We use an r-fold cross-validation on training data for both classes. In each cross-validation, we separate the training data into a training fold and a test fold. To choose the number of types for class 1, we apply the following steps to a range of values for k:
For each fixed value k, fit k types on the training folds from class 1 to get the type matrix T.
Fit the remaining test fold data from class 1 and one fold of data from class 2 on T.
Calculate the deviance for each fitting (one deviance value for each data point in the test folds).
Use a Wilcoxon Rank-Sum test on these deviances to get one Z-value for each fold.
Sum the values of Z statistics from each fold of the cross-validations and divide by \(\sqrt {r}\); denote this statistic as Z all . This statistic should follow a normal distribution with mean of zero and standard deviation of 1 under the null hypothesis that the distributions of deviance values from both classes are the same.
Choose the smallest k for which Z all is within one standard deviation of the largest Z all -value, where the standard deviation is calculated as the sample standard deviation of the Z-values from the different folds for each k.
Note that the purpose is to choose k such that the deviances from two classes are best separated, not a hypothesis test to test the equality of means. Thus, the sample standard deviation of Z all is calculated from the different folds in the last step, instead of using 1, which is the standard deviation under the null hypothesis. By using r-fold cross-validation and combined Z-values, we can effectively increase the power of this test, which is particularly important when the number of observations is small.
When the classification problem is an easy one, there is a clear separation between the deviances resulting from the class for which we are selecting the number of types and that from other classes. The near complete separation often results in the almost equal Z-values from the different folds; thus, the sample standard deviation of Z all is small. When the classification problem is hard, the resulting Z-values from different folds tend to have larger variance. The number of types selected in the easy case usually is small and clear cut; the number of types selected in the harder case usually tends to be large. After we run the above procedure to select numbers of types for all classes, we will fix the number of types for the easy case and select the best matching number of types for the other class so that the misclassification error is minimized.
Irritable bowel disease
NMF:
Non-negative matrix factorisation
Principal coordinate analysis
PCA:
The first author is supported by a Nova Scotia International Student Scholarship and by the Herzberg-derived funding for HQP from Dr. Ford Doollittle's Herzberg award. The second author is supported by NSERC grant RGPIN/250043-2006. The third author is supported by NSERC grant RGPIN/04945-2014.
The data analysed in this manuscript have all been previously published in other papers.
The Moving picture data set is in Qiita study 550.
The other two datasets are available in the format used for our analysis with the BiomeNet package at http://sourceforge.net/projects/biomenet/.
All three authors contributed to the development of methodology, design of the simulations, interpretation of the real data analysis, and writing of the paper. YC and TK contributed to the implementation of the method. All authors read and approved the final manuscript.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Department of Mathematics and Statistics, Dalhousie, Halifax, Canada
Arrigo KR. Marine microorganism and global nutrient cycles. Nature. 2005; 437:349–55.View ArticlePubMedGoogle Scholar
Fujimura KE, Slusher NA, Cabana MD, Lynch SV. Role of the gut microbiota in defining human health. Expert Rev Anti-Infect Ther. 2010; 8(4):435–54.View ArticlePubMedPubMed CentralGoogle Scholar
Sekirov I, Russell SL, Antunes LCM, Finlay BB. Gut microbiota in health and disease. Physiol Rev. 2010; 90(3):859–904.View ArticlePubMedGoogle Scholar
Caporaso JG, et al. Moving pictures of the human microbiome. Genome Biol. 2011; 12:50.View ArticleGoogle Scholar
Gilbert JA, Steele JA, Caporaso JG. Defining seasonal marine microbial community dynamics. ISME J. 2012; 6:298–308.View ArticlePubMedGoogle Scholar
Phelan VV, Liu WT, Pogliano K, Dorrestein P. Microbial metabolic exchange—the chemotype-to-phenotype link. Nat Chem Biol. 2012; 8:26–35.View ArticleGoogle Scholar
Hastie T, Tibshirani R, Friedman J. The elements of statistical learning: data mining, inference, and prediciton. Math Intell. 2005; 27:83–5.Google Scholar
Knights D, Costello EK, Knight R. Supervised classification of human microbiota. FEMS Microbiol Rev. 2011; 35:343–59.View ArticlePubMedGoogle Scholar
Ramette A. Multivariate analyses in microbial ecology. FEMS Microbiol Ecol. 2007; 62(2):142–60.View ArticlePubMedPubMed CentralGoogle Scholar
Lozupone C, Lladser ME, Knights D, Stombaugh J, Knight R. Unifrac: an effective distance metric for microbial community comparison. ISME J. 2011; 5(2):169.View ArticlePubMedGoogle Scholar
McMurdie PJ, Holmes S. Waste not, want not: why rarefying microbiome data is inadmissible. PLoS Comput Biol. 2014; 10(4):1003531.View ArticleGoogle Scholar
Holmes I, Harris K, Quince C. Dirichlet multinomial mixtures: generative models for microbial metagenomics. PLos One. 2012; 7:30126.View ArticleGoogle Scholar
Knights D, Kuczynski J, Charlson ES. Bayesian community-wide culture-independent microbial source tracking. Nat Methods. 2011; 8:761–3.View ArticlePubMedPubMed CentralGoogle Scholar
Shafiei M, Dunn KA, Boon E, MacDonald SM, Walsh DA, Gu H, Bielawski JP. Biomico: a supervised bayesian model for inference of microbial community structure. Microbiome. 2015; 3(1):1.View ArticleGoogle Scholar
Shafiei M, Dunn KA, Chipman H, Gu H, Bielawski JP. Biomenet: A bayesian model for inference of metabolic divergence among microbial communities. PLoS Comput Biol. 2014; 10:1003918.View ArticleGoogle Scholar
Devarajan K. Nonnegative matrix factorization: an analytical and interpretive tool in computational biology. PLoS Comput Biol. 2008; 4(7):1000029.View ArticleGoogle Scholar
Jiang X, Langille MG, Neches RY, Elliot M, Levin SA, Eisen JA, Weitz JS, Dushoff J. Functional biogeography of ocean microbes revealed through non-negative matrix factorization. PloS ONE. 2012; 7(9):43866.View ArticleGoogle Scholar
Jiang X, Weitz JS, Dushoff J. A non-negative matrix factorization framework for identifying modular patterns in metagenomic profile data. J Math Biol. 2012; 64:697–711.View ArticlePubMedGoogle Scholar
Lee DD, Seung HS. Learning the parts of objects by non-negative matrix factorization. Nature. 1999; 401:788–91.View ArticlePubMedGoogle Scholar
Lee DD, Seung HS. Algorithm for non-negative matrix factorization. In: Leen TK, Dietterich TG, Tresp V, editors. Advances in Neural Information Processing Systems 13 (NIPS 2000). Neural Information Processing Systems 2000: 2001. p. 556–69.Google Scholar
Gonzalez E, Zhang Y. Accelerating the lee-seung algorithm for nonnegative matrix factorization. Dept. Comput. & Appl. Math., Rice Univ., Houston, TX, Tech. Rep. TR-05-02. 2005.Google Scholar
Lin CJ. On the convergence of multiplicative update algorithms for non-negative matrix factorization. IEEE Trans Neural Netw. 2007; 18:1589–96.View ArticleGoogle Scholar
Hoyer P. Non-negative matrix factorization with sparseness constraints. J Mach Learn Res. 2004; 5:1457–69.Google Scholar
Shahnaz F, Berry M, Plemmons R. Document clustering using nonnegative matrix factorization. Inf Process Manag. 2006; 42:373–86.View ArticleGoogle Scholar
Berry MW, Browne M. Algorithms and applications for approximate nonnegative matrix factorization. Comput Stat Data Anal. 2007; 52:155–73.View ArticleGoogle Scholar
Renaud G, Cathal S. A flexible r package for nonnegative matrix factorization. BMC Bioinforma. 2010; 11(1):367. doi:10.1186/1471-2105-11-367.View ArticleGoogle Scholar
Muegge BD, et al. Diet drives convergence in gut microbiome functions across mammalian phylogeny and within humans. Science. 2011; 332:970–4.View ArticlePubMedPubMed CentralGoogle Scholar
Qin J, et al. A human gut microbial gene catalogue established by metagenomic sequencing. Nature. 2010; 464:59–65.View ArticlePubMedPubMed CentralGoogle Scholar
Dimitriadou E, Hornik K, Leisch F, Meyer D, Weingessel A, Leisch MF. Package 'e1071'. R Software package version 1.6–8, avaliable at http://cran.rproject.org/web/packages/e1071/index.html. 2009.
Liaw A, Wiener M. The package randomForest: manual pages. CRAN: 2015. R package version 4.6–12. http://cran.r-project.org/package=randomForest.
Giguere S, Prescott JF, Baggot JD, Walker RD, Dowling PM. Antimicrobial therapy in veterinary medicine (4th Ed.)USA: Wiley-Blackwell; 2006.Google Scholar
Darveau R, Hajishengallis G, Curtis M. Porphyromonas gingivalis as a potential community activist for disease. J Dent Res. 2012; 91:816–820. 0022034512453589.View ArticlePubMedPubMed CentralGoogle Scholar
Costalonga M, Herzberg MC. The oral microbiome and the immunobiology of periodontal disease and caries. Immunol Lett. 2014; 162(2):22–38.View ArticlePubMedPubMed CentralGoogle Scholar
Shinkai M, Henke MO, Rubin BK. Macrolide antibiotics as immunomodulatory medications: proposed mechanisms of action. Pharmacol Ther. 2008; 117(3):393–405.View ArticlePubMedGoogle Scholar
Mencarelli A, Distrutti E, Renga B, et al. Development of non-antibiotic macrolide that corrects inflammation-driven immune dysfunction in models of inflammatory bowel diseases and arthritis. Eur J Pharmacol. 2011; 665(1):29–39.View ArticlePubMedGoogle Scholar
Kwiatkowska B, Maślińska M. Macrolide therapy in chronic inflammatory diseases. Mediat Inflamm. 2012;2012. Article ID 636157.Google Scholar
Diggs DL, et al. Polycyclic aromatic hydrocarbons and digestive tract cancers: a perspective. J Environ Sci Health C. 2011; 29(4):324–57.View ArticleGoogle Scholar
Hughes DA. Effects of carotenoids on human immune function. Proc Nutr Soc. 1999; 58(03):713–8.View ArticlePubMedGoogle Scholar
Holling C. Some characteristics of simple types of predation and parasitism. Can Entomol. 1959; 91:385–98.View ArticleGoogle Scholar
Berry D, Widder S. Deciphering microbial interactions and detecting keystone species with co-occurrence networks. Front Microbiol. 2014; 5:219.View ArticlePubMedPubMed CentralGoogle Scholar
Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Ser B (Methodol). 1995; 57:289–300.Google Scholar
Kim H, Park H. Sparse non-negative matrix factorizations via alternating non-negativity-constrained least squares for microarray data analysis. Bioinformatics. 2007; 23:1495–502.View ArticlePubMedGoogle Scholar
Wilcoxon F. Individual comparisons by ranking methods. Biom Bull. 1945; 1:80–83.View ArticleGoogle Scholar
Kanehisa M, Goto S. KEGG: kyoto encyclopedia of genes and genomes. Nucleic acids research. 2000; 28(1):27–30. Oxford University Press.View ArticlePubMedPubMed CentralGoogle Scholar | CommonCrawl |
Journal of Inequalities and Applications
Main results and discussion
Proof of the main results
A general slicing inequality for measures of convex bodies
Yufeng Yu1Email authorView ORCID ID profile
Journal of Inequalities and Applications20192019:139
Received: 27 March 2018
We consider the following inequality:
$$\begin{aligned} \mu (L)^{\frac{n-k}{n}} \leq C^{k}\max_{H\in \mathit{Gr}_{n-k}}\mu (L \cap H), \end{aligned}$$
which is a variant of the notable slicing inequality in convex geometry, where L is an origin-symmetric star body in \({{\mathbb{R}}}^{n}\) and is μ-measurable, μ is a nonnegative measure on \({\mathbb{R}} ^{n}\), \(\mathit{Gr}_{n-k}\) is the Grassmanian of an \(n-k\)-dimensional subspaces of \({\mathbb{R}}^{n}\), and C is a constant. By constructing the generalized k-intersection body with respect to μ, we get some results on this inequality.
Convex bodies
Intersection bodies
Generalized measures
The notable slicing problem in convex geometry asks whether there exists a constant C such that for any positive integer \(n\geq 1\) and any origin-symmetric convex body [1] L in \({\mathbb{R}}^{n}\)
$$\begin{aligned} \vert L \vert ^{\frac{n-1}{n}} \leq C^{k}\max_{\xi \in S^{n-1}} \bigl\vert L\cap \xi ^{\bot } \bigr\vert , \end{aligned}$$
where \(\xi ^{\bot }\) is the hyperplane in \({\mathbb{R}}^{n}\), perpendicular to ξ passing through the origin, and \(|L|\) stands for volume of proper dimension. There is a lot of literature focusing on this problem. We refer the reader to [2–5] [6, Theorem 9.4.11] for the history and more results. Iterating (1.1) one gets the lower slicing problem asking whether the inequality
$$\begin{aligned} \vert L \vert ^{\frac{n-k}{n}} \leq C^{k}\max_{H\in \mathit{Gr}_{n-k}} \vert L\cap H \vert \end{aligned}$$
holds with an absolute constant C, where \(1\leq k< n\) and \(\mathit{Gr}_{n-k}\) is the Grassmanian of \((n-k)\)-dimensional subspaces of \({\mathbb{R}} ^{n}\); see [7, 8].
A more general problem considered in [9] is: Does there exist an absolute constant C such that for every positive integer n and every integer \(1\leq k< n\), and for every origin-symmetric convex body L and every measure μ with nonnegative even continuous density in \({\mathbb{R}}^{n}\),
$$\begin{aligned} \mu (L) \leq C^{k}\max_{H\in \mathit{Gr}_{n-k}}\mu (L\cap H) \vert L \vert ^{\frac{k}{n}}, \end{aligned}$$
where \(|L|\) stands for volume of proper dimension?
This question is an extension to that of (1.2), general measures taking place of volumes, a major open problem in convex; see [4, 10–12]. By this reason, (1.3) is also called slicing inequality in convex geometry; see [9, 13].
In the literature [9] one proved (1.3) for unconditional convex bodies and for duals of bodies with bounded volume ratio. And it also was proved that for every \(\lambda \in (0, 1)\) there exists a constant \(C=C(\lambda )\) such that (1.3) holds for every positive integer n, for every origin-symmetric convex body L, the codimensions of whose sections in \({\mathbb{R}}^{n}\) \(k\geq \lambda n\), and for every measure μ with continuous density.
Inequality (1.3) gives a link between \(\mu (L)\) and \(\mu (L\cap H)\), which denote different dimensional measures. Observe (1.3) and we found that there are two kinds of measures with respect to L, \(\mu (L)\) and the Lebesgue measure \(|L|\). Therefore, we consider a problem that whether the Lebesgue measure \(|L|\) in (1.3) could be replaced by the general measure \(\mu (L)\), that is, whether the following inequality holds:
$$\begin{aligned} \mu (L)^{\frac{n-k}{n}} \leq C^{k}\max_{H\in \mathit{Gr}_{n-k}}\mu (L \cap H) \end{aligned}$$
for some constant C. Inequality (1.4) is a variant of (1.3) and but more concise than (1.3). And inequality (1.4) is the purpose of this article. Next, we introduce the main tool employed in this article and the program of this article.
The k-intersection body, introduced by [7, 14], plays an important role in the solution to the Busemann–Petty problem, which is equivalent to the slicing problem; see [6, 7, 9, 15–17]. In this article, we define the generalized k-intersection body with measure μ, and denote by \(\mathcal{B}\mathcal{P}^{n}_{k,\mu }\) the class of generalized k-intersection bodies with measure μ, with \(\mathcal{B}\mathcal{P}^{n}_{k}\) being the class of k-intersection bodies. If μ is the Lebesgue measure on considered set, then the generalized k-intersection body with measure μ becomes the k-intersection body, and \(\mathcal{B}\mathcal{P}^{n}_{k,\mu }\) becomes \(\mathcal{B}\mathcal{P}^{n}_{k}\). Using the outer measure ratio distance from μ-measurable set L to the class \(\mathcal{B}\mathcal{P} ^{n}_{k,\mu }\), denoted by \(o.m.r. (L,\mathcal{B}\mathcal{P}^{n} _{k,\mu } )\) (see (2.3)), we get inequality (1.4) for some constant C. We also give a comparison of \(o.m.r. (L,\mathcal{B}\mathcal{P}^{n}_{k,\mu } )\) and \(o.v.r. (L,\mathcal{B}\mathcal{P}^{n}_{k} )\)
$$\begin{aligned} o.m.r. \bigl(L,\mathcal{B}\mathcal{P}^{n}_{k,\mu } \bigr)\leq C o.v.r. \bigl(L,\mathcal{B}\mathcal{P}^{n}_{k} \bigr), \end{aligned}$$
for some constant C, which depends only on μ. Then the results for \(o.v.r. (L,\mathcal{B}\mathcal{P}^{n}_{k} )\) [8, 9, 13] can transfer to that for \(o.m.r. (L, \mathcal{B}\mathcal{P}^{n}_{k,\mu } )\).
This article is arranged naturally. In Sect. 2, we define the class of generalized k-intersection body with measure μ, \(\mathcal{B}\mathcal{P}^{n}_{k,\mu }\), and the outer measure ratio distance from μ-measurable set L to the class \(\mathcal{B} \mathcal{P}^{n}_{k,\mu }\). The main results are exposed in Sect. 3. Section 4 contains the proof of the main results. Finally, in the last section, we give the conclusions and some remarks.
2 Preliminaries
The k-intersection body plays an important role in the solution to the Busemann–Petty problem, which is equivalent to the slicing problem. An analog to the k-intersection body plays an important role in the solution of our problem. In this section, we will construct the class of generalized k-intersection body with measure μ, \(\mathcal{B}\mathcal{P}^{n}_{k,\mu }\) and define the outer measure ratio distance from μ-measurable set L to the class \(\mathcal{B} \mathcal{P}^{n}_{k,\mu }\).
Let g be a continuous nonnegative even function on \({\mathbb{R}} ^{n}\) (\(g(x)=g(-x)\) for \(x\in {\mathbb{R}}^{n}\); see [9]). For any Lebesgue measurable set A in \({\mathbb{R}}^{n}\), let
$$\begin{aligned} \mu (A)= \int _{A}g(x)\,dx, \end{aligned}$$
where dx is the Lebesgue measure. Then μ is a nonnegative measure on \({\mathbb{R}}^{n}\), and function g is called the density of measure μ [9]. At this time, A is called μ-measurable, and \(\mu (A)\) is the μ-measure of A.
A closed bounded set L in \({\mathbb{R}}^{n}\) is called a star body if every straight line passing through the origin crosses the boundary of L at exactly two points different from the origin and the origin is an interior point of L, where the Minkowski functional of L defined by
$$\begin{aligned} \Vert x \Vert _{L} = \min \{a\geq 0:x\in aL\} \end{aligned}$$
is a continuous function on \({\mathbb{R}}^{n}\); see [9]. The radial function of a star body L is defined by
$$\begin{aligned} \rho _{L}(x) = \Vert x \Vert _{L}^{-1} \quad \text{for } x\in {\mathbb{R}}^{n} \text{ and } x \neq 0; \end{aligned}$$
see [9]. If \(x\in {\mathbb{S}}^{n-1}\) then \(\rho _{L}(x)\) is the radius of L in the direction x.
The generalized k-intersection body was introduced in [9]. An origin-symmetric star body L in \({{\mathbb{R}}}^{n}\) is a generalized k-intersection body, and write \(L\in \mathcal{B} \mathcal{P}^{n}_{k}\), if there exists a finite Borel nonnegative measure \(\mu _{1}\) on \(\mathit{Gr}_{n-k}\) so that for every \(\varphi \in C (S^{n-1} )\) (class of continuous functions on \(S^{n-1}\))
$$\begin{aligned} \int _{S^{n-1}} \Vert \theta \Vert _{L}^{-k} \varphi (\theta )\,d\theta = \int _{\mathit{Gr}_{n-k}}R_{n-k}\varphi (H)\,d\mu _{1}(H), \end{aligned}$$
where \(R_{n-k}:C (S^{n-1} )\rightarrow C (\mathit{Gr}_{n-k} )\) is the \((n-k)\)-dimensional spherical Radon transform, defined by
$$\begin{aligned} R_{n-k}\varphi (H)= \int _{S^{n-1}\cap H}\varphi (x)\,dx \end{aligned}$$
for every function \(\varphi \in C (S^{n-1} )\) and for every \(H\in \mathit{Gr}_{n-k}\).
Putting the measure μ into the generalized k-intersection body, we define the generalized k-intersection body with respect to measure μ. An origin-symmetric star body K in \({{\mathbb{R}}}^{n}\) is called a generalized k-intersection body with respect to measure μ, denoted by \(K\in \mathcal{B}\mathcal{P}^{n}_{k,\mu }\) (or \(K\in \mathcal{B}\mathcal{P}^{n}_{k,g}\)), if there exists a nonnegative finite Borel measure \(\mu _{1}\) on \(\mathit{Gr}_{n-k}\) such that for every φ in \(C (S^{n-1} )\)
$$\begin{aligned} \int _{S^{n-1}} \biggl(n \int _{0}^{\Vert \theta \Vert _{K}^{-1} }r ^{n-1}g(r\theta )\,dr \biggr)^{\frac{k}{n}}\varphi (\theta )\,d\theta = \int _{\mathit{Gr}_{n-k}}R_{n-k}\varphi (H)\,d\mu _{1}(H), \end{aligned}$$
where g is the density of measure μ.
Note that when \(g\equiv 1\) (namely μ is the Lebesgue measure), \(\mathcal{B}\mathcal{P}^{n}_{k,g}\) becomes \(\mathcal{B}\mathcal{P} ^{n}_{k}\).
For a convex body L in \({{\mathbb{R}}}^{n}\), the outer volume ratio distance from L to \(\mathcal{B}\mathcal{P}^{n}_{k}\) is defined by
$$\begin{aligned} o.v.r. \bigl(L,\mathcal{B}\mathcal{P}^{n}_{k} \bigr)=\inf \biggl\{ \biggl(\frac{ \vert K \vert }{ \vert L \vert } \biggr) ^{\frac{1}{n}}:L\subset K,K\in \mathcal{B}\mathcal{P}^{n}_{k} \biggr\} ; \end{aligned}$$
see [9].
Similarly, for a Lebesgue measurable convex body L in \({{\mathbb{R}}} ^{n}\), we define the outer measure ratio distance with respect to measure μ from L to the class \(\mathcal{B}\mathcal{P}^{n}_{k, \mu }\) by
$$\begin{aligned} o.m.r. \bigl(L,\mathcal{B}\mathcal{P}^{n}_{k,\mu } \bigr)=\inf \biggl\{ \biggl(\frac{ \mu (K)}{\mu (L)} \biggr)^{\frac{1}{n}}:L\subset K,K\in \mathcal{B} \mathcal{P}^{n}_{k,\mu } \biggr\} . \end{aligned}$$
Now we turn to the main results in the next section.
3 Main results and discussion
In this section, the goal is to establish the inequalities, namely that similar to (1.4), which reveals the relationship of the general measures of origin-symmetric star bodies and their \(n-k\) (\(1 \leq k\leq n-1\)) dimensional intersection bodies, and is a variant of the classical slicing problems (1.3) in convex geometry.
Theorem 3.1
Let L be an origin-symmetric star body in \({{\mathbb{R}}}^{n}\) and μ-measurable with density g, and \(m=\inf_{x\in K} \{g(x) \}>0\). Then, for \(1\leq k\leq n-1\),
$$\begin{aligned} \bigl(\mu (L) \bigr)^{\frac{n-k}{n}} \leq \bigl(o.m.r. \bigl(L, \mathcal{B} \mathcal{P}^{n}_{k,\mu } \bigr) \bigr)^{k} \biggl( \frac{n}{m} \biggr) ^{\frac{k}{n}} \frac{ \vert S^{n-1} \vert ^{\frac{n-k}{n}}}{ \vert S ^{n-k-1} \vert }\max _{H\in \mathit{Gr}_{n-k}}\mu (L\cap H) . \end{aligned}$$
Let L be an origin-symmetric star body in \({{\mathbb{R}}}^{n}\) and μ-measurable with density g, \(R=\frac{1}{2}\operatorname{diam}(L)\), and
$$\begin{aligned} \int _{0}^{\|\theta \|_{L}^{-1}}r^{n-1}g(r\theta )\,dr\geq \frac{1}{n} \end{aligned}$$
for all \(\theta \in S^{n-1}\). Then, for \(1\leq k\leq n-1\),
$$\begin{aligned} \bigl(\mu (L) \bigr)^{\frac{n-k}{n}} \leq \bigl(o.m.r. \bigl(L, \mathcal{B} \mathcal{P}^{n}_{k,\mu } \bigr) \bigr)^{k}R^{k}n^{ \frac{k}{n}} \frac{ \vert S^{n-1} \vert ^{\frac{n-k}{n}}}{ \vert S ^{n-k-1} \vert }\max_{H\in \mathit{Gr}_{n-k}}\mu (L\cap H). \end{aligned}$$
By Theorem 3.2 and the relationship (Proposition 4.7) between \(o.m.r. (L,\mathcal{B}\mathcal{P}^{n}_{k,\mu } )\) and \(o.v.r. (L,\mathcal{B}\mathcal{P}^{n}_{k} )\), we have the following.
Under the assumptions of Theorem 3.2,
$$\begin{aligned} \bigl(\mu (L) \bigr)^{\frac{n-k}{n}} \leq \bigl(o.v.r. \bigl(L, \mathcal{B} \mathcal{P}^{n}_{k} \bigr) \bigr)^{k} \biggl( \frac{M}{m} \biggr) ^{\frac{2k}{n}}R^{k}n^{\frac{k}{n}} \frac{ \vert S^{n-1} \vert ^{ \frac{n-k}{n}}}{ \vert S^{n-k-1} \vert } \max_{H\in \mathit{Gr}_{n-k}}\mu (L\cap H), \end{aligned}$$
where \(m=\inf_{x\in {\mathbb{R}}^{n}}g(x)>0\) and \(M=\sup_{x\in {\mathbb{R}}^{n}}g(x)<+\infty \).
Theorems 3.1 and 3.2 both contain the coefficient: the outer measure ratio distance with respect to measure μ from L to the class \(\mathcal{B}\mathcal{P}^{n}_{k,\mu }\). Moreover, the coefficient in Theorem 3.1 is also relevant to the measure μ, the coefficient in Theorem 3.2 is also relevant to the diameter of L.
Under the assumptions of Theorem 3.2, the outer measure ratio distance with respect to measure μ from L to the class \(\mathcal{B}\mathcal{P}^{n}_{k,\mu }\), \(o.m.r. (L,\mathcal{B} \mathcal{P}^{n}_{k,\mu } )\) can be replaced by the outer volume ratio distance from L to \(\mathcal{B}\mathcal{P}^{n}_{k}\), \(o.v.r. (L,\mathcal{B}\mathcal{P}^{n}_{k} )\), which essentially is \(o.m.r. (L,\mathcal{B}\mathcal{P}^{n}_{k,\mu } )\) specialized by letting μ be the Lebesgue measure. This result is stated as Theorem 3.3. The coefficient in Theorem 3.3 is also relevant to the measure μ.
4 Proof of the main results
First, using the polar formula for volume of a star body L we get a useful formula
$$\begin{aligned} \vert L \vert =\frac{1}{n} \int _{{\mathbb{S}}^{n-1}} \Vert \theta \Vert ^{-n}_{L}\,d \theta . \end{aligned}$$
To prove Theorem 3.1, we first give the following result.
Lemma 4.1
Let K be in \(\mathcal{B}\mathcal{P}^{n}_{k,g}\) and μ-measurable with density g, and \(m=\inf_{x\in K}g(x)>0\). Assume that f is a continuous nonnegative even function on K, and \(\varepsilon >0\). If for every \(H\in \mathit{Gr}_{n-k}\),
$$\begin{aligned} \int _{K\cap H}f(x)\,dx\leq \varepsilon , \end{aligned}$$
then, for \(1\leq k\leq n-1\),
$$\begin{aligned} \int _{K}f(x)\,dx\leq \biggl(\frac{n}{m} \biggr)^{\frac{k}{n}}\frac{ \vert S^{n-1} \vert ^{\frac{n-k}{n}}}{ \vert S^{n-k-1} \vert } \bigl(\mu (K) \bigr)^{\frac{k}{n}} \varepsilon . \end{aligned}$$
Writing the integrals in spherical coordinates, we get
$$\begin{aligned} \int _{K}f(x)\,dx= \int _{S^{n-1}} \biggl( \int _{0}^{\Vert \theta \Vert _{K}^{-1} }r^{n-1}f(r\theta )\,dr \biggr)\,d \theta \end{aligned}$$
$$\begin{aligned} \int _{K\cap H}f(x)\,dx &= \int _{S^{n-1}\cap H} \biggl( \int _{0}^{\Vert \theta \Vert _{K}^{-1} }r^{n-k-1}f(r\theta )\,dr \biggr)\,d \theta \\ &=R_{n-k} \biggl( \int _{0}^{\Vert \cdot \Vert _{K}^{-1} }r^{n-k-1}f(r \cdot )\,dr \biggr) (H). \end{aligned}$$
So the condition of the lemma can be written as
$$\begin{aligned} R_{n-k} \biggl( \int _{0}^{\Vert \cdot \Vert _{K}^{-1} }r^{n-k-1}f(r \cdot )\,dr \biggr) (H)\leq \varepsilon ,\quad \text{for all } H\in \mathit{Gr}_{n-k}. \end{aligned}$$
Integrate both sides with respect to the measure \(\mu _{1}\) that corresponds to K as a generalized k-intersection body with respect to measure μ by (2.1). We get
$$\begin{aligned} \int _{\mathit{Gr}_{n-k}}R_{n-k} \biggl( \int _{0}^{\Vert \cdot \Vert _{K} ^{-1} }r^{n-k-1}f(r\cdot )\,dr \biggr) (H)\,d\mu _{1}(H)\leq \varepsilon \mu _{1}( \mathit{Gr}_{n-k}). \end{aligned}$$
Estimate the integral in the left-hand side of (4.3) using \(K\in \mathcal{B}\mathcal{P}^{n}_{k,g}\) and \(m>0\), then we have
$$\begin{aligned} & \int _{\mathit{Gr}_{n-k}}R_{n-k} \biggl( \int _{0}^{ \Vert \cdot \Vert _{K} ^{-1} }r^{n-k-1}f(r\cdot )\,dr \biggr) (H)\,d\mu _{1}(H) \\ &\quad = \int _{S^{n-1}} \biggl(n \int _{0}^{ \Vert \theta \Vert _{K} ^{-1} }r^{n-1}g(r\theta )\,dr \biggr)^{\frac{k}{n}} \int _{0}^{ \Vert \theta \Vert _{K}^{-1} }r^{n-k-1}f(r\theta )\,dr \,d\theta \\ &\quad \geq m^{\frac{k}{n}} \int _{S^{n-1}} \Vert \theta \Vert _{K} ^{-k} \int _{0}^{ \Vert \theta \Vert _{K}^{-1} }r^{n-k-1}f(r \theta )\,dr \,d\theta . \end{aligned}$$
Noting that \(\Vert \theta \Vert _{K}^{-1}\geq r\) in the right-hand side of (4.4), we get
$$\begin{aligned} \int _{S^{n-1}} \Vert \theta \Vert _{K}^{-k} \int _{0}^{ \Vert \theta \Vert _{K}^{-1} }r^{n-k-1}f(r\theta )\,dr \,d\theta \geq \int _{S^{n-1}} \int _{0}^{ \Vert \theta \Vert _{K}^{-1} }r^{n-1}f(r \theta )\,dr \,d\theta = \int _{K}f(x)\,dx. \end{aligned}$$
Now we estimate \(\mu _{1}(\mathit{Gr}_{n-k})\) in the right-hand side of (4.3).
By the assumptions of Lemma 4.1 and the integral transform of spherical coordinates, we get
$$\begin{aligned} \mu (K)= \int _{K}g(x)\,dx= \int _{S^{n-1}} \int _{0}^{\Vert \theta \Vert _{K}^{-1} }r^{n-1}g(r\theta )\,dr \,d\theta . \end{aligned}$$
Using \(1=R_{n-k}1(H)/|S^{n-k-1}|\) for every \(H\in \mathit{Gr}_{n-k}\), definition (2.1) and Hölder's inequality, we have
$$\begin{aligned} \mu _{1}(\mathit{Gr}_{n-k}) &=\frac{1}{ \vert S^{n-k-1} \vert } \int _{\mathit{Gr}_{n-k}}R_{n-k}1(H)\,d\mu _{1}(H) \\ &=\frac{1}{ \vert S^{n-k-1} \vert } \int _{S^{n-1}} \biggl(n \int _{0}^{ \Vert \theta \Vert _{K}^{-1} }r^{n-1}g(r\theta )\,dr \biggr)^{\frac{k}{n}}\,d \theta \\ &\leq \frac{ \vert S^{n-1} \vert ^{\frac{n-k}{n}}}{ \vert S^{n-k-1} \vert } \biggl(n \int _{S^{n-1}} \int _{0}^{ \Vert \theta \Vert _{K}^{-1} }r^{n-1}g(r \theta )\,dr \,d\theta \biggr)^{\frac{k}{n}}. \end{aligned}$$
Putting (4.6) into the right-hand side of (4.7), we have
$$\begin{aligned} \mu _{1}(\mathit{Gr}_{n-k})\leq \frac{ \vert S^{n-1} \vert ^{\frac{n-k}{n}}}{ \vert S^{n-k-1} \vert } n ^{\frac{k}{n}} \bigl(\mu (K) \bigr)^{\frac{k}{n}}. \end{aligned}$$
Combination of (4.3),(4.4),(4.5) and (4.8) gives
$$\begin{aligned} \int _{K}f(x)\,dx\leq \biggl(\frac{n}{m} \biggr)^{\frac{k}{n}}\frac{ \vert S^{n-1} \vert ^{\frac{n-k}{n}}}{ \vert S^{n-k-1} \vert } \bigl(\mu (K) \bigr)^{\frac{k}{n}} \varepsilon , \end{aligned}$$
which completes the proof Lemma 4.1. □
Next let us prove Theorem 3.1.
Proof of Theorem 3.1
Set constant \(C>o.m.r. (L,\mathcal{B}\mathcal{P}^{n}_{k,\mu } )\). Then there exists a star body \(K\in \mathcal{B}\mathcal{P}^{n}_{k,\mu }\) such that \(L\subset K\) and \((\mu (K) )^{\frac{1}{n}}\leq C (\mu (L) )^{\frac{1}{n}}\)
Let \(f=g\chi _{L}\), where \(\chi _{L}\) is the indicator function of L, then f is nonnegative on K.
Put \(\varepsilon =\max_{H\in \mathit{Gr}_{n-k}}\int _{K\cap H}f(x)\,dx=\max_{H \in \mathit{Gr}_{n-k}}\int _{L\cap H}g(x)\,dx= \max_{H\in \mathit{Gr}_{n-k}}\mu (L\cap H)\). Apply Lemma 4.1 to f and K (f may be not continuous, but we do an easy approximation) and we have
$$\begin{aligned} \mu (L)&= \int _{L}g(x)\,dx= \int _{K}f(x)\,dx \leq \biggl(\frac{n}{m} \biggr) ^{\frac{k}{n}}\frac{ \vert S^{n-1} \vert ^{\frac{n-k}{n}}}{ \vert S ^{n-k-1} \vert } \bigl(\mu (K) \bigr)^{\frac{k}{n}}\max _{H \in \mathit{Gr}_{n-k}}\mu (L\cap H) \\ &\leq C^{k} \biggl(\frac{n}{m} \biggr)^{\frac{k}{n}} \frac{ \vert S ^{n-1} \vert ^{\frac{n-k}{n}}}{ \vert S^{n-k-1} \vert } \bigl(\mu (L) \bigr) ^{\frac{k}{n}}\max_{H\in \mathit{Gr}_{n-k}} \mu (L\cap H) . \end{aligned}$$
Let \(C\rightarrow o.m.r. (L,\mathcal{B}\mathcal{P}^{n}_{k,\mu } )\) in (4.9). Then
$$\begin{aligned} \mu (L)\leq \bigl(o.m.r. \bigl(L,\mathcal{B}\mathcal{P}^{n}_{k, \mu } \bigr) \bigr)^{k} \biggl(\frac{n}{m} \biggr)^{\frac{k}{n}} \frac{ \vert S^{n-1} \vert ^{\frac{n-k}{n}}}{ \vert S^{n-k-1} \vert } \bigl(\mu (L) \bigr)^{\frac{k}{n}}\max_{H\in \mathit{Gr}_{n-k}} \mu (L\cap H) , \end{aligned}$$
$$\begin{aligned} \bigl(\mu (L) \bigr)^{\frac{n-k}{n}}\leq \bigl(o.v.r. \bigl(L, \mathcal{B} \mathcal{P}^{n}_{k,\mu } \bigr) \bigr)^{k} \biggl( \frac{n}{m} \biggr) ^{\frac{k}{n}}\frac{ \vert S^{n-1} \vert ^{\frac{n-k}{n}}}{ \vert S ^{n-k-1} \vert }\max_{H\in \mathit{Gr}_{n-k}} \mu (L\cap H) , \end{aligned}$$
which completes the proof of Theorem 3.1. □
To prove Theorem 3.2, we need to apply the following lemma which comes from the proof of Lemma 4.1.
Assume that K is in \(\mathcal{B}\mathcal{P}^{n}_{k,g}\) and μ-measurable with density g, \(m=\inf_{x\in K}g(x)>0\). Let f be continuous nonnegative even function on K, \(1\leq k \leq n-1\) and \(\varepsilon >0\). If
$$\begin{aligned} \int _{K\cap H}f(x)\,dx\leq \varepsilon \quad \textit{for all } H\in \mathit{Gr}_{n-k}, \end{aligned}$$
$$\begin{aligned} &\int _{S^{n-1}} \biggl(n \int _{0}^{ \Vert \theta \Vert _{K}^{-1} }r ^{n-1}g(r\theta )\,dr \biggr)^{\frac{k}{n}} \int _{0}^{ \Vert \theta \Vert _{K}^{-1} }r^{n-k-1}f(r\theta )\,dr \,d\theta \\ &\quad \leq n^{\frac{k}{n}}\frac{ \vert S^{n-1} \vert ^{\frac{n-k}{n}}}{ \vert S^{n-k-1} \vert } \bigl(\mu (K) \bigr)^{\frac{k}{n}} \varepsilon . \end{aligned}$$
Now we use Lemma 4.2 to prove Theorem 3.2.
Set constant \(C>o.m.r. (L,\mathcal{B}\mathcal{P}^{n}_{k,\mu } )\). Then there exists a star body \(K\in \mathcal{B}\mathcal{P}^{n}_{k,\mu }\) such that \(L\subset K\) and
$$\begin{aligned} \bigl(\mu (K) \bigr)^{\frac{1}{n}}\leq C \bigl(\mu (L) \bigr)^{ \frac{1}{n}}. \end{aligned}$$
Put \(f=g\chi _{L}\), where \(\chi _{L}\) is the indicator function of L, then f is nonnegative on K.
$$\begin{aligned} \varepsilon =\max_{H\in \mathit{Gr}_{n-k}} \int _{K\cap H}f(x)\,dx=\max_{H \in \mathit{Gr}_{n-k}} \int _{L\cap H}g(x)\,dx= \max_{H\in \mathit{Gr}_{n-k}}\mu (L\cap H). \end{aligned}$$
Apply Lemma 4.2 to f and K (f may be not continuous, but we can do an easy approximation) and we have
By \(L\subset K\), we get
$$\begin{aligned} \Vert \theta \Vert _{L}^{-1}\leq \Vert \theta \Vert _{K}^{-1} \end{aligned}$$
for every \(\theta \in S^{n-1}\). Then (3.2) and (4.14) implies
$$\begin{aligned} & \int _{S^{n-1}} \biggl(n \int _{0}^{\Vert \theta \Vert _{K}^{-1} }r ^{n-1}g(r\theta )\,dr \biggr)^{\frac{k}{n}} \int _{0}^{\Vert \theta \Vert _{K}^{-1} }r^{n-k-1}f(r\theta )\,dr \,d\theta \\ &\quad \geq \int _{S^{n-1}} \biggl(n \int _{0}^{\Vert \theta \Vert _{L} ^{-1} }r^{n-1}g(r\theta )\,dr \biggr)^{\frac{k}{n}} \int _{0}^{\Vert \theta \Vert _{L}^{-1} }r^{n-k-1}g(r\theta )\,dr \,d\theta \\ &\quad \geq \frac{1}{nR^{k}} \int _{S^{n-1}} \biggl(n \int _{0}^{\Vert \theta \Vert _{L}^{-1} }r^{n-1}g(r\theta )\,dr \biggr)^{1+ \frac{k}{n}}\,d\theta \\ &\quad \geq \frac{1}{R^{k}}\mu (L). \end{aligned}$$
Combination of (4.11), (4.12), (4.13) and (4.15) gives
$$\begin{aligned} \frac{1}{R^{k}}\mu (L)\leq C^{k}n^{\frac{k}{n}}\frac{ \vert S^{n-1} \vert ^{\frac{n-k}{n}}}{ \vert S^{n-k-1} \vert } \bigl(\mu (L) \bigr) ^{\frac{k}{n}}\max_{H\in \mathit{Gr}_{n-k}}\mu (L\cap H) . \end{aligned}$$
Let \(C\rightarrow o.m.r. (L,\mathcal{B}\mathcal{P}^{n}_{k,\mu } )\) in (4.16). Then
$$\begin{aligned} \bigl(\mu (L) \bigr)^{\frac{n-k}{n}} \leq \bigl(o.m.r. \bigl(L, \mathcal{B} \mathcal{P}^{n}_{k,\mu } \bigr) \bigr)^{k}R^{k}n^{ \frac{k}{n}} \frac{ \vert S^{n-1} \vert ^{\frac{n-k}{n}}}{ \vert S ^{n-k-1} \vert }\max_{H\in \mathit{Gr}_{n-k}}\mu (L\cap H) , \end{aligned}$$
In order to prove Theorem 3.3, we need using Proposition 4.3∼4.7 below.
First, in \({\mathbb{R}}^{n}\) the dialation of a generalized k-intersection body with respect to measure μ is also a generalized k-intersection body with respect to measure μ.
Proposition 4.3
If \(K\in \mathcal{B}\mathcal{P}^{n}_{k,g}\) then \(TK\in \mathcal{B} \mathcal{P}^{n}_{k,g (T^{-1}\cdot )}\), where T is a dialation in \({\mathbb{R}}^{n}\).
Suppose that \(Tx=ax\) (\(a>0\)) for all \(x\in {\mathbb{R}}^{n}\), where a is a constant. By the definition of \(K\in \mathcal{B}\mathcal{P}^{n}_{k,g}\) (see (2.1)), there exists a nonnegative finite Borel measure \(\mu _{1}\) on \(\mathit{Gr}_{n-k}\) such that, for every φ in \(C (S^{n-1} )\),
$$\begin{aligned} & \int _{S^{n-1}} \biggl(n \int _{0}^{\Vert \theta \Vert _{K}^{-1} }r ^{n-1}g(r\theta )\,dr \biggr)^{\frac{k}{n}}\varphi (\theta ) \,d\theta = \int _{\mathit{Gr}_{n-k}}R_{n-k}\varphi (H)\,d\mu _{1}(H). \end{aligned}$$
Then from
$$\begin{aligned} \Vert \theta \Vert _{aK}^{-1}=a \Vert \theta \Vert _{K} ^{-1}\quad \text{for every } \theta \in S^{n-1}, \end{aligned}$$
and (4.17) it follows that
$$\begin{aligned} &a^{k} \int _{S^{n-1}} \biggl(n \int _{0}^{\Vert \theta \Vert _{K} ^{-1} }r^{n-1}g(r\theta )\,dr \biggr)^{\frac{k}{n}}\varphi (\theta ) \,d \theta \\ &\quad = \int _{S^{n-1}} \biggl(n \int _{0}^{\Vert \theta \Vert _{aK} ^{-1} }r^{n-1}g \biggl( \frac{r}{a}\theta \biggr)\,dr \biggr)^{ \frac{k}{n}}\varphi (\theta ) \,d\theta \\ &\quad = \int _{\mathit{Gr}_{n-k}}R_{n-k}\varphi (H)\,d\mu _{2}(H), \end{aligned}$$
where \(\mu _{2}=a^{k}\mu _{1}\) is a nonnegative finite Borel measure on \(\mathit{Gr}_{n-k}\). This implies that \(aK\in \mathcal{B}\mathcal{P}^{n}_{k,g (a^{-1}\cdot )}\), i.e. \(TK\in \mathcal{B}\mathcal{P}^{n} _{k,g (T^{-1}\cdot )}\). □
For given generalized k-intersection body, we can construct a generalized k-intersection body with respect to some measure μ.
Suppose that \(K\in \mathcal{B}\mathcal{P}^{n}_{k}\) and
$$\begin{aligned} \Vert \theta \Vert _{K}^{-1}\leq \biggl(n \int _{0}^{+\infty }r ^{n-1}g(r\theta )\,dr \biggr)^{\frac{1}{n}}\quad \textit{for every } \theta \in S ^{n-1}. \end{aligned}$$
Let the star body D satisfy
$$\begin{aligned} n \int _{0}^{ \Vert \theta \Vert _{D}^{-1}}r^{n-1}g(r\theta )\,dr= \Vert \theta \Vert _{K}^{-n}\quad \textit{for every } \theta \in S^{n-1}. \end{aligned}$$
Then \(D\in \mathcal{B}\mathcal{P}^{n}_{k,g}\).
This result follows from the definitions of \(\mathcal{B}\mathcal{P} ^{n}_{k}\) and \(\mathcal{B}\mathcal{P}^{n}_{k,g}\). □
Under the assumptions of Proposition 4.4, let \(m=\inf_{x \in {\mathbb{R}}^{n}} g(x)>0\) and \(M=\sup_{x\in {\mathbb{R}} ^{n}} g(x)<+\infty \). Then
$$\begin{aligned} \frac{1}{M^{1/n}}K\subset D\subset \frac{1}{m^{1/n}}K. \end{aligned}$$
By (4.18) for every \(\theta \in S^{n-1}\)
$$\begin{aligned} m \Vert \theta \Vert _{D}^{-n}\leq \Vert \theta \Vert _{K}^{-n}\leq M \Vert \theta \Vert _{D}^{-n}, \end{aligned}$$
which implies
$$\begin{aligned} \frac{1}{M^{1/n}} \Vert \theta \Vert _{K}^{-1}\leq \Vert \theta \Vert _{D}^{-1}\leq \frac{1}{m^{1/n}} \Vert \theta \Vert _{K}^{-1}, \end{aligned}$$
$$\begin{aligned} \frac{1}{M^{1/n}}\rho _{K}(\theta )\leq \rho _{D}(\theta )\leq \frac{1}{m ^{1/n}}\rho _{K}(\theta ), \end{aligned}$$
where \(\rho _{K}\) is the radial function of K. Therefore,
Suppose that \(K\in \mathcal{B}\mathcal{P}^{n}_{k}\), \(m=\inf_{x \in {\mathbb{R}}^{n}}g(x)>0\) and \(M=\sup_{x\in {\mathbb{R}} ^{n}}g(x)<+\infty \). Let the star body D satisfy
$$\begin{aligned} n \int _{0}^{ \Vert \theta \Vert _{D}^{-1}}r^{n-1}g\bigl(M^{ \frac{1}{n}}r \theta \bigr)\,dr= \Vert \theta \Vert _{K}^{-n}\quad \textit{for every } \theta \in S^{n-1}. \end{aligned}$$
Then \(K\subset M^{\frac{1}{n}}D\in \mathcal{B}\mathcal{P}^{n}_{k,g}\) and
$$\begin{aligned} \frac{\mu (M^{\frac{1}{n}}D )}{\mu (K)}\leq \frac{M}{m}. \end{aligned}$$
Similarly to the proof of Proposition 4.5 and Proposition 4.4, we get
$$\begin{aligned} K\subset M^{\frac{1}{n}}D\in \mathcal{B}\mathcal{P}^{n}_{k,g}. \end{aligned}$$
By the polar formula for integrals and (4.19),
$$\begin{aligned} \mu \bigl(M^{\frac{1}{n}}D \bigr) &= \int _{M^{\frac{1}{n}}D}g(x)\,dx \\ &= \int _{S^{n-1}} \int _{0}^{ \Vert \theta \Vert _{M^{1/n}D}^{-1}} r ^{n-1}g(r\theta )\,dr \,d\theta \\ &=M \int _{S^{n-1}} \int _{0}^{ \Vert \theta \Vert _{D}^{-1} }r ^{n-1}g \bigl(M^{\frac{1}{n}}r\theta \bigr)\,dr \,d\theta \\ &=\frac{M}{n} \int _{S^{n-1}} \Vert \theta \Vert _{K}^{-n}\,d \theta \\ &=M \vert K \vert , \end{aligned}$$
$$\begin{aligned} \frac{\mu (M^{\frac{1}{n}}D )}{\mu (K)}=\frac{M \vert K \vert }{\int _{K}g(x)\,dx}\leq \frac{M}{m}. \end{aligned}$$
Next using Proposition 4.6, for given measure μ with density function g, we have a result on \(o.m.r. (L,\mathcal{B} \mathcal{P}^{n}_{k,\mu } )\) and \(o.v.r. (L,\mathcal{B} \mathcal{P}^{n}_{k} )\).
Let L be an origin-symmetric star body in \({{\mathbb{R}}}^{n}\) and μ-measurable with density g,
$$\begin{aligned} m=\inf_{x\in {\mathbb{R}}^{n}}g(x)>0,\qquad M=\sup_{x\in {\mathbb{R}}^{n}}g(x)< + \infty \quad \textit{and}\quad 1\leq k\leq n-1. \end{aligned}$$
$$\begin{aligned} o.m.r. \bigl(L,\mathcal{B}\mathcal{P}^{n}_{k,\mu } \bigr)\leq \biggl(\frac{M}{m} \biggr) ^{\frac{2}{n}}o.v.r. \bigl(L, \mathcal{B}\mathcal{P}^{n}_{k} \bigr). \end{aligned}$$
By Proposition 4.6 and the definition of \(o.m.r. (L, \mathcal{B}\mathcal{P}^{n}_{k,\mu } )\) (see (2.1)) and (2.2)), we can get Proposition 4.7. □
Combining Theorem 3.2 and Proposition 4.7, we can get Theorem 3.3. □
This article discusses the following inequality:
which is a variant of the notable slicing inequality in convex geometry but more concise, for some constant C, for every positive integer n and every integer \(1\leq k< n\), and for every origin-symmetric convex body L and every measure μ with nonnegative even continuous density in \({\mathbb{R}}^{n}\). The k-intersection body, introduced by [7], plays an important role in the solution to the Busemann–Petty problem, which is equivalent to the slicing problem. By constructing the tool of generalized k-intersection body with measure μ and relevant concepts, we get (5.1) for some constant C.
Next, we give some remarks as the end of this article. Equation (4.21) gives a relationship between \(o.m.r. (L, \mathcal{B}\mathcal{P}^{n}_{k,\mu } )\) and \(o.v.r. (L, \mathcal{B}\mathcal{P}^{n}_{k} )\). Combining this result with Theorem 3.1 or Theorem 3.2, we get estimates for \(\mu (L)\) and \(\max_{H\in \mathit{Gr}_{n-k}}\mu (L\cap H)\), in which \(o.m.r. (L,\mathcal{B}\mathcal{P}^{n}_{k,\mu } )\) is replaced by \(o.v.r. (L,\mathcal{B}\mathcal{P}^{n}_{k} )\). Theorem 3.3 can also follows from the combination of (4.21) and Theorem 3.2.
The combination of (4.21) and Theorem 3.1 yields the following theorem.
$$\begin{aligned} \bigl(\mu (L) \bigr)^{\frac{n-k}{n}} \leq \bigl(o.v.r. \bigl(L, \mathcal{B} \mathcal{P}^{n}_{k} \bigr) \bigr)^{k} \frac{M^{ \frac{2k}{n}}}{m^{\frac{3k}{n}}}n^{\frac{k}{n}} \frac{ \vert S^{n-1} \vert ^{\frac{n-k}{n}}}{ \vert S^{n-k-1} \vert } \max_{H\in \mathit{Gr}_{n-k}} \mu (L\cap H) . \end{aligned}$$
It is worth noting that Theorem 5.1 can also be derived from Corollary 1 in [9].
Another point that needs noticing is that all the estimates for \(o.v.r. (L,\mathcal{B}\mathcal{P}^{n}_{k} )\) (for example, [8, 9, 13]) can lead to different results on \(\mu (L)\) and \(\max_{H\in \mathit{Gr}_{n-k}}\mu (L\cap H)\), just using Theorem 3.3 or Theorem 5.1.
For example, there is an estimate [8] for \(o.v.r. (L, \mathcal{B}\mathcal{P}^{n}_{k} )\) as follows.
(see [8]) Let K be a symmetric convex body in \({{\mathbb{R}}} ^{n}\) and \(1\leq k\leq n-1\). Then
$$\begin{aligned} o.v.r. \bigl(K,\mathcal{B}\mathcal{P}^{n}_{k} \bigr)\leq c \sqrt{\frac{n \log \frac{en}{k}}{k}}, \end{aligned}$$
where \(c>0\) is an absolute constant.
Then this result united with Theorem 3.2 can yields the following.
Corollary 5.3
$$\begin{aligned} \bigl(\mu (L) \bigr)^{\frac{n-k}{n}} \leq c^{k} \biggl( \frac{n\log \frac{en}{k}}{k} \biggr)^{\frac{k}{2}} \biggl(\frac{M}{m} \biggr)^{ \frac{2k}{n}}R^{k}n^{\frac{k}{n}} \frac{ \vert S^{n-1} \vert ^{ \frac{n-k}{n}}}{ \vert S^{n-k-1} \vert } \max _{H\in \mathit{Gr}_{n-k}}\mu (L\cap H), \end{aligned}$$
where \(m=\inf_{x\in {\mathbb{R}}^{n}}g(x)>0\), \(M=\sup_{x \in {\mathbb{R}}^{n}}g(x)<+\infty \), and \(c>0\) is an absolute constant.
The shortcoming of this paper is that the direct estimate of the outer measure ratio distance with respect to measure μ from L to the class \(\mathcal{B}\mathcal{P}^{n}_{k,\mu }\), \(o.m.r. (L, \mathcal{B}\mathcal{P}^{n}_{k,\mu } )\) is not offered, and Theorems 3.1–3.3 contain the coefficient relevant to the measure μ or the diameter of the star body L.
We want to obtain the best result as follows:
where C is an absolute constant, irrelevant to μ and L. But we have not realized it at present.
The author would like to thank professor Zhongkai Li for valuable comments and insightful suggestions, and to thank Shanxi Normal University for her financial support.
The author is supported by Shanxi Normal University for the publication of the article, and the grant number is 050502070350.
YY completed the work alone, and read and approved the final manuscript.
School of Mathematics and Computer Science, Shanxi Normal University, Linfen, China
Grinberg, E., Zhang, G.: Convolutions, transforms and convex bodies. Proc. Lond. Math. Soc. 78, 77–115 (1999) MathSciNetView ArticleGoogle Scholar
Klartag, B.: On convex perturbations with a bounded isotropic constant. Geom. Funct. Anal. 16, 1274–1290 (2006) MathSciNetView ArticleGoogle Scholar
Bourgain, J.: On the distribution of polynomials on high-dimensional convex sets. In: Morel, J.M., Teissier, B. (eds.) Lecture Notes in Mathematics, vol. 1469, pp. 127–137. Springer, Berlin (1991) Google Scholar
Milman, V., Pajor, A.: Isotropic position and inertia ellipsoids and zonoids of the unit ball of a normed n-dimensional space. In: Lindenstrauss, J., Milman, V. (eds.) Lecture Notes in Math., vol. 1376, pp. 64–104. Springer, Heidelberg (1989) Google Scholar
Koldobsky, A.: Slicing inequalities for subspaces of \(l_{p}\). Proc. Am. Math. Soc. 144, 787–795 (2016) View ArticleGoogle Scholar
Gardner, R.J.: Geometric Tomography, 2nd edn. Cambridge University Press, Cambridge (2006) View ArticleGoogle Scholar
Zhang, G.: Sections of convex bodies. Am. J. Math. 118, 319–340 (1996) MathSciNetView ArticleGoogle Scholar
Koldobsky, A., Paouris, G., Zymonopoulou, M.: Isomorphic properties of intersection bodies. J. Funct. Anal. 261, 2697–2716 (2011) MathSciNetView ArticleGoogle Scholar
Koldobsky, A.: Slicing inequality for measures of convex bodies. Adv. Math. 283, 437–488 (2015) View ArticleGoogle Scholar
Ball, K.: Isometric problems in l p $l_{p}$ and sections of convex sets. Ph.D. thesis, Trinity College (1986) Google Scholar
Bourgain, J.: On high-dimensional maximal functions associated to convex bodies. Am. J. Math. 108, 1467–1476 (1986) MathSciNetView ArticleGoogle Scholar
Bourgain, J.: Geometry of Banach spaces and harmonic analysis. In: Proceedings of the International Congress of Mathematicians, Berkeley, California, 3–11 August 1986. Am. Math. Soc., Providence (1987) Google Scholar
Koldobsky, A., Pajor, A.: A remark on measures of sections of \(l_{p}\)-balls. In: Morel, J.M., Teissier, B. (eds.) Lecture Notes in Mathematics, vol. 2169, pp. 213–220. Springer, Berlin (2017) Google Scholar
Goodey, P., Well, W.: Intersection bodies and ellipsoids. Mathematika 42, 295–304 (1995) MathSciNetView ArticleGoogle Scholar
Koldobsky, A.: Intersection bodies, positive definite distributions, and the Busemann–Petty problem. Am. J. Math. 120, 827–840 (1998) MathSciNetView ArticleGoogle Scholar
Koldobsky, A.: Fourier analysis in convex geometry. In: Bona, J.L. et al. (eds.) Mathematical Survey and Monographs, vol. 116. Am. Math. Soc., Providence (2005) Google Scholar
Lutwak, E.: Intersection bodies and dual mixed volumes. Adv. Math. 71, 232–261 (1988) MathSciNetView ArticleGoogle Scholar | CommonCrawl |
JGM Home
Nonlinear constraints in nonholonomic mechanics
December 2014, 6(4): 549-566. doi: 10.3934/jgm.2014.6.549
A dynamical condition for differentiability of Mather's average action
Alexandre Rocha 1, and Mário Jorge Dias Carneiro 2,
IEF, Campus UFV-Florestal, Universidade Federal de Viçosa, Florestal, MG 35690-000, Brazil
ICEX, Universidade Federal de Minas Gerais, Belo Horizonte, MG 30161-970, Brazil
Received October 2013 Revised August 2014 Published December 2014
We prove the differentiability of Mather's average action on all rotation vectors of measures whose supports are contained in a Lipschitz Lagrangian asymptotically isolated graph, invariant by Tonelli Hamiltonians. We also show the relationship between differentiability of $\beta $ and local integrability of the Hamiltonian flow.
Keywords: integrability of the Hamiltonian flow., differentiability of Mather's average action, Mather's theory.
Mathematics Subject Classification: Primary: 37J50, 37J15; Secondary: 37J3.
Citation: Alexandre Rocha, Mário Jorge Dias Carneiro. A dynamical condition for differentiability of Mather's average action. Journal of Geometric Mechanics, 2014, 6 (4) : 549-566. doi: 10.3934/jgm.2014.6.549
M-C. Arnaud, The tiered Aubry set for autonomous Lagrangian functions, Ann. Inst. Fourier (Grenoble), 58 (2008), 1733-1759. doi: 10.5802/aif.2397. Google Scholar
M-C. Arnaud, A particular minimization property implies $C^{0}$-integrability, Journal of Differential Equations, 250 (2011), 2389-2401. doi: 10.1016/j.jde.2010.12.002. Google Scholar
V. Bangert, Minimal geodesics, Erg. Theory and Dynamical Systems, 10 (1999), 263-286. doi: 10.1017/S014338570000554X. Google Scholar
P. Bernard, Existence of $C^{1,1}$ critical sub-solutions of the Hamilton-Jacobi equation on compact manifolds, Ann. Sci. École Norm. Sup., 40 (2007), 445-452. doi: 10.1016/j.ansens.2007.01.004. Google Scholar
P. Bernard and G. Contreras, A generic property of families of Lagrangian systems, Annals of Mathematics, 167 (2008), 1099-1108. doi: 10.4007/annals.2008.167.1099. Google Scholar
P. Bernard, On the Conley decomposition of Mather sets, Rev. Mat. Iberoamericana, 26 (2010), 115-132. doi: 10.4171/RMI/596. Google Scholar
D. Burago, S. Ivanov and B. Kleiner, On the structure of the stable norm of periodic metrics, Math. Research Letters, 4 (1997), 791-808. doi: 10.4310/MRL.1997.v4.n6.a2. Google Scholar
G. Contreras and R. Iturriaga, Global Minimizers of Autonomous Lagrangians, $22^{\circ }$ Colóquio Brasileiro de Matemática IMPA, 1999. Google Scholar
G. Contreras, L. Macarini and G. Paternain, Periodic orbits for exact magnetic flows on surfaces, International Mathematics Research Notices, 2004 (2004), 361-387. doi: 10.1155/S1073792804205050. Google Scholar
G. Contreras, J. Delgado and R. Iturriaga, Lagrangian flows: The dynamics of globally minimizing orbits-II, Bol. Soc. Brasil. Mat., 28 (1997), 155-196. doi: 10.1007/BF01233390. Google Scholar
M. J. Dias Carneiro, On minimizing measures of the action of autonomous Lagrangians, Nonlinearity, 8 (1995), 1077-1085. doi: 10.1088/0951-7715/8/6/011. Google Scholar
M. J. Dias Carneiro and A. Lopes, On the minimal action function of autonomous lagrangians associated to magnetic fields, Annales de l'I. H. P., 16 (1999), 667-690. doi: 10.1016/S0294-1449(00)88183-4. Google Scholar
A. Fathi and A. Siconolf, Existence of $C^{1}$ critical sub-solutions of the Hamilton-Jacobi equation, Invent. Math., 155 (2004), 363-388. doi: 10.1007/s00222-003-0323-6. Google Scholar
A. Fathi, Weak KAM Theorem and Lagrangian Dynamics Preliminary Version Number 10,, 2008., (). Google Scholar
A. Fathi, A. Figalli and L. Rifford, On the Hausdorff dimension of the Mather quotient, Comm. Pure Appl. Math., 62 (2009), 445-500. doi: 10.1002/cpa.20250. Google Scholar
A. Fathi, A. Giuliani and A. Sorrentino, Uniqueness of Invariant Lagrangian Graphs in a Homology or a Cohomology Class, Ann. Scuola Norm. Sup. Pisa Cl. Sci., 8 (2009), 659-680. Google Scholar
M. Herman, Inégalités "a priori" pour des tores lagrangiens invariants par des difféomorphismes symplectiques, Inst. Hautes Études Sci. Publ. Math., 70 (1989), 47-101. Google Scholar
R. Mañé, Generic properties and problems of minimizing measure of Lagrangian dynamical systems, Nonlinearity, 9 (1996), 273-310. doi: 10.1088/0951-7715/9/2/002. Google Scholar
R. Mañé, Global Variational Methods in Conservative Dynamics, IMPA, 1993. Google Scholar
D. Massart, On Aubry sets and Mather's action functional, Israel J. Math., 134 (2003), 157-171. doi: 10.1007/BF02787406. Google Scholar
D. Massart, Vertices of Mather's Beta function, II, Ergodic Theory Dynam. Systems, 29 (2009), 1289-1307. doi: 10.1017/S0143385708000631. Google Scholar
D. Massart, Aubry sets vs Mather sets in two degrees of freedom, Cal. Var. Partial Diff. Eqns, 42 (2011), 429-460. doi: 10.1007/s00526-011-0393-z. Google Scholar
D. Massart, Stable norm of surfaces: Local structure of the unit ball at rational directions, Geom. Funct. Anal., 7 (1997), 996-1010. doi: 10.1007/s000390050034. Google Scholar
D. Massart and A. Sorrentino, Differentiability of Mather's average action and integrability on closed surfaces, Nonlinearity, 24 (2011), 1777-1793. doi: 10.1088/0951-7715/24/6/005. Google Scholar
J. N. Mather, Action minimizing invariant measures for positive definite Lagrangian Systems, Math. Zeitschrift, 207 (1991), 169-207. doi: 10.1007/BF02571383. Google Scholar
J. R. Munkres, Elements of Algebraic Topology, Addison-Wesley Publ. Co., Menlo Park, CA, 1984. Google Scholar
G. Paternain, L. Polterovich and K. Siburg, Boundary rigidity for Lagrangian submanifolds, non-removable intersections, and Aubry-Mather theory, Mosc. Math. J., 3 (2003), 593-619. Google Scholar
A. Sorrentino, On the integrability of Tonelli Hamiltonians, Trans. Amer. Math. Soc., 363 (2011), 5071-5089. doi: 10.1090/S0002-9947-2011-05492-9. Google Scholar
Alfonso Sorrentino. Computing Mather's $\beta$-function for Birkhoff billiards. Discrete & Continuous Dynamical Systems, 2015, 35 (10) : 5055-5082. doi: 10.3934/dcds.2015.35.5055
Mads R. Bisgaard. Mather theory and symplectic rigidity. Journal of Modern Dynamics, 2019, 15: 165-207. doi: 10.3934/jmd.2019018
Kaizhi Wang, Lin Wang, Jun Yan. Aubry-Mather theory for contact Hamiltonian systems II. Discrete & Continuous Dynamical Systems, 2022, 42 (2) : 555-595. doi: 10.3934/dcds.2021128
Ugo Bessi. Viscous Aubry-Mather theory and the Vlasov equation. Discrete & Continuous Dynamical Systems, 2014, 34 (2) : 379-420. doi: 10.3934/dcds.2014.34.379
Hans Koch, Rafael De La Llave, Charles Radin. Aubry-Mather theory for functions on lattices. Discrete & Continuous Dynamical Systems, 1997, 3 (1) : 135-151. doi: 10.3934/dcds.1997.3.135
Rodolfo Ríos-Zertuche. Characterization of minimizable Lagrangian action functionals and a dual Mather theorem. Discrete & Continuous Dynamical Systems, 2020, 40 (5) : 2615-2639. doi: 10.3934/dcds.2020143
Fabio Camilli, Annalisa Cesaroni. A note on singular perturbation problems via Aubry-Mather theory. Discrete & Continuous Dynamical Systems, 2007, 17 (4) : 807-819. doi: 10.3934/dcds.2007.17.807
Yasuhiro Fujita, Katsushi Ohmori. Inequalities and the Aubry-Mather theory of Hamilton-Jacobi equations. Communications on Pure & Applied Analysis, 2009, 8 (2) : 683-688. doi: 10.3934/cpaa.2009.8.683
Fabio Cipriani, Gabriele Grillo. On the $l^p$ -agmon's theory. Conference Publications, 1998, 1998 (Special) : 167-176. doi: 10.3934/proc.1998.1998.167
Alicia Cordero, José Martínez Alfaro, Pura Vindel. Bott integrable Hamiltonian systems on $S^{2}\times S^{1}$. Discrete & Continuous Dynamical Systems, 2008, 22 (3) : 587-604. doi: 10.3934/dcds.2008.22.587
Artur O. Lopes, Rafael O. Ruggiero. Large deviations and Aubry-Mather measures supported in nonhyperbolic closed geodesics. Discrete & Continuous Dynamical Systems, 2011, 29 (3) : 1155-1174. doi: 10.3934/dcds.2011.29.1155
Bassam Fayad. Discrete and continuous spectra on laminations over Aubry-Mather sets. Discrete & Continuous Dynamical Systems, 2008, 21 (3) : 823-834. doi: 10.3934/dcds.2008.21.823
Diogo A. Gomes. Viscosity solution methods and the discrete Aubry-Mather problem. Discrete & Continuous Dynamical Systems, 2005, 13 (1) : 103-116. doi: 10.3934/dcds.2005.13.103
Siniša Slijepčević. The Aubry-Mather theorem for driven generalized elastic chains. Discrete & Continuous Dynamical Systems, 2014, 34 (7) : 2983-3011. doi: 10.3934/dcds.2014.34.2983
Danilo Coelho, David Pérez-Castrillo. On Marilda Sotomayor's extraordinary contribution to matching theory. Journal of Dynamics & Games, 2015, 2 (3&4) : 201-206. doi: 10.3934/jdg.2015001
Ursula Hamenstädt. Bowen's construction for the Teichmüller flow. Journal of Modern Dynamics, 2013, 7 (4) : 489-526. doi: 10.3934/jmd.2013.7.489
Ammari Zied, Liard Quentin. On uniqueness of measure-valued solutions to Liouville's equation of Hamiltonian PDEs. Discrete & Continuous Dynamical Systems, 2018, 38 (2) : 723-748. doi: 10.3934/dcds.2018032
Sonja Hohloch, Silvia Sabatini, Daniele Sepe. From compact semi-toric systems to Hamiltonian $S^1$-spaces. Discrete & Continuous Dynamical Systems, 2015, 35 (1) : 247-281. doi: 10.3934/dcds.2015.35.247
Regina Martínez, Carles Simó. Non-integrability of the degenerate cases of the Swinging Atwood's Machine using higher order variational equations. Discrete & Continuous Dynamical Systems, 2011, 29 (1) : 1-24. doi: 10.3934/dcds.2011.29.1
David Blázquez-Sanz, Juan J. Morales-Ruiz. Lie's reduction method and differential Galois theory in the complex analytic context. Discrete & Continuous Dynamical Systems, 2012, 32 (2) : 353-379. doi: 10.3934/dcds.2012.32.353
Alexandre Rocha Mário Jorge Dias Carneiro | CommonCrawl |
August 2012 , Volume 33, Issue 1–2, pp 69–88 | Cite as
Performance benchmarking of quadrotor systems using time-optimal control
Markus Hehn
Robin Ritz
First Online: 02 March 2012
Frequently hailed for their dynamical capabilities, quadrotor vehicles are often employed as experimental platforms. However, questions surrounding achievable performance, influence of design parameters, and performance assessment of control strategies have remained largely unanswered. This paper presents an algorithm that allows the computation of quadrotor maneuvers that satisfy Pontryagin's minimum principle with respect to time-optimality. Such maneuvers provide a useful lower bound on the duration of maneuvers, which can be used to assess performance of controllers and vehicle design parameters. Computations are based on a two-dimensional first-principles quadrotor model. The minimum principle is applied to this model to find that time-optimal trajectories are bang-bang in the thrust command, and bang-singular in the rotational rate control. This paper presents a procedure allowing the computation of time-optimal maneuvers for arbitrary initial and final states by solving the boundary value problem induced by the minimum principle. The usage of the computed maneuvers as a benchmark is demonstrated by evaluating quadrotor design parameters, and a linear feedback control law as an example of a control strategy. Computed maneuvers are verified experimentally by applying them to quadrocopters in the ETH Zurich Flying Machine Arena testbed.
Aerial robotics Motion planning and control Quadrotor control UAV design Benchmarking of UAV controllers
This research was funded in part by the Swiss National Science Foundation (SNSF).
Performance benchmarking. (MP4 18.1 MB)
10514_2012_9282_MOESM2_ESM.tar (2.4 mb)
Performance benchmarking of quadrotor systems using time-optimal control. (TAR 2.4 MB)
Appendix: Algorithm for calculation of time-optimal maneuvers
This appendix discusses the numerical algorithm presented in Sect. 4 in more detail, with a focus on how the individual steps were implemented. This implementation (in Matlab) of the algorithm is available for free use on the first author's website, and is submitted along with this article.
This appendix follows the outline of Sect. 4, first introducing maneuvers containing no singular arcs in Sects. 9.1–9.3, and then showing modifications for bang-singular maneuvers in Sect. 9.4.
Figure 13 shows a flowchart diagram of the algorithm for bang-bang maneuvers, and in the following, the three steps are introduced in detail.
Flowchart diagram of the algorithm that computes bang-bang maneuvers satisfying the minimum principle. The three steps are presented in detail in Sects. 9.1–9.3. In this graph, ≈ is used to denote that the equation must be solved to acceptable accuracy
9.1 Switching time optimization
Due to the assumption that the optimal solution is a bang-bang maneuver, the control trajectory u can be efficiently parameterized by the initial control vector u(t=0) and the switching times of the two control inputs, denoted by the sets
$$\begin{array}{l@{\quad}l}\{{T}_{u_R}\} = {T}_{u_R}^i & \hspace {7mm} \text{for}~i = 1,2,\dots,N_R, \\[3pt]\{{T}_{u_T}\} = {T}_{u_T}^j &\hspace{7mm} \text{for}~j = 1,2,\dots,N_T.\end{array} $$
N R and N T are the number of switches of the rotational control input and the thrust input, respectively. The principle of STO is to choose N R and N T , and to then improve an initial choice of the switching times \(\{{T}_{u_{R}}\}_{\mathit{ini}}\) and \(\{{T}_{u_{T}}\}_{\mathit{ini}}\), until a control trajectory is found that guides the quadrotor from x 0 to x T with an acceptable accuracy. The final state error is measured using the scalar final state residual function
$$P_{\mathit{res}}\bigl(\{{T}_{u_R}\},\{{T}_{u_T}\},{T}\bigr) = \bigl(\mathbf {x}({T}) - \mathbf {x}_T\bigr)^T W \bigl(\mathbf {x}({T}) - \mathbf {x}_T\bigr), $$
where the matrix W=diag(w 1,w 2,w 3,w 4,w 5) contains the weights of the different state errors. The final state x(T) resulting from the chosen switching times can be obtained by numerically integrating the system dynamics f(x,u) over the interval [0,T], where u is defined by the initial control inputs u(t=0) and the switching times \(\{{T}_{u_{R}}\}\) and \(\{{T}_{u_{T}}\}\). The maneuver duration T is not known a priori and we seek the minimum T for which P res =0 can be obtained. The problem can be written as
$$\begin{array}{l@{\quad}l}\multicolumn{2}{l}{\text{find} \quad \{{T}_{u_R}\}, \{{T}_{u_T}\}, {T}} \\[3pt]\text{subject to} & P_{\mathit{res}}\bigl(\{{T}_{u_R}\},\{{T}_{u_T}\},{T}\bigr) = 0, \\[3pt]& {T} \leq\{{T}\}_{ach},\end{array} $$
where {T} ach is the set of all T for which P res =0 is achievable, implying that the maneuver to be found is the one with the shortest possible duration.
The solution of (41) is computed by a two-step algorithm: For an initially small, fixed maneuver duration T, the state residual P res is minimized by varying the switching times \(\{{T}_{u_{R}}\}\) and \(\{{T}_{u_{T}}\}\) using a simplex search method (this choice was based on the observation that derivative-free optimization algorithms have shown to perform significantly better in this optimization). After the minimization, T is increased using the secant method
$${T}_{i+1} = {T}_i + \frac{{T}_i-{T}_{i-1}}{(P_{\mathit{res},i-1}/P_{\mathit{res},i})-1}, $$
or by a constant value if convergence of the secant method is not assumed, see Dahlquist and Björck (2003). These two steps are repeated until P res =0 is achieved. Since the initial value of T is chosen to be too small to complete the maneuver, and since T is successively increased, the algorithm delivers a value close to the smallest T for which P res =0 is achievable.
The choice of the number of switches is based on the user's intuition and experience from the computation of other maneuvers. If the number is chosen too high, the algorithm can converge to the correct result by producing dispensable switching times, as discussed below. The initial guess for the duration of the maneuver T must be chosen to be too short to complete the maneuver, and can be obtained from a guess based on the vehicle's translational acceleration capabilities, or on similar maneuvers.
9.2 Parameter extraction
After having found a bang-bang trajectory that brings the quadrotor from the initial state x 0 to the desired final state x T , it is necessary to verify that it is a solution to BVP (32). Therefore, the constant vector c=(c 1,c 2,c 3,c 4) must be determined, based on the trajectories resulting from the STO.
9.2.1 Dispensable switching times
If the number of switches N R and N T was chosen too high, then the STO may converge to a solution containing dispensable switching times, which in fact do not represent switches. Therefore, before the constant vector c is computed, all switches at t=0 and t=T are removed, and the initial control vector u(0) is adjusted accordingly. Furthermore, two switches of the same control input, which occur at the same time, are dispensable as well and must, consequently, also be removed.
9.2.2 Conditions on the trajectory of Φ R
The switching function Φ R must be zero whenever the control input u R switches. From the STO, the set of switching times \(\{{T}_{u_{R}}\}\) is given, and for each element of this set, Φ R must vanish. This leads to the conditions
$$\Phi_R\bigl({T}_{u_R}^i\bigr) = 0 \quad \text{for}~i =1,2,\dots,N_R. $$
As shown in Sect. 3, only the derivative \(\dot{\Phi}_{R}\) of the switching function is known a priori. However, once the state trajectories are known from the STO, the condition H≡0 (which must hold if the maneuver is time-optimal) can be used to compute Φ R . Recalling the Hamiltonian (14) and using the definition Φ R =p 5 yields
$$\Phi_R = \frac{1 + p_1 \dot{{x}} + p_2 u_T\sin{\theta } + p_3 \dot{{z}} + p_4 (u_T\cos{\theta} - 1)}{ -u_R}. $$
As shown in (17), the first four costates p i are all linear in c. The above equation can therefore be written as a linear function of c:
Given the linear form of Φ R , (43) states N R linear conditions on the constant vector c.
The derivative \(\dot{\Phi}_{R}\) is given by (31). For a trajectory that satisfies the minimum principle, the integral of \(\dot{\Phi}_{R}\) must coincide with the trajectory of Φ R given by (45). Hence, for an arbitrary interval [t 1,t 2]∈[0,T],
must hold, where the left side of the equation is computed using H≡0, i.e. by (45). The costates p 2 and p 4 are linear functions of c, and the above equation can be written as
To set up conditions on c based on (47), the maneuver interval [0,T] is divided into N R +1 subintervals that are separated by the switching times \(\{{T}_{u_{R}}\}\), i.e.
$$[0,{T}] = \bigcup\bigl\{\bigl[0,{T}_{u_R}^1\bigr],\bigl[{T}_{u_R}^1,{T}_{u_R}^2\bigr],\dots,\bigl[{T}_{u_R}^{N_R},T\bigr] \bigr\}. $$
This choice is beneficial with respect to the computational effort, because the switching function Φ R must vanish at the switching times; the left side of (47) can be set to zero for all intervals, except for the first and the last one. The N R +1 intervals describe N R +1 additional linear conditions on the constant vector c.
9.2.3 Conditions on the trajectory of Φ T
Since the thrust switching function Φ T is known explicitly, the conditions resulting from \(\{{T}_{u_{T}}\}\) are straightforward. From the fact that Φ T must vanish at each switch of u T , the condition
$$\Phi_T\bigl({T}_{u_T}^i\bigr) = 0 \quad \text{for}~i =1,2,\dots,N_T $$
must be satisfied, where the set \(\{{T}_{u_{T}}\}\) is given by the STO. The thrust switching function (26) is a linear function of the costates p 2 and p 4, and again linear in c:
$$\Phi_T = -c_1 t \sin{\theta} + c_2 \sin{\theta} - c_3 t \cos{\theta} + c_4 \cos{\theta}. $$
This linear form of the thrust switching function Φ T allows one to define N T additional linear conditions on the elements of the constant vector c, based on the conditions from (49).
9.2.4 Condition matrix equation
For the minimum principle to be satisfied, a constant vector c that fulfills all the linear conditions to an acceptable accuracy must exist. The conditions on c derived above are therefore combined into a matrix equation, which we denote as
$$A \mathbf{c} = r. $$
The matrix A is of size (N c ×4) and the vector r has the length N c , where N c is the total number of linear conditions:
$$N_c = 2N_R + N_T + 1. $$
For all maneuvers considered here, the system of (51) is overdetermined, permitting no exact solution. Therefore, the least squares solution of (51) is computed (Bernstein 2005), which is given by
$$\mathbf{c}^* = \bigl(A^T A\bigr)^{-1} A^T r. $$
To verify that a solution to the overdetermined system of equations exists, c ∗ is substituted back into (51). If the error vector exceeds the expected numerical discrepancies,3 then the solution is considered to be invalid. In the context of the optimal control problem, this implies that there exists no constant vector c for which the minimum principle is fulfilled, and consequently the trajectories x and u resulting from the STO do not satisfy the minimum principle. A possible reason is that the chosen number of switches N R and N T and the initial values \(\{{T}_{u_{R}}\}_{ini}\) and \(\{{T}_{u_{T}}\}_{ini}\) did not cause the STO to converge to the desired maneuver. This may be corrected by varying these parameters. Another reason for the lack of a solution could be that the time-optimal maneuver for the given boundary conditions contains singular arcs, a case that will be discussed in Sect. 9.4.
If the condition matrix equation is satisfied to an acceptable accuracy, then a valid parameter vector c has been found and the parameter extraction step is complete.
9.3 BVP solver
To verify that BVP (32) is fulfilled and to minimize numerical errors, a last step is performed where the BVP is solved numerically: The state residual P res is minimized by varying the constant vector c and the maneuver duration T. The problem can be written as
$$\begin{array}{l@{\quad}l}\multicolumn{2}{l}{\text{minimize} \quad P_{\mathit{res}}(\mathbf{c},{T})} \\[3pt]\text{subject to} & \dot{\mathbf{x}}_a = f_a({t},\mathbf{x}_a), \\[3pt]& \mathbf{x}_a(0) = \bigl(\mathbf {x}_0,\Phi_R(0)\bigr). \\\end{array} $$
The constants c resulting from the parameter extraction and the maneuver duration T obtained by the STO are used as initial values. The optimization over the constants c and the terminal time T is carried out using a simplex algorithm. As these initial values are close to the exact solution, the BVP solver converges quickly, provided that the solution resulting from the STO is indeed a solution to the minimum principle. The initial value of the switching function Φ R (0) can be obtained by the condition H≡0, i.e. by (45), evaluated at t=0. If P res is sufficiently small after the minimization, the maneuver satisfies the boundary conditions of the final state being reached, and the algorithm has terminated successfully.
9.4 Modified algorithm for bang-singular maneuvers
The algorithm described above is able to solve BVP (32), provided that the resulting maneuver does not contain singular arcs. In the general case, however, the time-optimal maneuver is bang-singular, and the algorithm needs to be modified to take possible singular arcs into account.
Within a singular arc, the trajectory of u R is given by (23) and depends on the constants c. Due to this dependency, computing the constants c after the STO is no longer sufficient, since they determine the singular input and have an impact on the maneuver trajectory. The parameter extraction is therefore embedded into the STO, and the resulting algorithm consists of two successive steps:
Applying STO, a maneuver that brings the quadrotor to the desired final state is found, and in parallel, a constant vector c that fulfills the condition matrix equation resulting from the parameter extraction is computed.
Having a reasonable initial guess of the switching times, of the maneuver duration T, and of the constant vector c, a BVP solver that computes a solution to BVP (32) is applied.
Figure 14 shows a flowchart diagram of the algorithm to find bang-singular solutions.
Flowchart diagram of the algorithm to compute bang-singular maneuvers that satisfy the minimum principle. The symbol ≈ is used to denote that the equation must be solved to acceptable accuracy
9.4.1 Switching time optimization with embedded parameter extraction
For bang-singular maneuvers, u R may stay within a singular arc for a particular duration each time it switches. We introduce a new set of parameters that describes the durations of the singular arcs, and denote the duration within the singular arc at the switching time \({T}_{u_{R}}^{i}\) as \({D}_{s,u_{R}}^{i}\). At the time \({T}_{u_{R}}^{i}\) the control input u R enters the singular arc, and at time \({{T}_{u_{R}}^{i}+{D}_{s,u_{R}}^{i}}\) the singular arc is left and u R switches to −1 or +1.4 A bang-singular maneuver is characterized by the sets
$$\begin{array}{l@{\quad}l}\{{T}_{u_R}\} = {T}_{u_R}^i &\text{for}~i = 1,2,\dots,N_R, \\[3pt]\{{D}_{s,u_R}\} = {D}_{s,u_R}^i&\text{for}~i = 1,2,\dots,N_R, \\[2pt]\{{T}_{u_T}\} = {T}_{u_T}^j &\text{for}~j = 1,2,\dots,N_T.\end{array} $$
Within a singular arc, u R is given by (23) and its trajectory depends on the constants c. The final state residual P res is therefore not only a function of the maneuver duration T and of the sets of the switching times, but also of the constant vector c. Accordingly, the state residual may be written as
The new parameter set \(\{{D}_{s,u_{R}}\}\) and the constant vector c are additional optimization variables during the STO.
If the solution is to satisfy the minimum principle, the optimization variables overconstrain the problem: For the solution to satisfy the optimality conditions, the control inputs must be the optimal control inputs, as specified by (24) and (30). These optimal inputs could be found using c to compute the switching functions. This is avoided, however, because the separate optimization of the switching times and c has shown to be more robust.
Because only constants c that satisfy the condition matrix equation A c=r from the parameter extraction are a valid choice, we define the condition residual to be
$$C_{\mathit{res}}\bigl(\{{T}_{u_R}\},\{{T}_{u_T}\},\{{D}_{s,u_R}\},\mathbf{c},{T}\bigr) = (A \mathbf{c} -r)^T W_c (A \mathbf{c} - r), $$
where W c is a diagonal matrix containing the weights of the different linear conditions. It is important to note that the matrix A and the vector r are functions of the switching times \(\{{T}_{u_{R}}\}\) and \(\{{T}_{u_{T}}\}\), of the singular arc durations \(\{{D}_{s,u_{R}}\}\), of the maneuver duration T, and of the constants c. For a maneuver that satisfies the minimum principle, the condition residual C res must vanish. Consequently, the STO problem for bang-singular maneuvers can be written as
$$\begin{array}{l@{\quad}l}\multicolumn{2}{l}{\text{find} \quad \{{T}_{u_R}\}, \{{T}_{u_T}\}, \{{D}_{s,u_R}\}, \mathbf{c},{T}} \\[3pt]\text{subject to} & P_{\mathit{res}}(\{{T}_{u_R}\},\{{T}_{u_T}\},\{{D}_{s,u_R}\},\mathbf {c},{T}) = 0, \\[3pt]& C_{\mathit{res}}(\{{T}_{u_R}\},\{{T}_{u_T}\},\{{D}_{s,u_R}\},\mathbf{c},{T}) = 0, \\[3pt]& {T} \leq\{{T}\}_{ach},\end{array} $$
where {T} ach denotes the set of all T for which P res =0 and C res =0 is achievable.
For bang-singular maneuvers, the sum of the state and the condition residual P res +C res is minimized during the STO. For the computation of C res , the matrix A and the vector r are required: The parameter extraction is no longer an isolated step, but needs to be performed for each evaluation of C res within the STO minimization. The parameter extraction is not used to compute the constants c (which are optimization variables), but to compute A and r.
9.4.2 Additional linear conditions for bang-singular maneuvers
For the parameter extraction of bang-singular maneuvers, which is needed to obtain A and r, there exist additional linear conditions that take the requirements on the switching functions within singular arcs into account.
Additional conditions on the trajectory of Φ R
Considering bang-singular maneuvers, the rotational switching function Φ R must not only have a zero-crossing at each \({T}_{u_{R}}^{i}\), but it must also stay at zero for the duration of the corresponding singular arc \({D}_{s,u_{R}}^{i}\). An additional set of constraints is introduced, requiring that Φ R is zero at the beginning and at the end of the singular arcs:
$$\begin{array}{l@{\quad}l}\Phi_R\bigl({T}_{u_R}^i\bigr) = 0 & \text{for}~i =1,2,\dots,N_R, \\[3pt]\Phi_R\bigl({T}_{u_R}^i+{D}_{s,u_R}^i\bigr) = 0 &\text{for}~i = 1,2,\dots,N_R.\end{array} $$
Because these conditions do not imply that Φ R is zero during the entire singular arc, it is necessary to verify the trajectory of Φ R after the computation. If a switch \({T}_{u_{R}}^{i}\) has no singular arc, i.e. if \({D}_{s,u_{R}}^{i}=0\), then the corresponding two conditions in (59) are identical. From this it follows that one additional condition results for each singular arc. We denote the number of singular arcs as N s , hence (59) describes N R +N s conditions. This means that N s additional conditions have been identified, compared to the bang-bang case. As derived in Sect. 9.2, these conditions are linear with respect to c.
As the derivative of the rotational switching function \(\dot{\Phi}_{R}\) is known explicitly, we demand that the integration value of \(\dot{\Phi }_{R}\) between two switches of u R is zero for bang-bang maneuvers. For bang-singular maneuvers, we pose similar conditions, but extra time intervals over the singular arcs are created. An integration value of zero does not imply that Φ R stays at zero during the whole singular arc, but constant drifts of Φ R are penalized. Hence, the intervals over which \(\dot{\Phi}_{R}\) is integrated are
$$\begin{array}{rcl}[0,{T}] & = & \bigcup\bigl\{\bigl[0,{T}_{u_R}^1\bigr],\bigl[{T}_{u_R}^1,T_{s,u_R}^1\bigr],\bigl[T_{s,u_R}^1,{T}_{u_R}^2\bigr],\dots\\[6pt]& & \hphantom{\bigcup\bigl\{} \dots,\bigl[T_{s,u_R}^{N_R-1},{T}_{u_R}^{N_R}\bigr],\bigl[{T}_{u_R}^{N_R},T_{s,u_R}^{N_R}\bigr],\bigl[T_{s,u_R}^{N_R},T\bigr] \bigr\},\end{array} $$
where \(T_{s,u_{R}}^{i}={T}_{u_{R}}^{i}+{D}_{s,u_{R}}^{i}\) is used for a more compact notation. Analogously to the bang-bang case, a linear condition for each of these intervals can be constructed using (47). If a switch has no singular arc, then \({{D}_{s,u_{R}}^{i}=0}\) and the corresponding interval vanishes. Hence, for bang-singular maneuvers, N R +N s +1 linear conditions on the constant vector c result. Compared to a bang-bang maneuver, N s additional conditions are introduced.
Assuming that the thrust input u T does not switch at the edges of the singular intervals, \(\dot{\Phi}_{R}\) is continuous over the border of the singular arcs, as can be seen from (21). Consequently, the switching function Φ R enters and leaves a singular arc tangentially. We therefore impose the conditions that the derivative \(\dot{\Phi}_{R}\) is zero at the edges of every singular arc. For each singular arc, i.e. for each \({D}_{s,u_{R}}^{i}>0\), two additional conditions result:
$$\begin{array}{l@{\quad}l}\dot{\Phi}_R\bigl({T}_{u_R}^i\bigr) = 0 &\text{for}~i = 1,2,\dots,N_s, \\[3pt]\dot{\Phi}_R\bigl({T}_{u_R}^i+{D}_{s,u_R}^i\bigr)= 0 & \text{for}~i = 1,2,\dots,N_s .\end{array} $$
The derivative of the rotational switching function is given by
$$\dot{\Phi}_R = (c_1 {t} - c_2)u_T\cos{\theta} + (c_4 - c_3 {t})u_T\sin{\theta}, $$
which has been derived in Sect. 3. This is a linear function of the constants c, and yields 2N s additional conditions.
General condition matrix equation
In total, 4N s additional conditions have been identified. It follows that in the case of a bang-singular maneuver, the condition matrix equation
$$A \mathbf{c} = r $$
has N c rows, with a total number of conditions of
$$N_c = 2N_R + N_T + 4N_s + 1. $$
The condition matrix equation is overdetermined as soon as the maneuver has at least one singular arc.
9.4.3 BVP solver for bang-singular maneuvers
Similar to the algorithm for bang-bang maneuvers, the final step is the reduction of errors through the application of a BVP solver. If the maneuver contains singular arcs, Φ R stays at zero for a nontrivial interval of time. Since the system is integrated numerically, Φ R is near zero during the singular arcs, but does not vanish completely due to numerical inaccuracies. As Φ R enters and leaves the singular arcs tangentially, defining a threshold value below which Φ R is considered to be zero is not a straightforward task. For this reason, the rotational control trajectory u R is not determined using the optimal control law (i.e. based on its switching function Φ R ), but is based on the sets \(\{{T}_{u_{R}}\}\) and \(\{{D}_{s,u_{R}}\}\). Consequently, \(\{{T}_{u_{R}}\}\) and \(\{{D}_{s,u_{R}}\}\) are optimizing variables during the BVP minimization, because they impact the control trajectory u. Further, since the switching times of u R are not determined based on the constants c, the optimal control laws are not implicitly satisfied. One must thus ensure that the condition matrix equation is fulfilled, which is the case if C res vanishes. Thus, as during the switching time optimization, the sum of the state residual P res and the condition residual C res is minimized. The BVP solver problem for bang-singular maneuvers becomes
$$\begin{array}{l@{\quad}l}\multicolumn{2}{l}{\text{minimize}\quad P_{\mathit{res}} + C_{\mathit{res}}} \\[3pt]\text{subject to} & \dot{\mathbf{x}} = f(\mathbf {x},\mathbf {u}), \\[3pt]& \mathbf {x}(0) = \mathbf {x}_0,\end{array} $$
where the control trajectory u R is computed according to the switching times and singular arc durations, and u T according to the optimal control law (30). Note that the arguments \((\{{T}_{u_{R}}\},\{{D}_{s,u_{R}}\},\mathbf{c},{T})\) of P res and C res have been omitted for reasons of clarity.
In the BVP Solver step, the N T linear conditions resulting from the thrust input are trivially satisfied, because u T is computed based on its switching function Φ T . Hence, when the matrix condition equation is computed for the evaluation of C res during the BVP minimization, it has only
$$N_{c,u_R} = 2N_R + 4N_s + 1 $$
rows, since the conditions resulting from u T can be neglected.
For bang-singular maneuvers, the BVP solver is similar to the STO. The only differences are that the thrust input u T is determined based on its control law (30), and that the maneuver duration T is an optimization variable, too, and not kept constant during the minimization of P res +C res .
Because u R is not determined by its control law, and since a vanishing condition residual C res does not guarantee that the control law holds, it is necessary to verify that the control law (24) is satisfied by inspecting the switching function Φ R .
If the residuals P res and C res are sufficiently small after the minimization, and if the control law for the rotational input u R is fulfilled, then the maneuver is a solution to BVP (32), and therefore satisfies the minimum principle with respect to time-optimality.
Bernstein, D. S. (2005). Matrix mathematics. Princeton: Princeton University Press. zbMATHGoogle Scholar
Bertsekas, D. P. (2005). Dynamic programming and optimal control Vol. I (3rd edn.). Athena Scientific. zbMATHGoogle Scholar
Bouabdallah, S., Noth, A., & Siegwart, R. (2004). PID vs LQ control techniques applied to an indoor micro quadrotor. In Proceedings of the international conference on intelligent robots and systems. Google Scholar
Bouktir, Y., Haddad, M., & Chettibi, T. (2008). Trajectory planning for a quadrotor helicopter. In Proceedings of the Mediterranean conference on control and automation. Google Scholar
Cowling, I. D., Yakimenko, O. A., & Whidborne, J. F. (2007). A prototype of an autonomous controller for a quadrotor UAV. In Proceedings of the European control conference. Google Scholar
Dahlquist, G., & Björck, A. (2003). Numerical methods. New York: Dover. zbMATHGoogle Scholar
Geering, H. P. (2007). Optimal control with engineering applications. Berlin: Springer. zbMATHGoogle Scholar
Gurdan, D., Stumpf, J., Achtelik, M., Doth, K. M., Hirzinger, G., & Rus, D. (2007). Energy-efficient autonomous four-rotor flying robot controlled at 1 kHz. In Proceedings of the IEEE international conference on robotics and automation. Google Scholar
Hehn, M., & D'Andrea, R. (2011). Quadrocopter trajectory generation and control. In Proceedings of the IFAC world congress. Google Scholar
Hoffmann, G. M., Huang, H., Waslander, S. L., & Tomlin, C. J. (2007). Quadrotor helicopter flight dynamics and control: theory and experiment. In Proceedings of the AIAA guidance, navigation and control conference. Google Scholar
Hoffmann, G. M., Waslander, S. L., & Tomlin, C. J. (2008). Quadrotor helicopter trajectory tracking control. In Proceedings of the IEEE conference on decision and control. Google Scholar
How, J. P., Bethke, B., Frank, A., Dale, D., & Vian, J. (2008). Real-time indoor autonomous vehicle test environment. IEEE Control Systems Magazine, 28(2), 51–64. MathSciNetCrossRefGoogle Scholar
Huang, H., Hoffmann, G. M., Waslander, S. L., & Tomlin, C. J. (2009). Aerodynamics and control of autonomous quadrotor helicopters in aggressive maneuvering. In Proceedings of the IEEE international conference on robotics and automation. Google Scholar
Lai, L. C., Yang, C. C., & Wu, C. J. (2006). Time-optimal control of a hovering quad-rotor helicopter. Journal of Intelligent & Robotic Systems, 45(2), 115–135. CrossRefGoogle Scholar
Ledzewicz, U., Maure, H., & Schattler, H. (2009). Bang-bang and singular controls in a mathematical model for combined anti-angiogenic and chemotherapy treatments. In Proceedings of the conference on decision and control. Google Scholar
Lupashin, S., & D'Andrea, R. (2011). Adaptive open-loop aerobatic maneuvers for quadrocopters. In Proceedings of the IFAC world congress. Google Scholar
Lupashin, S., Schöllig, A., Sherback, M., & D'Andrea, R. (2010). A simple learning strategy for high-speed quadrocopter multi-flips. In Proceedings of the IEEE international conference on robotics and automation. Google Scholar
Mellinger, D., Michael, N., & Kumar, V. (2010). Trajectory generation and control for precise aggressive maneuvers with quadrotors. In Proceedings of the international symposium on experimental robotics. Google Scholar
Michael, N., Mellinger, D., Lindsey, Q., & Kumar, V. (2010). The GRASP multiple micro UAV testbed. IEEE Robotics & Automation Magazine, 17(3), 56–65. CrossRefGoogle Scholar
Pounds, P., Mahony, R., & Corke, P. (2006). Modelling and control of a quad-rotor robot. In Proceedings of the Australasian conference on robotics and automation. Google Scholar
Purwin, O., & D'Andrea, R. (2011). Performing and extending aggressive maneuvers using iterative learning control. Robotics and Autonomous Systems, 59(1), 1–11. CrossRefGoogle Scholar
Roxin, E. (1962). The existence of optimal controls. The Michigan Mathematical Journal, 9(2), 109–119. MathSciNetzbMATHCrossRefGoogle Scholar
Schoellig, A., Hehn, M., Lupashin, S., & D'Andrea, R. (2011). Feasibility of motion primitives for choreographed quadrocopter flight. In Proceedings of the American control conference. Google Scholar
Zandvliet, M., Bosgra, O., Jansen, J., Vandenhof, P., & Kraaijevanger, J. (2007). Bang-bang control and singular arcs in reservoir flooding. Journal of Petroleum Science & Engineering, 58(1–2), 186–200. CrossRefGoogle Scholar
© Springer Science+Business Media, LLC 2012
1.Institute for Dynamic Systems and ControlETH ZurichZurichSwitzerland
Hehn, M., Ritz, R. & D'Andrea, R. Auton Robot (2012) 33: 69. https://doi.org/10.1007/s10514-012-9282-3
Received 15 July 2011
Accepted 02 February 2012
First Online 02 March 2012
Publisher Name Springer US | CommonCrawl |
Tag: research
Three CS Students Recognized By The Computing Research Association
For this year's Outstanding Undergraduate Researcher Award, Payal Chandak, Sophia Kolak, and Yanda Chen were among students recognized by the Computing Research Association (CRA) for their work in an area of computing research.
Payal Chandak
Using Machine Learning to Identify Adverse Drug Effects Posing Increased Risk to Women
Payal Chandak Columbia University, Nicholas Tatonetti Columbia University
The researchers developed AwareDX – Analysing Women At Risk for Experiencing Drug toXicity – a machine learning algorithm that identifies and predicts differences in adverse drug effects between men and women by analyzing 50 years' worth of reports in an FDA database. The algorithm automatically corrects for biases in these data that stem from an overrepresentation of male subjects in clinical research trials.
Though men and women can have different responses to medications – the sleep aid Ambien, for example, metabolizes more slowly in women, causing next-day grogginess – doctors may not know about these differences because most clinical trial data itself is biased toward men. This trickles down to impact prescribing guidelines, drug marketing, and ultimately, patients' health. Unfortunately, pharmaceutical companies have a history of ignoring complex problems and clinical trials have singularly studied men, not even including women. As a result, there is a lot less information about how women respond to drugs compared to men. The research tries to bridge this information gap.
Sophia Kolak
It Takes a Village to Build a Robot: An Empirical Study of The ROS Ecosystem
Sophia Kolak Columbia University, Afsoon Afzal Carnegie Mellon University, Claire Le Goues Carnegie Mellon University, Michael Hilton Carnegie Mellon University, Christopher Steven Timperley Carnegie Mellon University
The Robot Operating System (ROS) is the most popular framework for robotics development. In this paper, the researchers conducted the first major empirical study of ROS, with the goal of understanding how developers collaborate across the many technical disciplines that coalesce in robotics.
Building a complete robot is a difficult task that involves bridging many technical disciplines. ROS aims to simplify development by providing reusable libraries, tools, and conventions for building a robot. Still, as building a robot requires domain expertise in software, mechanical, and electrical engineering, as well as artificial intelligence and robotics, ROS faces knowledge-based barriers to collaboration. The researchers wanted to understand how the necessity of domain-specific knowledge impacts the open-source collaboration model in ROS.
Virtually no one is an expert in every subdomain of robotics: experts who create computer vision packages likely need to rely on software designed by mechanical engineers to implement motor control. As a result, the researchers found that development in ROS is centered around a few unique subgroups each devoted to a different specialty in robotics (i.e. perception, motion). This is unlike other ecosystems, where competing implementations are the norm.
Detecting Performance Patterns with Deep Learning
Sophia Kolak Columbia University
Performance has a major impact on the overall quality of a software project. Performance bugs—bugs that substantially decrease run-time—have long been studied in software engineering, and yet they remain incredibly difficult for developers to handle. In this project, the researchers leveraged contemporary methods in machine learning to create graph embeddings of Python code that can be used to automatically predict performance.
Using un-optimized programming language concepts can lead to performance bugs and the researchers hypothesized that statistical language embeddings could help reveal these patterns. By transforming code samples into graphs that captured the control and data flow of a program, the researchers studied how various unsupervised embeddings of these graphs could be used to predict performance.
Implementing "sort" by hand as opposed to using the built-in Python sort function is an example of a choice that typically slows down a program's run-time. When the researchers embedded the AST and data flow of a code snippet in Euclidean space (using DeepWalk), patterns like this were captured in the embedding and allowed classifiers to learn which structures are correlated with various levels of performance.
"I was surprised by how often research changes directions," said Sophia Kolak. In both projects, they started out with one set of questions but answered completely different ones by the end. "It showed me that, in addition to persistence, research requires open-mindedness."
Yanda Chen
Cross-language Sentence Selection Via Data Augmentation and Rationale Training
Yanda Chen Columbia University, Chris Kedzie Columbia University, Suraj Nair University of Maryland, Petra Galuscakova University of Maryland, Rui Zhang Yale University, Douglas Oard University of Maryland, and Kathleen McKeown Columbia University
In this project, the researchers proposed a new approach to cross-language sentence selection, where they used models to predict sentence-level query relevance with English queries over sentences within document collections in low-resource languages such as Somali, Swahili, and Tagalog.
The system is used as part of cross-lingual information retrieval and query-focused summarization system. For example, if a user puts in a query word "business activity" and specifies Swahili as the language of source documents, then the system will automatically retrieve the Swahili documents that are related to "business activity" and produce short summaries that are then translated from Swahili to English.
A major challenge of the project was the lack of training data for low-resource languages. To tackle this problem, the researchers proposed to generate a relevance dataset of query-sentence pairs through data augmentation based on parallel corpora collected from the web. To mitigate the spurious correlations learned by the model, they proposed the idea of rationale training where they first trained a phrase-based statistical machine translation system and used the alignment information to provide additional supervision for the models.
The approach achieved state-of-the-art results on both text and speech across three languages – Somali, Swahili, and Tagalog.
Natural Language Processing Papers Accepted to EMNLP 2020
Six papers from the Speech & NLP group were accepted to the Empirical Methods in Natural Language Processing (EMNLP) conference.
Generating Similes Effortlessly Like a Pro: A Style Transfer Approach for Simile Generation
Tuhin Chakrabarty Columbia University, Smaranda Muresan Columbia University, and Nanyun Peng University of Southern California and University of California, Los Angeles
Literary tropes, from poetry to stories, are at the crux of human imagination and communication. Figurative language, such as a simile, goes beyond plain expressions to give readers new insights and inspirations. We tackle the problem of simile generation. Generating a simile requires proper understanding for effective mapping of properties between two concepts. To this end, we first propose a method to automatically construct a parallel corpus by transforming a large number of similes collected from Reddit to their literal counterpart using structured common sense knowledge. We then fine-tune a pre-trained sequence to sequence model, BART (Lewis et al., 2019), on the literal-simile pairs to generate novel similes given a literal sentence. Experiments show that our approach generates 88% novel similes that do not share properties with the training data. Human evaluation on an independent set of literal statements shows that our model generates similes better than two literary experts 37%1 of the times, and three baseline systems including a recent metaphor generation model 71%2 of the times when compared pairwise.3 We also show how replacing literal sentences with similes from our best model in machine-generated stories improves evocativeness and leads to better acceptance by human judges.
Content Planning for Neural Story Generation with Aristotelian Rescoring
Seraphina Goldfarb-Tarrant University of Southern California and University of Edinburgh, Tuhin Chakrabarty Columbia University, Ralph Weischedel University of Southern California and Nanyun Peng University of Southern California and University of California, Los Angeles
Long-form narrative text generated from large language models manages a fluent impersonation of human writing, but only at the local sentence level, and lacks structure or global cohesion. We posit that many of the problems of story generation can be addressed via high-quality content planning, and present a system that focuses on how to learn good plot structures to guide story generation. We utilize a plot-generation language model along with an ensemble of rescoring models that each implement an aspect of good story-writing as detailed in Aristotle's Poetics. We find that stories written with our more principled plot structure are both more relevant to a given prompt and higher quality than baselines that do not content plan, or that plan in an unprincipled way.
Severing the Edge Between Before and After: Neural Architectures for Temporal Ordering of Events
Miguel Ballesteros Amazon AI, Rishita Anubhai Amazon AI, Shuai Wang Amazon AI, Nima Pourdamghani Amazon AI, Yogarshi Vyas Amazon AI, Jie Ma Amazon AI, Parminder Bhatia Amazon AI, Kathleen McKeown Columbia University and Amazon AI and Yaser Al-Onaizan Amazon AI
In this paper, we propose a neural architecture and a set of training methods for ordering events by predicting temporal relations. Our proposed models receive a pair of events within a span of text as input and they identify temporal relations (Before, After, Equal, Vague) between them. Given that a key challenge with this task is the scarcity of annotated data, our models rely on either pre-trained representations (i.e. RoBERTa, BERT or ELMo), transfer, and multi-task learning (by leveraging complementary datasets), and self-training techniques. Experiments on the MATRES dataset of English documents establish a new state-of-the-art on this task.
Controllable Meaning Representation to Text Generation: Linearization and Data Augmentation Strategies
Chris Kedzie Columbia University and Kathleen McKeown Columbia University
We study the degree to which neural sequenceto-sequence models exhibit fine-grained controllability when performing natural language generation from a meaning representation. Using two task-oriented dialogue generation benchmarks, we systematically compare the effect of four input linearization strategies on controllability and faithfulness. Additionally, we evaluate how a phrase-based data augmentation method can improve performance. We find that properly aligning input sequences during training leads to highly controllable generation, both when training from scratch or when fine-tuning a larger pre-trained model. Data augmentation further improves control on difficult, randomly generated utterance plans.
Zero-Shot Stance Detection: A Dataset and Model using Generalized Topic Representations
Emily Allaway Columbia University and Kathleen McKeown Columbia University
Stance detection is an important component of understanding hidden influences in everyday life. Since there are thousands of potential topics to take a stance on, most with little to no training data, we focus on zero-shot stance detection: classifying stance from no training examples. In this paper, we present a new dataset for zero-shot stance detection that captures a wider range of topics and lexical variation than in previous datasets. Additionally, we propose a new model for stance detection that implicitly captures relationships between topics using generalized topic representations and show that this model improves performance on a number of challenging linguistic phenomena.
Unsupervised Cross-Lingual Part-of-Speech Tagging for Truly Low-Resource Scenarios
Ramy Eskander Columbia University, Smaranda Muresan Columbia University, and Michael Collins Columbia University
We describe a fully unsupervised cross-lingual transfer approach for part-of-speech (POS) tagging under a truly low resource scenario. We assume access to parallel translations between the target language and one or more source languages for which POS taggers are available. We use the Bible as parallel data in our experiments: small size, out-of-domain, and covering many diverse languages. Our approach innovates in three ways: 1) a robust approach of selecting training instances via cross-lingual annotation projection that exploits best practices of unsupervised type and token constraints, word-alignment confidence and density of projected POS, 2) a Bi-LSTM architecture that uses contextualized word embeddings, affix embeddings and hierarchical Brown clusters, and 3) an evaluation on 12 diverse languages in terms of language family and morphological typology. In spite of the use of limited and out-of-domain parallel data, our experiments demonstrate significant improvements in accuracy over previous work. In addition, we show that using multi-source information, either via projection or output combination, improves the performance for most target languages.
New Tool Detects Unsafe Security Practices in Android Apps
Open-source CRYLOGGER is the first tool that detects cryptographic misuses by running the Android app instead of analyzing its code.
Research by CS Undergrad Published in Cell
Payal Chandak (CC '21) developed a machine learning model, AwareDX, that helps detect adverse drug effects specific to women patients. AwareDX mitigates sex biases in a drug safety dataset maintained by the FDA.
Below, Chandak talks about how her internship under the guidance of Nicholas Tatonetti, associate professor of biomedical informatics and a member of the Data Science Institute, inspired her to develop a machine learning tool to improve healthcare for women.
How did the project come about?
I initiated this project during my internship at the Tatonetti Lab (T-lab) the summer after my first year. T-lab uses data science to study the side effects of drugs. I did some background research and learned that women face a two-fold greater risk of adverse events compared to men. While knowledge of sex differences in drug response is critical to drug prescription, there currently isn't a comprehensive understanding of these differences. Dr. Tatonetti and I felt that we could use machine learning to tackle this problem and that's how the project was born.
How many hours did you work on the project? How long did it last?
The project lasted about two years. We refined our machine learning (ML) model, AwareDX, over many iterations to make it less susceptible to biases in the data. I probably spent a ridiculous number of hours developing it but the journey has been well worth it.
Were you prepared to work on it or did you learn as the project progressed?
As a first-year student, I definitely didn't know much when I started. Learning on the go became the norm. I understood some things by taking relevant CS classes and through reading Medium blogs and GitHub repositories –– this ability to learn independently might be one of the most valuable skills I have gained. I am very fortunate that Dr. Tatonetti guided me through this process and invested his time in developing my knowledge.
What were the things you already knew and what were the things you had to learn while working on the project?
While I was familiar with biology and mathematics, computer science was totally new! In fact, T-Lab launched my journey to exploring computer science. This project exposed me to the great potential of artificial intelligence (AI) for revolutionizing healthcare, which in turn inspired me to explore the discipline academically. I went back and forth between taking classes relevant to my research and applying what I learned in class to my research. As I took increasingly technical classes like ML and probabilistic modelling, I was able to advance my abilities.
Looking back, what were the skills that you wished you had before the project?
Having some experience with implementing real-world machine learning projects on giant datasets with millions of observations would have been very valuable.
Was this your first project to collaborate on? How was it?
This was my first project and I worked under the guidance of Dr. Tatonetti. I thought it was a wonderful experience – not only has it been extremely rewarding to see my work come to fruition, but the journey itself has been so valuable. And Dr. Tatonetti has been the best mentor that I could have asked for!
Did working on this project make you change your research interests?
I actually started off as pre-med. I was fascinated by the idea that "intelligent machines" could be used to improve medicine, and so I joined T-Lab. Over time, I've realized that recent advances in machine learning could redefine how doctors interact with their patients. These technologies have an incredible potential to assist with diagnosis, identify medical errors, and even recommend treatments. My perspective on how I could contribute to healthcare shifted completely, and I decided that bioinformatics has more potential to change the practice of medicine than a single doctor will ever have. This is why I'm now hoping to pursue a PhD in Biomedical Informatics.
Do you think your skills were enhanced by working on the project?
Both my knowledge of ML and statistics and my ability to implement my ideas have grown immensely as a result of working on this project. Also, I failed about seven times over two years. We were designing the algorithm and it was an iterative process – the initial versions of the algorithm had many flaws and we started from scratch multiple times. The entire process required a lot of patience and persistence since it took over 2 years! So, I guess it has taught me immense patience and persistence.
Why did you decide to intern at the T-Lab?
I was curious to learn more about the intersection of artificial intelligence and healthcare. I'm endlessly fascinated by the idea of improving the standards of healthcare by using machine learning models to assist doctors.
Would you recommend volunteering or seeking projects out to other students?
Absolutely. I think everyone should explore research. We have incredible labs here at Columbia with the world's best minds leading them. Research opens the doors to work closely with them. It creates an environment for students to learn about a niche discipline and to apply the knowledge they gain in class.
Demystifying the Dissertation: PhD Research Discussions
This summer seminar series highlights 14 computer science PhD students. The handpicked group of students hosted individual Zoom sessions to discuss their experiences and research projects.
New Twitter Feature Challenges Users to Read Articles Before Sharing
Discovering How The Brain Works Through Computation
A team led by professor Christos Papadimitriou proposes a new computational system to expand the understanding of the brain at an intermediate level, between neurons and cognitive phenomena such as language.
CS Undergrads Recognized by the Computing Research Association
For this year's Outstanding Undergraduate Researcher Award, three computer science students received honorable mentions – Lalita Devadas, Dave Epstein, and Jessy Xinyi Han. The Computing Research Association (CRA) recognized the undergraduates for their work in an area of computing research.
Secure Montgomery Multiplication and Repeated Squares for Modular Exponentiation
Lalita Devadas Columbia University and Justin Bloom Oregon State University
The researchers worked on using some recent advances in garbling of arithmetic circuits for secure exponentiation mod N, a vital operation in many cryptosystems, including in the RSA public-key cryptosystem.
A garbled circuit is a cryptographic protocol which allows for secure two-party computation, in which two parties, Alice and Bob, each with a private input, want to compute some shared function of their inputs without either party learning the other's input.
Their novel approach implemented the Montgomery multiplication method, which uses clever arithmetic to avoid costly division by the modulus being multiplied in. The best method they found had each wire in a circuit representing one digit of a number in base p. They developed a system of base p arithmetic which is asymptotically more efficient in the given garbled circuit architecture than any existing protocols.
They measured performance for both approaches by counting the ciphertexts communicated for a single multiplication (a typical measure of efficiency for garbled circuit operations). They found that the base p Montgomery multiplication implementation vastly outperformed all other implementations for values of N with bit length greater than 500 (i.e., all N used for applications like RSA encryption).
"Unfortunately, our best implementations showed only incremental improvement over existing non-Montgomery-based implementations for values of N used in practice," said Lalita Devadas. "We are still looking into further optimizations using Montgomery multiplication."
Secure multiparty computation has many applications outside of computer science. For example, suppose five friends want to know their cumulative net worth without anyone learning anyone else's individual net worth. This is actually a secure computation problem, since the friends want to perform some computation of their inputs while keeping said inputs private from other parties.
Oops! Predicting Unintentional Action in Video
Dave Epstein Columbia University, Boyuan Chen Columbia University, and Carl Vondrick Columbia University
The paper trains models to detect when human action is unintentional using self-supervised computer vision, an important step towards machines that can intelligently reason about the intentions behind complex human actions.
Despite enormous scientific progress over the last five to ten years, machines still struggle with tasks learned quickly and autonomously by young children, such as understanding human behavior or learning to speak a language. Epstein's research tackles these types of problems by using self-supervised computer vision, a paradigm that predicts information naturally present in large amounts of input data such as images or videos. This stands in contrast with supervised learning, which relies on humans manually labelling data (e.g. "this is a picture of a dog").
"I was surprised to learn that failure is an expected part of research and that it can take a long time to realize you're failing," said Dave Epstein. "Taking a failed idea, identifying the promising parts, and trying again leads to successful research."
Seeding Network Influence in Biased Networks and the Benefits of Diversity
Ana-Andreea Stoica Columbia University, Jessy Xinyi Han Columbia University, and Augustin Chaintreau Columbia University
The paper explores the problem of social influence maximization and how information is diffused in a social network.
For example, it might be about what kind of news people read on social media, how many people know about job opportunities or who hears about the latest loan options from a bank. So given a social network, classical algorithms are focused on picking the best k early-adopters based on how central they are in a network, say, based on their number of connections, to maximize outreach.
However, since social inequalities are reflected in the uneven networks, classical algorithms which ignore demographics often amplify such inequalities in information access.
"We were wondering if we can do better than an algorithm that ignores demographics," said Jessy Xinyi Han. "'Better' here means more people in total and more people from the disadvantaged group can receive the information."
Through a network model with unequal communities, they developed new heuristics to take demographics into account, showing that including sensitive features in the input of most natural seed selection algorithms substantially improves diversity but also often leaves efficiency untouched or even provides a small gain.
Such analytical condition turned out to be a closed-form condition on the number of early adopters. They also validated this result on the real CS co-authorship network from DBLP.
21 papers from CS researchers accepted to NeurIPS 2019
The 33rd Conference on Neural Information Processing Systems (NeurIPS 2019) fosters the exchange of research on neural information processing systems in their biological, technological, mathematical, and theoretical aspects.
The annual meeting is one of the premier gatherings in artificial intelligence and machine learning that featured talks, demos from industry partners as well as tutorials. Professor Vishal Misra, with colleagues from the Massachusetts Institute of Technology (MIT), held a tutorial on synthetic control.
At this year's NeurIPS, 21 papers from the department were accepted to the conference. Computer science professors and students worked with researchers from the statistics department and the Data Science Institute.
Noise-tolerant Fair Classification
Alex Lamy Columbia University, Ziyuan Zhong Columbia University, Aditya Menon Google, Nakul Verma Columbia University
Fairness-aware learning involves designing algorithms that do not discriminate with respect to some sensitive feature (e.g., race or gender) and is usually done under the assumption that the sensitive feature available in a training sample is perfectly reliable.
This assumption may be violated in many real-world cases: for example, respondents to a survey may choose to conceal or obfuscate their group identity out of fear of potential discrimination. In the paper, the researchers show that fair classifiers can still be used given noisy sensitive features by simply changing the desired fairness-tolerance. Their procedure is empirically effective on two relevant real-world case-studies involving sensitive feature censoring.
Poisson-randomized Gamma Dynamical Systems
Aaron Schein UMass Amherst, Scott Linderman Columbia University, Mingyuan Zhou University of Texas at Austin, David Blei Columbia University, Hanna Wallach MSR NYC
This paper presents a new class of state space models for count data. It derives new properties of the Poisson-randomized gamma distribution for efficient posterior inference.
Using Embeddings to Correct for Unobserved Confounding in Networks
Victor Veitch Columbia University, Yixin Wang Columbia University, David Blei Columbia University
This paper address causal inference in the presence of unobserved confounder when proxy is available for the confounders in the form of a network connecting the units. For example, the link structure of friendships in a social network reveals information about the latent preferences of people in that network. The researchers show how modern network embedding methods can be exploited to harness the network estimation for efficient causal adjustment.
Variational Bayes Under Model Misspecification
Yixin Wang Columbia University, David Blei Columbia University
The paper characterizes the theoretical properties of a popular machine learning algorithm, variational Bayes (VB). The researchers studied the VB under model misspecification, which is the setting that is most aligned with the practice, and show that the VB posterior is asymptotically normal and centers at the value that minimizes the Kullback-Leibler (KL) divergence to the true data-generating distribution.
As a consequence, they found that the model misspecification error dominates the variational approximation error in VB posterior predictive distributions. In other words, VB pays a negligible price in producing posterior predictive distributions. It explains the widely observed phenomenon that VB achieves comparable predictive accuracy with MCMC even though VB uses an approximating family.
Poincaré Recurrence, Cycles and Spurious Equilibria in Gradient-Descent-Ascent for Non-Convex Non-Concave Zero-Sum Games
Emmanouil-Vasileios Vlatakis-Gkaragkounis Columbia University, Lampros Flokas Columbia University, Georgios Piliouras Singapore University of Technology and Design
The paper introduces a model that captures a min-max competition over complex error landscapes and shows that even a simplified model can provably replicate some of the most commonly reported failure modes of GANs (non-convergence, deadlock in suboptimal states, etc).
Moreover, the researchers were able to understand the hidden structure in these systems — the min-max competition can lead to system behavior that is similar to that of energy preserving systems in physics (e.g. connected pendulums, many-body problems, etc). This makes it easier to understand why these systems can fail and gives new tools in the design of algorithms for training GANs.
Near-Optimal Reinforcement Learning in Dynamic Treatment Regimes
Junzhe Zhang Columbia University, Elias Bareinboim Columbia University
Dynamic Treatment Regimes (DTRs) are particularly effective for managing chronic disorders and is arguably one of the key aspects towards more personalized decision-making. The researchers developed the first adaptive algorithm that achieves near-optimal regret in DTRs in online settings, while leveraging the abundant, yet imperfect confounded observations. Applications are given to personalized medicine and treatment recommendation in clinical decision support.
Paraphrase Generation with Latent Bag of Words
Yao Fu Columbia University, Yansong Feng Peking University, John Cunningham University of Columbia
The paper proposes a latent bag of words model for differentiable content planning and surface realization in text generation. This model generates paraphrases with clear steps, adding interpretability and controllability of existing neural text generation models.
Adapting Neural Networks for the Estimation of Treatment Effects
Claudia Shi Columbia University, David Blei Columbia University, Victor Veitch Columbia University
This paper addresses how to design neural networks to get very accurate estimates of causal effects from observational data. The researchers propose two methods based on insights from the statistical literature on the estimation of treatment effects.
The first is a new architecture, the Dragonnet, that exploits the sufficiency of the propensity score for estimation adjustment. The second is a regularization procedure, targeted regularization, that induces a bias towards models that have non-parametrically optimal asymptotic properties "out-of-the-box". Studies on benchmark datasets for causal inference show these adaptations outperform existing methods.
Efficiently Avoiding Saddle Points with Zero Order Methods: No Gradients Required
The researchers prove that properly tailored zero-order methods are as effective as their first-order counterparts. This analysis requires a combination of tools from optimization theory, probability theory and dynamical systems to show that even without perfect knowledge of the shape of the error landscape, effective optimization is possible.
Metric Learning for Adversarial Robustness
Chengzhi Mao Columbia University, Ziyuan Zhong Columbia University, Junfeng Yang Columbia University, Carl Vondrick Columbia University, Baishakhi Ray Columbia University
Deep networks are well-known to be fragile to adversarial attacks. The paper introduces a novel Triplet Loss Adversarial (TLA) regulation that is the first method that leverages metric learning to improve the robustness of deep networks. This method is inspired by the evidence that deep networks suffer from distorted feature space under adversarial attacks. The method increases the model robustness and efficiency for the detection of adversarial attacks significantly.
Efficient Symmetric Norm Regression via Linear Sketching
Zhao Song University of Washington, Ruosong Wang Carnegie Mellon University, Lin Yang Johns Hopkins University, Hongyang Zhang TTIC, Peilin Zhong Columbia University
The paper studies linear regression problems with general symmetric norm loss and gives efficient algorithms for solving such linear regression problems via sketching techniques.
Rethinking Generative Coverage: A Pointwise Guaranteed Approach
Peilin Zhong Columbia University, Yuchen Mo Columbia University, Chang Xiao Columbia University, Pengyu Chen Columbia University, Changxi Zheng Columbia University
The paper presents a novel and formal definition of mode coverage for generative models. It also gives a boosting algorithm to achieve this mode coverage guarantee.
How Many Variables Should Be Entered in a Principal Component Regression Equation?
Ji Xu Columbia University, Daniel Hsu Columbia University
The researchers studied the least-squares linear regression over $N$ uncorrelated Gaussian features that are selected in order of decreasing variance with the number of selected features $p$ can be either smaller or greater than the sample size $n$. And give an average-case analysis of the out-of-sample prediction error as $p,n,N \to \infty$ with $p/N \to \alpha$ and $n/N \to \beta$, for some constants $\alpha \in [0,1]$ and $\beta \in (0,1)$. In this average-case setting, the prediction error exhibits a "double descent" shape as a function of $p$. This also establishes conditions under which the minimum risk is achieved in the interpolating ($p>n$) regime.
Adaptive Influence Maximization with Myopic Feedback
Binghui Peng Columbia University, Wei Chen Microsoft Research
The paper investigates the adaptive influence maximization problem and provides upper and lower bounds for the adaptivity gaps under myopic feedback model. The results confirm a long standing open conjecture by Golovin and Krause (2011).
Towards a Zero-One Law for Column Subset Selection
Zhao Song University of Washington, David Woodruff Carnegie Mellon University, Peilin Zhong Columbia University
The researchers studied low-rank matrix approximation with general loss function and showed that if the loss function has several good properties, then there is an efficient way to compute a good low-rank approximation. Otherwise, it could be hard to compute a good low-rank approximation efficiently.
Average Case Column Subset Selection for Entrywise l1-Norm Loss
The researchers studied how to compute an l1-norm loss low-rank matrix approximation to a given matrix. And showed that if the given matrix can be decomposed into a low-rank matrix and a noise matrix with a mild distributional assumption, we can obtain a (1+eps) approximation to the optimal solution.
A New Distribution on the Simplex with Auto-Encoding Applications
Andrew Stirn Columbia University, Tony Jebara Spotify, David Knowles Columbia University
The researchers developed a surrogate distribution for the Dirichlet that offers explicit, tractable reparameterization, the ability to capture sparsity, and has barycentric symmetry properties (i.e. exchangeability) equivalent to the Dirichlet. Previous works have used the Kumaraswamy distribution in a stick-breaking process to create a non-exchangeable distribution on the simplex. The method was improved by restoring exchangeability and demonstrating that approximate exchangeability is efficiently achievable. Lastly, the method was showcased in a variety of VAE semi-supervised learning tasks.
Discrete Flows: Invertible Generative Models of Discrete Data
Dustin Tran Google Brain, Keyon Vafa Columbia University, Kumar Agrawal Google AI Resident, Laurent Dinh Google Brain, Ben Poole Google Brain
While normalizing flows have led to significant advances in modeling high-dimensional continuous distributions, their applicability to discrete distributions remains unknown. The researchers extend normalizing flows to discrete events, using a simple change-of-variables formula not requiring log-determinant-Jacobian computations. Empirically, they find that discrete flows obtain competitive performance with or outperform autoregressive baselines on various tasks, including addition, Potts models, and language models.
Characterization and Learning of Causal Graphs with Latent Variables from Soft Interventions
Murat Kocaoglu MIT-IBM Watson AI Lab IBM Research, Amin Jaber Purdue University, Karthikeyan Shanmugam MIT-IBM Watson AI Lab IBM Research NY, Elias Bareinboim Columbia University
This work is all about learning causal relationships – the classic aim of which is to characterize all possible sets that could produce the observed data. In the paper, the researchers provide a complete characterization of all possible causal graphs with observational and interventional data involving so-called 'soft interventions' on variables when the targets of soft interventions are known.
This work potentially could lead to discovery of other novel learning algorithms that are both sound and complete.
Identification of Conditional Causal Effects Under Markov Equivalence
Amin Jaber Purdue University, Jiji Zhang Lingnan University, Elias Bareinboim Columbia University
Causal identification is the problem of deciding whether a causal distribution is computable from a combination of qualitative knowledge about the underlying data-generating process, which is usually encoded in the form of a causal graph, and an observational distribution. Despite the obvious need for identifying causal effects throughout the data-driven sciences, in practice, finding the causal graph is a notoriously challenging task.
In this work, the researchers provide a relaxation of the requirement of having to specify the causal graph (based on substantive knowledge) and allow the input of the inference to be an equivalence class of causal graphs, which can be inferred from data. Specifically, they propose the first general algorithm to learn conditional causal effects entirely from data. This result is particularly useful for evaluating the impact of conditional plans and stochastic policies, which appear both in AI (in the context of reinforcement learning) and in the data-driven sciences.
Efficient Identification in Linear Structural Causal Models with Instrumental Cutsets
Daniel Kumor Purdue University, Bryant Chen Brex Inc., Elias Bareinboim Columbia University
Regression analysis is one of the most common tools used in modern data science. While there is a great understanding and powerful technology to perform regression analysis in high dimensional spaces, the output of such a method is purely associational and devoid of any causal interpretation.
The researchers studied the problem of identification of structural (causal) coefficients in linear systems (deciding whether regression coefficients are amenable to causal interpretation, etc). Building on a technique called instrumental variables, they developed a new method called Instrumental Cutset, which partitions the systems into tractable components such that identification can be decided more efficiently. The resulting algorithm was efficient and strictly more powerful than the current state-of-the-art methods.
When Does Failure Become a Good Thing?
Assistant Professor Allison Bishop takes a look at failure and how people can learn from "unsuccessful" research.
When it comes to research and getting papers into cryptography conferences, there usually has to be a "positive" result — either a new theorem must be proven, a new algorithm must be presented, or a successful attack on an existing algorithm must be obtained. If researchers try to accomplish a lofty goal and fall short, but manage to achieve a smaller goal, they typically present only the smaller goal as if it was the point on its own.
Allison Bishop
"I've found that not every research paper magically comes together and has a "great" result," said Allison Bishop, who has been teaching since 2013. "Our community doesn't really talk about the research process and I wanted to highlight research where even if it "failed" there is still something to learn from it."
Through the years Bishop noticed the lack of a venue to talk about all kinds of research. When she and other researchers studied obfuscation it resulted in a paper "In Pursuit of Clarity In Obfuscation". In the paper they talked about how they "failed" but managed to still learn from their mistakes. Their topic on failure was not considered a "standard" that could be published and they were not able to submit it to a conference. But Bishop, along with PhD students Luke Kowalczyk and Kevin Shi, really wanted to get their findings out and share it with other researchers.
And so, a conference dedicated to disseminating insightful failures of the cryptology research community was born. The Conference for Failed Approaches and Insightful Losses in Cryptology or CFAIL featured seven previously unpublished papers for a day of talks by computer scientists on insightful failures spanning the full range from cryptanalysis (trying to break systems) to cryptographic theory and design (constructing new systems and proving things about specific systems or about abstract systems, etc.).
"CFAIL is great for our field in that it promotes openness and accessibility for these kinds of ideas which are typically sort of intimate," said Luke Kowalczyk, who completed his PhD in November of last year. "When approaching new problems, it's always helpful to see the approaches of other researchers, even if they were not successful. However, it's rare to see failed approaches explained in a public and formal setting."
They were not alone in thinking about the lack of dialogue on research failures. At the time of the conference, a thread on Hacker News (a tech news aggregator) discussed the incentive structures of academia. Shared Kowalczyk, "I was proud to see CFAIL cited as an example of a scientific field with a formal venue to help promote this kind of openness."
"There is a deeply ingrained human tendency to fear that being open about failure will make other people think you are dumb," said Bishop. On the contrary, the researchers at CFAIL were some of the "most creative, bold, and deeply intelligent people." And the atmosphere it created was energizing for the participants — the audience got pretty involved and felt comfortable asking questions, and even started thinking about some of the open research problems in real time. Continued Bishop, "I think talking about failure is probably the best scientific communication strategy left that is severely underused."
Bishop will continue to promote openness in scientific research with another CFAIL at Crypto 2020. This time around it will be a workshop at the conference and a call for papers will be out soon. | CommonCrawl |
Written by Colin+ in arithmetic, puzzles.
An excellent puzzle I heard from @panlepan (I paraphrase, as I've lost the tweet):
When you move the final digit of 142857 to the front, you get 714285, which is five times as large.
What is the smallest positive integer that is doubled when the last digit moves to the front?
There are two approaches I know of: one is dull but effective, the other is a bit more interesting.
A dull construction
Suppose The Number ends with a 1.1 Then its double ends with a 2 - so the original number must end with 21.
We can carry on like this - the previous digit must be 4, and the one before that 8.
We now have a number that ends 8421; we've reached the "deal with a carry" point of the endeavour. Doubling this gives 16842, so the preceding digit is 6. Doubling 68421 gives 136842, which is where the carry has gone: we would expect doubling the 6 to give us a 2, but the carry from doubling the 8 makes it a 3.
Similarly, we have to deal with the carry from doubling the 6. Doubling the 3 would normally give 6, but because of the carry, gives us 7. Still with me? Our number now ends 7368421.
Proceeding in the same way, the next preceding numbers are 4, 9, 8, 7, 5, 1, 3, 6, 2, 5 and 0, making our number 052,631,578,947,368,421. (The next ones would be 1, 2, 4, taking us back to the start.)
This does double if you move the last digit to the front, but starting with a 0 is cheating, don't you think? However, shifting the 1 at the end to the front gives 105,263,157,894,736,842, which also doubles if you move the last digit to the front - and is the answer to the puzzle.
A more interesting way
Suppose our number can be written as $10a + b$, with $a$ and $b$ positive integers such that $1 \le b \le 9$ (and $a$ as big as we like).
The effect of moving $b$ to the front is to make the number $10^k b + a$, where $k = \lceil \log_{10}(a) \rceil$.
That gives us $10^k b + a = 20a + 2b$ (because the effect is also to double the original number).
Rearranging, $(10^k - 2)b = 19a$. For this to be true, $10^k-2$ must be a multiple of 19 (because $b$ certainly isn't).
But how do we find such a $k$? Enter Fermat.
We know about Fermat's Last Theorem, we've had it drilled into us forever. But the Little Theorem? Well, I always need to look it up. It states that, if $p$ is a prime and $a$ a positive integer, then $a^{p-1} \equiv 1 \pmod{p}$.
In particular, $10^{18} \equiv 1 \pmod{19}$.
We also know that $10^{19} \equiv 10 \pmod{19}$, and - because $10\times2 \equiv 1 \mod{19}$, that 10 and 2 are multiplicative inverses.
We know that $10^{36} \equiv 1 \pmod{19}$ (it's $\br{10^{18}}^2$), and that's also $\br{10^{17}}\times\br{10^{19}}$ - so $10^{17} \times 10 \equiv 1 \pmod{19}$.
That means $10^{17} \equiv 2 \pmod{19}$ - or alternatively, $10^{17}-2$ is a multiple of 19.
Now all we have to do is divide $10^{17}-2$ by 19 - the Mathematical Ninja would tell you straight away that that's 52,631,578,947,368,421 - so that multiplied by $b$ gives $a$. However, if $b=1$, we hit a problem related to the one before ($k = \lceil \log_{10}(a) \rceil = 16$, rather than 17), so we try $b=2$ instead and get the same number as above.
I'm not saying the second way is easier, just a bit more interesting!
A Digital Root Puzzle
The Mathematical Ninja lets the student investigate… cube roots
HOW much rice?
Ask Uncle Colin: Multiplying negatives
It turns out it doesn't, but we'll cross that bridge later. [↩]
2 comments on "Doubling"
Ernesto La Orden
My own way to remember the Fermat's Litle Theorem is to write it in this way: a^p = a mod (p).
There is an "a" and a "p" in both sides and they are written in the same order. | CommonCrawl |
DCDS-S Home
Quantum hydrodynamics with nonlinear interactions
February 2016, 9(1): xi-xvii. doi: 10.3934/dcdss.2016.9.1xi
The research of Alberto Valli
Ana Alonso Rodríguez 1, , Hugo Beirão da Veiga 2, and Alfio Quarteroni 3,
Dipartimento di Matematica, Universita degli Studi di Trento, Via Sommarive, 14, I-38050 POVO
Department of Mathematics, Pisa University, Via F.Buonarroti, 1, 56127-Pisa
EPFL, SB, SMA, MATHICSE, CMCS, Av. Piccard, Station 8, CH-1015 Lausanne, Switzerland
The scientific activity of Professor Alberto Valli has been mainly devoted to three different subjects: theoretical analysis of partial differential equations in fluid dynamics; domain decomposition methods; numerical approximation of problems arising in low-frequency electromagnetism.
For more information please click the "Full Text" above.
Citation: Ana Alonso Rodríguez, Hugo Beirão da Veiga, Alfio Quarteroni. The research of Alberto Valli. Discrete & Continuous Dynamical Systems - S, 2016, 9 (1) : xi-xvii. doi: 10.3934/dcdss.2016.9.1xi
L. Carbone and A. Valli, Filtrazione di un fluido in un mezzo non omogeneo tridimensionale,, Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (8), 61 (1976), 161. Google Scholar
A. Valli, L'equazione di Eulero dei fluidi bidimensionali in domini con frontiera variabile,, Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (8), 61 (1976), 1. Google Scholar
L. Carbone and A. Valli, Free boundary enclosure in a three-dimensional filtration problem,, Appl. Math. Optim., 4 (1977), 1. doi: 10.1007/BF01442128. Google Scholar
A. Valli, Soluzioni classiche dell'equazione di Eulero dei fluidi bidimensionali in domini con frontiera variabile,, Ricerche Mat., 26 (1977), 301. Google Scholar
L. Carbone and A. Valli, Asymptotic behaviour of the free boundary in a filtration problem,, Boll. Un. Mat. Ital. B (5), 15 (1978), 217. Google Scholar
L. Carbone and A. Valli, Filtration through a porous non-homogeneous medium with variable cross-section,, J. Analyse Math., 33 (1978), 191. doi: 10.1007/BF02790173. Google Scholar
H. Beirão da Veiga and A. Valli, On the motion of a non-homogeneous ideal incompressible fluid in an external force field,, Rend. Sem. Mat. Univ. Padova, 59 (1978), 117. Google Scholar
H. Beirão da Veiga and A. Valli, Existence of $C^\infty$ solutions of the Euler equations for non-homogeneous fluids,, Comm. Partial Differential Equations, 5 (1980), 95. doi: 10.1080/03605308008820134. Google Scholar
H. Beirão da Veiga and A. Valli, On the Euler equations for non-homogeneous fluids (I),, Rend. Sem. Mat. Univ. Padova, 63 (1980), 151. Google Scholar
H. Beirão da Veiga and A. Valli, On the Euler equations for non-homogeneous fluids (II),, J. Math. Anal. Appl., 73 (1980), 338. doi: 10.1016/0022-247X(80)90282-6. Google Scholar
A. Valli, Uniqueness theorems for compressible viscous fluids, especially when the Stokes relation holds,, Boll. Un. Mat. Ital. C (5), 18 (1981), 317. Google Scholar
H. Beirão da Veiga, R. Serapioni and A. Valli, On the motion of non-homogeneous fluids in the presence of diffusion,, J. Math. Anal. Appl., 85 (1982), 179. doi: 10.1016/0022-247X(82)90033-6. Google Scholar
A. Valli, A correction to the paper: "An existence theorem for compressible viscous fluids'',, Ann. Mat. Pura Appl. (4), 132 (1982), 399. doi: 10.1007/BF01760990. Google Scholar
A. Valli, An existence theorem for compressible viscous fluids,, Ann. Mat. Pura Appl. (4), 130 (1982), 197. doi: 10.1007/BF01761495. Google Scholar
P. Secchi and A. Valli, A free boundary problem for compressible viscous fluids,, J. Reine Angew. Math., 341 (1983), 1. doi: 10.1515/crll.1983.341.1. Google Scholar
A. Valli, Periodic and stationary solutions for compressible Navier-Stokes equations via a stability method,, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4), 10 (1983), 607. Google Scholar
A. Valli, Free boundary problems for compressible viscous fluids,, in Fluid Dynamics (Varenna, (1982), 175. doi: 10.1007/BFb0072331. Google Scholar
P. Marcati and A. Valli, Almost-periodic solutions to the Navier-Stokes equations for compressible fluids,, Boll. Un. Mat. Ital. B (6), 4 (1985), 969. Google Scholar
A. Valli, Global existence theorems for compressible viscous fluids,, in Nonlinear Variational Problems (Isola d'Elba, (1983), 120. Google Scholar
A. Valli, On the integral representation of the solution to the Stokes system,, Rend. Sem. Mat. Univ. Padova, 74 (1985), 85. Google Scholar
A. Valli, Navier-Stokes equations for compressible fluids: Global estimates and periodic solutions,, in Nonlinear Functional Analysis and its Applications, (1983), 467. Google Scholar
A. Valli, Qualitative properties of the solutions to the Navier-Stokes equations for compressible fluids,, in Equadiff 6 (Brno, (1985), 259. doi: 10.1007/BFb0076079. Google Scholar
A. Valli, Stationary solutions to the Navier-Stokes equations for compressible fluids,, in BAIL IV (Novosibirsk, (1986), 417. Google Scholar
A. Valli and W. Zajączkowski, Navier-Stokes equations for compressible fluids: Global existence and qualitative properties of the solutions in the general case,, Comm. Math. Phys., 103 (1986), 259. doi: 10.1007/BF01206939. Google Scholar
A. Valli, On the existence of stationary solutions to compressible Navier-Stokes equations,, Ann. Inst. H. Poincaré Anal. Non Linéaire, 4 (1987), 99. Google Scholar
I. Straškraba and A. Valli, Asymptotic behaviour of the density for one-dimensional Navier-Stokes equations,, Manuscripta Math., 62 (1988), 401. doi: 10.1007/BF01357718. Google Scholar
A. Valli and W. Zajączkowski, About the motion of non-homogeneous ideal incompressible fluids,, Nonlinear Anal., 12 (1988), 43. doi: 10.1016/0362-546X(88)90011-9. Google Scholar
A. Valli, An existence theorem for non-homogeneous inviscid incompressible fluids,, in Differential Equations (Xanthi, (1987), 691. Google Scholar
V. Lovicar, I. Straškraba and A. Valli, On bounded solutions of one-dimensional compressible Navier-Stokes equations,, Rend. Sem. Mat. Univ. Padova, 83 (1990), 81. Google Scholar
A. Quarteroni and A. Valli, Domain decomposition for a generalized Stokes problem,, in, (1988), 59. Google Scholar
A. Valli, On the one-dimensional Navier-Stokes equations for compressible fluids,, in The Navier-Stokes Equations (Oberwolfach, (1988), 173. doi: 10.1007/BFb0086068. Google Scholar
A. Quarteroni, G. Sacchi Landriani and A. Valli, Coupling of viscous and inviscid Stokes equations via a domain decomposition method for finite elements,, Numer. Math., 59 (1991), 831. doi: 10.1007/BF01385813. Google Scholar
A. Quarteroni and A. Valli, Theory and applications of Steklov-Poincaré for boundary value problems: the heterogeneous operator case,, in Fourth International Symposium on Domain Decomposition Methods for Partial Differential Equations (Moscow, (1990), 58. Google Scholar
A. Quarteroni and A. Valli, Theory and applications of Steklov-Poincaré operators for boundary value problems,, in Applied and Industrial Mathematics (Venice, (1989), 179. Google Scholar
C. Carlenzoli, A. Quarteroni and A. Valli, Spectral domain decomposition methods for compressible Navier-Stokes equations,, in Fifth International Symposium on Domain Decomposition Methods for Partial Differential Equations (Norfolk, (1991), 441. Google Scholar
A. Quarteroni, F. Pasquarelli and A. Valli, Heterogeneous domain decomposition: principles, algorithms, applications,, in Fifth International Symposium on Domain Decomposition Methods for Partial Differential Equations (Norfolk, (1991), 129. Google Scholar
A. Valli, Mathematical results for compressible flows,, in Mathematical Topics in Fluid Mechanics (Lisbon, (1991), 193. Google Scholar
A. Quarteroni and A. Valli, Mathematical modelling and numerical approximation of fluid flow,, in Methods and Techniques in Computational Chemistry: METECC-94. Volume C: Structure and Dynamics (ed. E. Clementi), (1993), 247. Google Scholar
C. Carlenzoli, A. Quarteroni and A. Valli, Numerical solution of the Navier-Stokes equations for viscous compressible flows,, in Applied Mathematics in Aerospace Science and Engineering (Erice, (1991), 81. Google Scholar
A. Alonso and A. Valli, A new approach to the coupling of viscous and inviscid Stokes equations,, East-West J. Numer. Math., 3 (1995), 29. Google Scholar
A. Alonso and A. Valli, Some remarks on the characterization of the space of tangential traces of $H(rot;\Omega)$ and the construction of an extension operator,, Manuscripta Math., 89 (1996), 159. doi: 10.1007/BF02567511. Google Scholar
A. Quarteroni and A. Valli, Domain decomposition methods for partial differential equations,, in 27th Computational Fluid Dynamics (ed. H. Deconinck), (1996), 1. Google Scholar
A. Alonso and A. Valli, Domain decomposition algorithms for low-frequency time-harmonic Maxwell equations,, in Numerical Modelling in Continuum Mechanics (Prague, (1997), 3. Google Scholar
A. Alonso and A. Valli, A domain decomposition approach for heterogeneous time-harmonic Maxwell equations,, Comput. Methods Appl. Mech. Engrg., 143 (1997), 97. doi: 10.1016/S0045-7825(96)01144-9. Google Scholar
A. Alonso, R. L. Trotta and A. Valli, Coercive domain decomposition algorithms for advection-diffusion equations and systems,, J. Comput. Appl. Math., 96 (1998), 51. doi: 10.1016/S0377-0427(98)00091-0. Google Scholar
A. Alonso and A. Valli, Finite element approximation of heterogeneous time-harmonic Maxwell equations via a domain decomposition approach,, in International Conference on Differential Equations (Lisboa, (1995), 227. Google Scholar
A. Alonso and A. Valli, Unique solvability for high-frequency heterogeneous time-harmonic Maxwell equations via Fredholm alternative theory,, Math. Methods Appl. Sci., 21 (1998), 463. doi: 10.1002/(SICI)1099-1476(199804)21:6<463::AID-MMA947>3.0.CO;2-U. Google Scholar
A. Alonso and A. Valli, An optimal domain decomposition preconditioner for low-frequency time-harmonic Maxwell equations,, Math. Comp., 68 (1999), 607. doi: 10.1090/S0025-5718-99-01013-3. Google Scholar
A. Quarteroni and A. Valli, Domain decomposition methods for compressible flows,, in Error Control and Adaptivity in Scientific Computing (Antalya, (1998), 221. Google Scholar
A. Alonso Rodríguez and A. Valli, Domain decomposition algorithms for time-harmonic Maxwell equations with damping,, M2AN Math. Model. Numer. Anal., 35 (2001), 825. doi: 10.1051/m2an:2001137. Google Scholar
A. Alonso Rodríguez and A. Valli, Domain decomposition methods for time-harmonic Maxwell equations: Numerical results},, in Recent Developments in Domain Decomposition Methods (Zürich, (2001), 157. doi: 10.1007/978-3-642-56118-4_10. Google Scholar
A. Alonso Rodríguez, P. Fernandes and A. Valli, The time-harmonic eddy-current problem in general domains: Solvability via scalar potentials,, in Computational Electromagnetics (Kiel, (2001), 143. doi: 10.1007/978-3-642-55745-3_10. Google Scholar
A. Alonso Rodríguez, P. Fernandes and A. Valli, Weak and strong formulations for the time-harmonic eddy-current problem in general multi-connected domains,, European J. Appl. Math., 14 (2003), 387. doi: 10.1017/S0956792503005151. Google Scholar
A. Alonso Rodríguez, R. Hiptmair and A. Valli, Mixed finite element approximation of eddy current problems,, IMA J. Numer. Anal., 24 (2004), 255. doi: 10.1093/imanum/24.2.255. Google Scholar
A. Alonso Rodríguez and A. Valli, Mixed finite element approximation of eddy current problems based on the electric field,, in ECCOMAS 2004: European Congress on Computational Methods in Applied Sciences and Engineering (Jyväskylä, (2004), 1. Google Scholar
A. Alonso Rodríguez, R. Hiptmair and A. Valli, A hybrid formulation of eddy current problems,, Numer. Methods Partial Differential Equations, 21 (2005), 742. doi: 10.1002/num.20060. Google Scholar
A. Quarteroni, M. Sala and A. Valli, An interface-strip domain decomposition preconditioner,, SIAM J. Sci. Comput., 28 (2006), 498. doi: 10.1137/04061057X. Google Scholar
O. Bíró and A. Valli, The Coulomb gauged vector potential formulation for the eddy-current problem in general geometry: well-posedness and numerical approximation,, Comput. Methods Appl. Mech. Engrg., 196 (2007), 1890. doi: 10.1016/j.cma.2006.10.008. Google Scholar
M. Discacciati, A. Quarteroni and A. Valli, Robin-Robin domain decomposition methods for the Stokes-Darcy coupling,, SIAM J. Numer. Anal., 45 (2007), 1246. doi: 10.1137/06065091X. Google Scholar
P. Fernandes and A. Valli, Lorenz-gauged vector potential formulations for the time-harmonic eddy-current problem with $L^\infty$-regularity of material properties,, Math. Methods Appl. Sci., 31 (2008), 71. doi: 10.1002/mma.900. Google Scholar
A. Alonso Rodríguez and A. Valli, Voltage and current excitation for time-harmonic eddy-current problems,, SIAM J. Appl. Math., 68 (2008), 1477. doi: 10.1137/070697677. Google Scholar
A. Alonso Rodríguez and A. Valli, A FEM-BEM approach for electro-magnetostatics and time-harmonic eddy-current problems,, Appl. Numer. Math., 59 (2009), 2036. doi: 10.1016/j.apnum.2008.12.002. Google Scholar
A. Alonso Rodríguez, A. Valli and R. Vázquez Hernández, A formulation of the eddy current problem in the presence of electric ports,, Numer. Math., 113 (2009), 643. doi: 10.1007/s00211-009-0241-7. Google Scholar
A. Alonso Rodríguez, J. Camaño and A. Valli, Inverse source problems for eddy current equations,, Inverse Problems, 28 (2012). doi: 10.1088/0266-5611/28/1/015006. Google Scholar
A. Valli, Solving an electrostatics-like problem with a current dipole source by means of the duality method,, Appl. Math. Lett., 25 (2012), 1410. doi: 10.1016/j.aml.2011.12.013. Google Scholar
A. Alonso Rodríguez, E. Bertolazzi, R. Ghiloni and A. Valli, Construction of a finite element basis of the first de Rham cohomology group and numerical solution of 3D magnetostatic problems,, SIAM J. Numer. Anal., 51 (2013), 2380. doi: 10.1137/120890648. Google Scholar
A. Alonso Rodríguez, J. Camaño, R. Rodríguez and A. Valli, A posteriori error estimates for the problem of electrostatics with a dipole source,, Comput. Math. Appl., 68 (2014), 464. doi: 10.1016/j.camwa.2014.06.017. Google Scholar
A. Alonso Rodríguez and A. Valli, Finite element potentials,, Appl. Numer. Math., 95 (2015), 2. doi: 10.1016/j.apnum.2014.05.014. Google Scholar
A. Alonso Rodríguez and A. Valli, Eddy Current Approximation of Maxwell Equations,, Springer Italia, (2010). doi: 10.1007/978-88-470-1506-7. Google Scholar
A. Quarteroni and A. Valli, Domain Decomposition Methods for Partial Differential Equations,, Oxford University Press, (1999). Google Scholar
A. Quarteroni and A. Valli, Numerical Approximation of Partial Differential Equations,, Springer, (1994). Google Scholar
Hugo Beirão da Veiga, Alessandro Morando, Paola Trebeschi. The research of Paolo Secchi. Discrete & Continuous Dynamical Systems - S, 2016, 9 (1) : iii-ix. doi: 10.3934/dcdss.2016.9.1iii
Daniel Genin. Research announcement: Boundedness of orbits for trapezoidal outer billiards. Electronic Research Announcements, 2008, 15: 71-78. doi: 10.3934/era.2008.15.71
Liu Hui, Lin Zhi, Waqas Ahmad. Network(graph) data research in the coordinate system. Mathematical Foundations of Computing, 2018, 1 (1) : 1-10. doi: 10.3934/mfc.2018001
Leonid A. Bunimovich. Dynamical systems and operations research: A basic model. Discrete & Continuous Dynamical Systems - B, 2001, 1 (2) : 209-218. doi: 10.3934/dcdsb.2001.1.209
Richard Evan Schwartz. Research announcement: unbounded orbits for outer billiards. Electronic Research Announcements, 2007, 14: 1-6. doi: 10.3934/era.2007.14.1
Daniel T. Wise. Research announcement: The structure of groups with a quasiconvex hierarchy. Electronic Research Announcements, 2009, 16: 44-55. doi: 10.3934/era.2009.16.44
Erika T. Camacho, Christopher M. Kribs-Zaleta, Stephen Wirkus. The mathematical and theoretical biology institute - a model of mentorship through research. Mathematical Biosciences & Engineering, 2013, 10 (5&6) : 1351-1363. doi: 10.3934/mbe.2013.10.1351
Yuxue Li, Maozhu Jin, Peiyu Ren, Zhixue Liao. Research on the optimal initial shunt strategy of Jiuzhaigou based on the optimization model. Discrete & Continuous Dynamical Systems - S, 2015, 8 (6) : 1239-1249. doi: 10.3934/dcdss.2015.8.1239
Zuo-Jun max Shen. Integrated supply chain design models: a survey and future research directions. Journal of Industrial & Management Optimization, 2007, 3 (1) : 1-27. doi: 10.3934/jimo.2007.3.1
Aloev Rakhmatillo, Khudoyberganov Mirzoali, Blokhin Alexander. Construction and research of adequate computational models for quasilinear hyperbolic systems. Numerical Algebra, Control & Optimization, 2018, 8 (3) : 277-289. doi: 10.3934/naco.2018017
Tinggui Chen, Yanhui Jiang. Research on operating mechanism for creative products supply chain based on game theory. Discrete & Continuous Dynamical Systems - S, 2015, 8 (6) : 1103-1112. doi: 10.3934/dcdss.2015.8.1103
Yi Zhang, Xiao-Li Ma. Research on image digital watermarking optimization algorithm under virtual reality technology. Discrete & Continuous Dynamical Systems - S, 2019, 12 (4&5) : 1427-1440. doi: 10.3934/dcdss.2019098
Xiaohong Zhu, Zili Yang, Tabharit Zoubir. Research on the matching algorithm for heterologous image after deformation in the same scene. Discrete & Continuous Dynamical Systems - S, 2019, 12 (4&5) : 1281-1296. doi: 10.3934/dcdss.2019088
Xin Li, Ziguan Cui, Linhui Sun, Guanming Lu, Debnath Narayan. Research on iterative repair algorithm of Hyperchaotic image based on support vector machine. Discrete & Continuous Dynamical Systems - S, 2019, 12 (4&5) : 1199-1218. doi: 10.3934/dcdss.2019083
Qiang Yin, Gongfa Li, Jianguo Zhu. Research on the method of step feature extraction for EOD robot based on 2D laser radar. Discrete & Continuous Dynamical Systems - S, 2015, 8 (6) : 1415-1421. doi: 10.3934/dcdss.2015.8.1415
Hongming Yang, C. Y. Chung, Xiaojiao Tong, Pingping Bing. Research on dynamic equilibrium of power market with complex network constraints based on nonlinear complementarity function. Journal of Industrial & Management Optimization, 2008, 4 (3) : 617-630. doi: 10.3934/jimo.2008.4.617
Yanan Wang, Tao Xie, Xiaowen Jie. A mathematical analysis for the forecast research on tourism carrying capacity to promote the effective and sustainable development of tourism. Discrete & Continuous Dynamical Systems - S, 2019, 12 (4&5) : 837-847. doi: 10.3934/dcdss.2019056
Chao Mi, Jun Wang, Weijian Mi, Youfang Huang, Zhiwei Zhang, Yongsheng Yang, Jun Jiang, Postolache Octavian. Research on regional clustering and two-stage SVM method for container truck recognition. Discrete & Continuous Dynamical Systems - S, 2019, 12 (4&5) : 1117-1133. doi: 10.3934/dcdss.2019077
PDF downloads (10)
Ana Alonso Rodríguez Hugo Beirão da Veiga Alfio Quarteroni | CommonCrawl |
Resources Aops Wiki 2000 AIME II Problems/Problem 13
2000 AIME II Problems/Problem 13
Revision as of 11:37, 27 November 2007 by Jam (talk | contribs) (→Solution)
The equation has exactly two real roots, one of which is , where , and are integers, and are relatively prime, and . Find .
We may factor the equation as:
$\begin{align*} 2000x^6+100x^5+10x^3+x-2&=0\\ 2(1000x^6-1) + x(100x^4+10x^2+1)&=0\\ 2[(10x^2)^3-1]+x[(10x^2)^2+(10x^2)+1]&=0\\ 2(10x^2-1)[(10x^2)^2+(10x^2)+1]+x[(10x^2)^2+(10x^2)+1]&=0\\ (20x^2+x-2)(100x^4+10x^2+1)&=0\\ \end{align*}$ (Error compiling LaTeX. ! Package amsmath Error: \begin{align*} allowed only in paragraph mode.)
Now for real . Thus the real roots must be the roots of the equation . By the quadratic formula the roots of this are:
Thus , and so the final answer is
2000 AIME II (Problems • Answer Key • Resources)
1 • 2 • 3 • 4 • 5 • 6 • 7 • 8 • 9 • 10 • 11 • 12 • 13 • 14 • 15
All AIME Problems and Solutions
Retrieved from "https://artofproblemsolving.com/wiki/index.php?title=2000_AIME_II_Problems/Problem_13&oldid=20323" | CommonCrawl |
Evolutionary transitions in the Asteraceae coincide with marked shifts in transposable element abundance
S. Evan Staton1,2 &
John M. Burke3
The transposable element (TE) content of the genomes of plant species varies from near zero in the genome of Utricularia gibba to more than 80 % in many species. It is not well understood whether this variation in genome composition results from common mechanisms or stochastic variation. The major obstacles to investigating mechanisms of TE evolution have been a lack of comparative genomic data sets and efficient computational methods for measuring differences in TE composition between species. In this study, we describe patterns of TE evolution in 14 species in the flowering plant family Asteraceae and 1 outgroup species in the Calyceraceae to investigate phylogenetic patterns of TE dynamics in this important group of plants.
Our findings indicate that TE families in the Asteraceae exhibit distinct patterns of non-neutral evolution, and that there has been a directional increase in copy number of Gypsy retrotransposons since the origin of the Asteraceae. Specifically, there is marked increase in Gypsy abundance at the origin of the Asteraceae and at the base of the tribe Heliantheae. This latter shift in genome composition has had a significant impact on the diversity and abundance distribution of TEs in a lineage-specific manner.
We show that the TE-driven expansion of plant genomes can be facilitated by just a few TE families, and is likely accompanied by the modification and/or replacement of the TE community. Importantly, large shifts in TE composition may be correlated with major of phylogenetic transitions.
A common feature of eukaryotic genomes is that they contain transposable elements (TEs), yet there is a remarkable amount of variation in TE content and composition between species [1, 2]. This property of eukaryotic genomes has parallels with ecological communities [3, 4], which vary in the abundance and diversity of species. While it has been shown that niche differences are an important factor in shaping species diversity [5, 6], it is generally believed that neutral processes can explain the assembly of communities over evolutionary time scales [7]. Given the ubiquitous nature of TEs and their contributions to eukaryotic genome evolution [8, 9], an important question is whether or not similar mechanisms operate to shape the genome landscape.
One possible explanation for the variation in TE content and composition between species is that random processes govern the evolution of TE communities and that chance alone determines the outcome for each TE lineage [10]. However, there is strong evidence that TEs integrate in non-random genomic locations, and TEs may show signs of positive selection [11–14]. It is important to understand the phylogenetic distribution of these patterns because TE activity may, in some cases, correlate with the diversification of their host lineages. For example, species radiations in vertebrates appear to be associated with genome repatterning and TE amplification events [15–17]. In one case, the origin of six species of Taterillus gerbils within the past 0.4 million years has been accompanied by numerous large chromosomal changes and the non-random accumulation of LINE-1 elements, with the most recently diverged species showing the greatest amount of LINE-1 accumulation [18]. Also, waves of TE amplification are associated with the radiation and subsequent speciation of four genera of salmonid fishes [19]. Similarly, massive retrotransposon amplification appears to coincide with speciation events in hybrid sunflower species [20], and non-random patterns of retrotransposon accumulation in the hybrid species' genomes indicate a potential mechanism for chromosomal divergence between species [21]. Taken together, these results suggest that studying the properties of TE evolution may indicate the timing and nature of important evolutionary transitions. Thus, we are keenly interested in understanding the nature of TEs in the plant family Asteraceae, which harbors unparalleled species diversity in the plant kingdom [22].
The Asteraceae is the largest family of vascular plants, composed of more than 23,600 species, or 8 % of all plant species [22]. The consensus view is that the Asteraceae originated in South America within the past 40–50 million years, which is somewhat surprising given the large number of species in this family [23]. From South America, the Asteraceae spread to Central America and Africa, and the family currently has a worldwide distribution, being found on every continent except Antarctica [24]. There are 12 recognized subfamilies in the Asteraceae, though four of those subfamilies, the Mutisioideae, Carduoideae, Cichorioideae, and Asteroideae, contain 99 % of the species [24]. Within the Asteraceae, there is exceptional diversity in the ecological distribution of species. For example, there are narrow endemics, and also species such as the common sunflower (Helianthus annuus) and dandelion (Taraxacum officinale) that are found widely distributed on multiple continents. Though most species in the Asteraceae are herbaceous, there are also many shrub and tree species [24]. However, this plant family is perhaps best known for the numerous agriculturally important species such as cultivated sunflower, safflower, lettuce, and globe artichoke [25]. Given the recent evolutionary origin of this enormous plant family, as well as its global distribution, the Asteraceae represent an excellent system to study plant adaptation and speciation. However, very little is known about genome evolution and TE diversity in the Asteraceae as a whole (but see [26–29]).
In this study, we seek to understand the major features of Asteraceae genomes, and to explore the mechanistic basis of TE evolution in plants by analyzing the evolutionary history of this plant family in a lineage-specific manner. It is known that there is a major bias in genome composition towards Gypsy DNA in the common sunflower genome [28, 29], but an outstanding question is whether other Asteraceae genomes exhibit similar patterns. That is, are the genomic properties of the common sunflower unique to that lineage? More importantly, what are the mechanisms contributing to TE community structure in plants? We address these questions by generating whole-genome shotgun (WGS) sequence data from 14 species representing 5 different subfamilies in the Asteraceae, along with an outgroup, and analyzing the relative abundance of TEs in each. We use phylogenetic and linear models to investigate whether there have been lineage-specific patterns of TE evolution in the Asteraceae. We also use ecological measures of community diversity, along with simulation-based approaches, to better understand the genomic impact of TE amplification events and how changes in TE abundance influence TE diversity in the genome as a whole. Taken together, these approaches represent a novel approach to study TE properties by employing descriptive statistical approaches along with phylogenetic and ecological models to investigate the mechanisms of genome community assembly.
Transposable element composition in the Asteraceae
Using WGS sequencing data, we determined that Asteraceae genomes are, on average, composed of 69.9 ± 5.3 % TEs (mean ± SD), with 53.2 ± 19.1 % of these genomes being LTR retrotransposons (LTR-RTs; Fig. 1). As expected for plant species, Class II TEs and non-LTR-RTs were lower in abundance relative to LTR-RTs, comprising just 0.60 ± 0.7 % and 0.82 ± 1.1 % of each genome, respectively. The outgroup species Nasanthus patagonicus exhibited comparable patterns of total repeat abundance (62.0 ± 0.1 %) and LTR-RT abundance (47.3 ± 3.3 %) as the Asteraceae, but contained a significantly higher abundance of Class II TEs (2.9 ± 0.1 %; P = 0.02) and a higher, though not signicantly so, abundance of non-LTR-RTs (2.0 ± 0.2 %; P = 0.20). Interestingly, in all but one species, LINE-like sequences are more prevalent (by a factor of at least 2:1) than other non-LTR-RT types. The one species that does not fit this pattern is Fulcaldea stuessyi, a member of the Barnadesioideae (the most basal subfamily of the Asteraceae), which harbors more SINE-like sequences than other non-LTR-RT types. In addition, the N. patagonicus genome contains a significantly higher abundance of endogenous retroviruses (ERVs; 1.2 ± 0.4 %; P = 0.04) than the average Asteraceae genome (0.06 ± 0.09 %), though it is likely that these sequences represent novel LTR-RTs since plant ERV sequences are more closely related to LTR-RTs than to the Retroviridae [30]. Contrasting the widespread nature of the aforementioned TE types, Penelope transposons are characterized by a sparse distribution throughout eukaryotes [31]. Consistent with this finding, Penelope transposons were found in all but two species in the Asteraceae (Fulcaldea stuessyi and Phoebanthus tenuifolius), and ERV-like sequences were absent from four species (F. stuessyi, Conoclinium coelestinum, P. tenuifolius, and H. argophyllus).
Genomic contribution of TE superfamilies in the Asteraceae. a Phylogenetic tree of 14 Asteraceae species and one outgroup species derived from 763 nuclear loci (see Methods). Filled circles indicate nodes with >75 % bootstrap support; to the right of the tree are the subfamilies to which each species belongs; the red stars on the branches indicate the timing of whole genome duplication events based on [56]. b Barplot of the genomic composition of TE superfamilies. The x-axis indicates abundance in base pairs for each species, shown along the y-axis. Filled circles indicate the genome size for each species. Superfamilies by order and class: Copia, Gypsy, ERV, and DIRS are LTR-RTs; Helitron is in subclass II of Class II; EnSpm, MuDR, hAT, Mariner/Tc1, and Polinton are TIR Class II TEs; Crypton are unique Class II elements in the order Crypton; L1, L2, and Jockey are LINE non-LTR-RTs; Penelope TEs belong in the unique Penelope order of retrotransposons; R1 are a group of non-LTR-RTs that insert into rDNA genes. The diagonal line through each entry in the barplot legend indicates the border of each TE type in the plot, which is a solid black line
In agreement with previous studies [28, 29], we found a large bias in TE content in the genome of H. annuus, which is composed primarily of Gypsy elements (60.0 ± 3.3 %). This bias appears to be shared by all members of the subfamily Asteroideae, including all species of the genus Helianthus analyzed here (62.4 ± 2.7 %), and the most basal member of the tribe Heliantheae, P. tenuifolius (67.5 ± 5.6 %; Fig. 1). We found a significant linear increase in the genomic proportion of Gypsy LTR-RTs from the base of the Asteraceae to the most derived subfamily, the Asteroideae using a generalized least squares test (r 2 = 0.996; P ≤ 2.2e-16; Fig. 2). Copia TEs exhibit an inverse pattern to that of Gypsy, with species at the base of Asteraceae containing proportionally more Copia DNA than those species in the Asteroideae (r 2 = 0.915; P = 2.831e-12; Fig. 2). These phylogenetic patterns remained significant when considering only one Helianthus species (H. annuus) in the analysis, indicating that they are not due to the overrepresentation of a single genus.
Linear change in genomic composition of LTR-RTs. Shown in phylogenetic order starting with the outgroup (bottom of the y-axis) to the most derived lineages of the Asteraceae in this study (top of the y-axis) are the change in genomic proportion (shown along the x-axis) of A) Gypsy and B) Copia TEs
To further investigate the significance of the patterns, we compared the proportion of TEs at the superfamily and family levels along the phylogenetic tree to what would be expected under a Brownian motion model, and we assessed significance of these results using phylogenetically independent contrasts (PICs). We detected significant (P < 0.05) phylogenetic signal, K, for ten superfamilies of TEs (Additional file 1). Notably, Copia TEs as a whole showed significantly (P < 0.05) more phylogenetic signal (i.e., K ≥ 1) than Gypsy (i.e., K ≤ 1). At the individual TE family level, we found more LTR-RT families exhibiting significant (P < 0.05) phylogenetic signal (7 Copia families, 10 Gypsy families, 1 ERV1 family) than either non-LTR-RTs (3 L1-like families, 3 CR1 families, 1, NeSL family) or Class II TEs (1 hAT family, 2 Mariner/Tc1 family, 1 Helitron family), though the average phylogenetic signal for Class II TE families was much higher (K = 3.26 ± 0) than either LTR-RTs (K = 1.78 ± 1.13) or non-LTR-RTs (K = 3.19 ± 0.16) [see Additional files 2 and 3].
Properties of individual TE family evolution
We investigated the mechanisms of genome community assembly over large time scales by analyzing the rank abundance/dominance (RAD) for all TE families in each species in this study. We considered five ecological models and present the model that best fits the data for each species, as determined by a Bayesian Information Criterion (see Methods). Though numerous species across the Asteraceae exhibit a log-normal-like distribution of TE family abundances (6/15 species), which can be described by even abundances and few rare TE families, it is evident that the predominant pattern is for species to exhibit highly uneven TE family abundances and are thus best fit by a niche-preemption model (7/15 species; Fig. 3). For example, we found that F. stuessyi, a member of the subfamily Barnadesioideae, has a very even distribution of TE families in terms of abundance (0.33 ± 0.52 %), while members of the subfamily Asteroideae have a very uneven distribution (see Fig. 1 for subfamily description), being composed of relatively few highly abundant families and many rare families (0.92 ± 2.4 %). Six species in the Heliantheae show TE family distributions best fit by a straight line (i.e., the niche preemption model; Fig. 3). The dominance of TE families in the Heliantheae is evident when considering that the top 10 TE families in this group account for nearly 2X the genomic proportion (51.5 ± 3.14 %) as the top 10 TE families in the rest of the Asteraceae (26.8 ± 9.10 %).
RAD plot of TE family abundance. Species are presented in phylogenetic order starting with the outgroup in the bottom right panel and the moving left to the most derived lineages of the Asteraceae in this study being displayed at the top left. The x-axis depicts the rank order of TEs by abundance, with rank 1 being given to the most abundant family, rank 2 given the second most abundant family, and so on. The y-axis depicts the log abundance of each TE family. Above the plots are the 5 ecological models used to test the fit of observed abundance. The colored line in each panel represents the best-fit model to each distribution as determined by BIC (see Methods)
While the RAD models described above demonstrate global patterns of abundance and dominance of TE families, these plots are unlabeled and do not allow investigation of specific changes in rank abundance. To infer which specific TE families have contributed the most to the rank abundance patterns observed in this study, and in the marked change in rank abundance and dominance within the Heliantheae in particular, we analyzed the rank of TE families sorted by abundance in the Asteraceae as a whole (Fig. 4) as compared to the abundance of TE families within the Heliantheae (Fig. 5). Interestingly, we found no phylogenetic patterns of rank abundance at the TE family level that are shared across the Heliantheae (Fig. 5). At the superfamily level, however, it is clear that at least the four highest-ranking TE families in the each species in the Heliantheae are members of the Gypsy superfamily.
Rank abundance of TE families in the Asteraceae. The y-axis depicts the most abundant TE families in the Asteraceae, listed in decreasing rank abundance from the top the y-axis. The x-axis shows the average percent genomic abundance of each TE family in the Asteraceae
Rank abundance of TE families in the Heliantheae. Along the y-axis is the rank abundance of the top 2 % of TE families in the Heliantheae, in decreasing order. Each panel depicts the rank abundance of TE families in phylogenetic order of the tribe from the base of the plot. The x-axis shows the percent genomic abundance of each TE family
Impact of TE family abundance on TE diversity
To investigate the potential impact of changes in TE abundance on patterns of genome community diversity, we estimated the correlation of changes in TE family abundance and TE richness with genome size. As expected for plant species [1, 32, 33], the abundance of retrotransposon DNA is strongly correlated with genome size (r 2 = 0.608; P = 6.06e-4; Additional file 4). These patterns were also significant when considering the non-independence of the species with a phylogenetic generalized least squares test (Copia, P = 0.0009; Gypsy, P = <0.0001; Additional file 5). However, while we did find a positive correlation with genome size and TE family size, we did not find such a correlation with genome size and TE richness (Fig. 6). To investigate the impact of genome dominance by some TE families on genome community structure, we also calculated Shannon's diversity and evenness of TE families for each species in this study (Additional file 6), which may provide more insight into the evolution of genome community patterns than looking at TE richness alone [34]. For example, in addition to the major shift in genome composition at the base of Heliantheae, there also appears to be a reduction in Shannon's diversity and evenness (Additional file 6). This result is further supported by a marked increase in the average TE family size in the Heliantheae, which is accompanied by a decrease in TE richness (Fig. 7).
Relationship between genome size and TE family size and richness. Along the x-axis is shown the genome size of each species in mega-base pairs. a The TE richness, or total number of TE families seen, is shown along the y-axis. b The mean TE family size as a percent of the genome is depicted on the y-axis
Phylogenetic relationship between TE richness and TE family size. a The TE family richness is shown along the x-axis for each species, which are depicted in phylogenetic order from the outgroup species at the base of the y-axis to the most derived lineages in the Asteraceae at the top of the y-axis. b The mean TE family size as a percentage of the genome is shown along the x-axis. In both panels, the red vertical line indicates the mean and the horizontal dashed black line shows the base of the Heliantheae (with all species in the Heliantheae being shown above the line)
It is well known that TEs vary in abundance and type between eukaryotic species. For example, TEs are completely absent from the genomes of some unicellular eukaryotes [36], though >50 % of the human genome is composed of TEs [35]. Similarly, the TE composition of the Saccharomyces cerevisiae genome is 4 % [37], and includes only LTR-RTs, whereas some plant genomes are >80 % TEs e.g., [29, 38–40], including hundreds of families of both Class I and Class II TEs [12]. There is also a disparity with respect to TE copy number and the occurrence of contemporary TE activity. For example, mammalian genomes contain numerous high copy number TE families though only a few recently active TE families have been discovered [41]. Conversely, there are many active TE families in the genomes of fruitfiles and pufferfish, but these families only contain a few copies [42–44]. Given the potential impact of TEs on genome structure and gene expression divergence [45–47] and the apparent variation in TE susceptibility amongst eukaryotes, an understanding of the timescales and phylogenetic patterns over which different classes of TEs are active is of great interest.
Transposable elements and genome content in the Asteraceae
Species in the Asteraceae vary tremendously in the TE composition of their genomes, especially with respect to LTR-RTs (Fig. 1). It is not surprising that the greatest magnitude of change in genome content involves LTR-RTs given that these sequences account for the largest portion of each genome. It is, however, interesting that we see such strong linear patterns of change in genome content at the LTR-RT superfamily level from the base of the Asteraceae to the crown lineages (Fig. 2). In the broad sense, these patterns fit the expectation of zero-sum change for a neutral community, which predicts that an increase in abundance in one member of a community will result in a proportional decrease in the abundance of another [7]. Though TE activity may lead to expansion of the nuclear genome [20, 38, 48], the inverse patterns of change in Gypsy and Copia abundance in the Asteraceae reflects that there are a finite number of insertion sites in the genome, and increases in copy number of one or more TE families may result in the replacement or inactivation of other TE copies.
We detected significant phylogenetic signal for both Class I and Class II TEs at both the superfamily and family level (Additional files 1, 2 and 3), indicating that the genomes of related species are more similar in TE composition and abundance than expected by chance. When considering the variation in genome content between the basal and most derived lineages of the Asteraceae (Fig. 1), this result is expected. However, it seems likely that very different processes contributed to these phylogenetic patterns. For example, the phylogenetic signal seen in Penelope retrotransposons and ERVs may be a product of the sparse distribution of those sequences. The genomic composition of ERVs in N. patagonicus appears high relative to the Asteraceae, though this finding not uncommon for plant species. For example, the genomic percentage of ERVs is 2.4 % in the Amborella genome [49], twice that of N. patagonicus. Alternatively, Gypsy elements are found in all species in the Asteraceae, but there is a clear increase in the abundance of several Gypsy families at the base of Heliantheae, producing a phylogenetic pattern shared by all members of this tribe. The inverse pattern can be seen for the Copia superfamily, which also shows significant phylogenetic signal (Additional file 1), where a linear decrease in these sequences from the Barnadesioideae to the Asteroideae contributes to phylogenetic patterns across the family. The foregoing results indicate that no single evolutionary process can explain these patterns of genome evolution in the Asteraceae. Specifically, species in the basal subfamilies of the Asteraceae are strikingly different in TE composition compared with the crown subfamilies, with those species in the basal subfamilies containing a greater abundance of non-LTR-RTs and DNA transposons. Could the greater TE diversity at the base of the Asteraceae and in the outgroup species be a result of the age of those lineages, or could there be other mechanisms influencing the abundance and diversity of the genome community? While it is not currently possible for us describe the evolutionary events that produced these patterns, ongoing genome sequencing projects in the Asteraceae should enable better descriptions in future studies.
Transposable element families and genome community assembly
Although ecosystems typically vary in terms of their species abundance and diversity, most communities exhibit a very similar distribution in the relative abundance of species [7]. Specifically, most communities exhibit a log-normal-like distribution of species abundance, with few species having high abundance, many rare species with very low abundance, and numerous species lying between these extremes [7]. Interestingly, one prior study has shown that eukaryotic genomes appear to exhibit similar log-normal distributions of genetic elements, suggesting that neutral processes may best explain community assembly over evolutionary timescales, regardless of the system [50]. However, there is some doubt as to whether the log normal model is the best null hypothesis for TE abundance distributions [51]. We tested a range of neutral and niche-based abundance distribution models and asked whether Asteraceae genomes also exhibit a log-normal distribution of TE family abundances, and whether there are shared patterns of TE abundance distributions across the family. While six species in this study exhibit a log normal distribution of TE abundance, a greater number, seven species, exhibit a niche-preemption distrbution, and two species have a TE abundance distribution best fit by the Zipf model, a hierarchical distribution (Fig. 3).
Interestingly, there is a very marked break at the base of Heliantheae with all species in this tribe exhibiting numerous highly abundant TE families and many rare families. This type of distribution has been used to describe communities with poor habitat [52] and/or few species [53], or the early succession of species [54] following disturbance [55]. Typically, these patterns of uneven abundance do not fit neutral expectations [56]. While there are caveats in interpreting ecological models in a genomic context, these results, taken together with other measures of TE abundance presented here, clearly reflect a unique evolutionary history for this tribe.
What biological change facilitated the major genomic transitions in the Heliantheae? It is tempting to speculate that the whole genome duplication event at the base of the Heliantheae [57] may have provided a genomic disturbance which contributed to the biased distribution of TE family abundance in this tribe, or directed integration of Gypsy elements may have contributed to these patterns [11, 29, 58]. Clearly, more work will be required to gain a deeper understanding of the underlying processes. It is clear from this analysis, however, that whole-genome turnover and expansion events have taken place in the lineage leading to the tribe Heliantheae, which arose ca. 26–31 MYA [57, 59].
Mechanisms of change in the genome-wide level of transposable elements
Major transitions in genome content are evident in each subfamily of the Asteraceae (Fig. 1). What is the best mechanistic explanation of the patterns of TE abundance in the Asteraceae? The coexistence of species may be facilitated by niche differentiation [60], and this type of model best explains the TE abundance data we see for species in the tribe Heliantheae. However, the TE abundance and diversity for this group of species indicates a very biased composition towards Gypsy TEs (Figs. 2 and 3). The linear increase in abundance of Gypsy TEs in the Asteraceae has had at least two major influences on the genome community of TEs. First, the correlation we see with TE family size and genome size (Fig. 6) indicates an unequal contribution of TE families to the genome community. Second, it is clear that the linear pattern of increase in Gypsy is driven by only a few TE families (Fig. 3), which has lead to an increase in average family size and a decrease in overall TE richness (Fig. 7). Interestingly, we do not see different superfamilies dominating Helianthus genomes as has been observed in some species of Gossypium [61]. This may indicate that a single event at the base of Heliantheae produced the observed genomic change, and that the patterns we see in each Helianthus species are shared by phylogenetic history rather than being independent events leading to similar patterns in each species. Alternatively, Gypsyelements may have evolved features allowing them to outcompete other TEs or avoid host-silencing mechanisms. Future investigations into these questions will surely lead to a greater understanding of the processes contributing to the high levels of diversity observed within the Asteraceae, and to the processes contributing to the evolution of TE diversity across the plant kingdom as a whole.
The majority view of TE evolution is that these sequences evolve primarily by neutral processes and are therefore likely to generate predictable distributions of relative abundance [50]. We showed, however, that plant species may exhibit uneven distributions of TE family abundance, as exemplified by all members of the Heliantheae investigated herein. Our results indicate that these patterns may be facilitated by: 1) an unequal contribution of certain TE families over time [29, 62]; and 2) nonrandom patterns of TE accumulation across the genome, as has been shown for one species in this study, H. annuus [21, 26, 27]. Aside from species in the tribe Heliantheae, other species in the Asteraceae do exhibit TE abundance distributions that are in line with neutral expectations. This finding may indicate that the factors contributing to the relative abundance TEs vary over time. Based on these results, we believe that the relative abundance of TEs in plant genomes can be best described as a continuum of resource-based patterns (i.e., niche-preemption) to random patterns (i.e., neutral processes). Our finding of major shifts in TE composition at the base of the Asteraceae and at the base of the tribe Heliantheae provides further evidence that TE compositions contain phylogenetic signal [63], and suggests a possible role for TEs in species formation in the Asteraceae.
Taxon sampling and WGS sequencing
To investigate patterns of genome evolution across the Asteraceae, we generated paired-end Illumina Hi-Seq sequence data (100 bp in length; 400 bp insert size) for individuals from 15 taxa. The estimated genome coverage for each species ranged from 0.42x – 3.52x (Additional file 7). These species were selected to represent every major subfamily of the Asteraceae, and included an outgroup species, N. patagonicus (Additional file 7). In addition, five of the taxa were selected from the genus Helianthus in order to investigate patterns of genome evolution amongst closely related species, and to increase our understanding of the evolutionary history of the most well-studied species in the family, H. annuus, for which there have been numerous prior studies about TE properties (see [26–29]). This study was done in parallel with a previously published phylogenomic study in which the taxon sampling and library preparation methods are described [64].
Repeat identification from WGS sequences
Prior to analysis, all WGS reads were treated with PRINSEQ version 0.19.4; [65] with the parameters '-min_len 40 –noniupac –min_qual_mean 15 –lc_method entropy –lc_threshold 60 –trim_ns_right 10 –ns_max_p 20' to remove low quality and short sequences. After quality filtering, we screened all chloroplast- and mitochondria-derived sequences from the WGS reads using the complete chloroplast genome sequence for cultivated sunflower line HA383 (Genbank accession number DQ383815) and a database of 10 complete plant mitochondria genome sequences obtained from Genbank, respectively. One million paired-end reads were sampled randomly from each set of screened reads and interleaved with Pairfq version 0.09; [66] prior to analysis. Repeat identification was carried out by performing an all-by-all BLAST following the methods of Staton et al. [29] with the 1 million randomly sampled paired-end reads, followed by clustering using the Louvain method [67]. Annotation of clusters was performed using blastn [68] against RepBase version 18.01; [69] and a set of full-length LTR-RTs described by Staton et al. [29]. Our repeat identification methods are implemented using the Transposome software version 0.03; [70] that we developed for this study. We performed three replicates of the above sampling and annotation procedure with Transposome for each species to minimize the statistical error in our estimates of genome composition.
To investigate the effect of varying levels of genome coverage, we simulated 10 different levels of genome coverage from the H. annuus WGS reads ranging from 0.056 to 5.1 %, with 3 replicates at each level (total of 30 read sets). The coefficient of variation in the inferred genomic composition of each TE family was measured at each level of genome coverage after analysis with Transposome to infer the appropriate level of sampling; this allowed us to maximize the level of TE diversity being captured.
Genome size estimation and prediction of changes in genome composition
In order to determine the genomic contribution of each TE family to the species in this study, and estimate the magnitude of change across the Asteraceae, we calculated genome size according to Hu et al. [71], with modifications. Using WU-BLAST with parameters "M = 1 N = -3 -Q -R 1" we mapped a reference transcriptome of 11 species from the Compositae Genome Project database (http://compgenomics.ucdavis.edu/) to 5 million WGS reads for each species, and calculated the coverage of each transcript using the formula:
$$ Co{v}_i=N/L $$
where N is the total length of reads mapped and L is the transcript length. The genome size (Cval) for each species was then determined by the formula:
$$ Cval=P\times \left(n\times l/ mean\left(Co{v}_i\right)\right) $$
where P is the ploidy level, n is the total number of reads, and l is the read length. In the above formula, only alignments over 60 base pairs in length and over 70 % identity were considered. These values were chosen from a permutation test using all possible alignments from lengths 50–100 and percent identity thresholds from 50 to 100, comparing observed to expected values. The mean coverage (Covi) was trimmed to remove the top 10 % of transcripts by coverage. The estimated genome size for each species, along with the published prediction (if available), is shown in Additional files 7 and 8.
The genomic contribution of each TE superfamily was calculated from the annotation summary file generated by Transposome (Fig. 1), and was used to determine the magnitude of change in TE composition in each species. Generalized least squares tests were performed with the R programming language [72] to estimate directional change in TE content in the Asteraceae (Fig. 2). We calculated Shannon's evenness and diversity statistics using the R package Vegan [73] to investigate the influence of genome size change on TE diversity statistics.
Phylogenetic patterns of TE family evolution
In addition to analyzing statistical patterns of repeat abundance, we also explored a mechanistic basis for TE evolution in the Asteraceae from an ecological perspective through the use of community ecology models. First, we compared RAD distributions using the R package Vegan [73] to investigate the processes leading to the inferred distribution of TE families in the Asteraceae [50]. We compared five ecological models to test whether the rank abundance distribution of TE families in each species was best fit by neutral or niche-based models (reviewed in [56]). As in previous studies (e.g., [4, 74]), we treat a TE family as analgous to a biological species, the genome as analagous to the ecological communtiy, and an individual TE is treated as an individual of a given species. The Null model fits a brokenstick model where individual TEs are randomly distributed among the observed TE families and no parameters are fitted [5]. The Lognormal and Zipf models are generalized linear models where the Lognormal model assumes the logarithm of abundances are distributed normally [73]. The Zipf model,
$$ {a}_r=Jp{r}^Y, $$
where a is the expected abunance of a TE family at rank r, J is the total number of individual TEs, p is the fitted proportion of the most abundant TE family, and Υ is a decay coefficient, is used to fit a particular power law distribution [73]. The Mandelbrot model is a generalization of the Zipf model and adds one nonlinear parameter to the Zipf with the remaining parameters and log-likelihood being fitted with a linear model [73]. In the Preemption model, also called the geometric series or niche preemption model, each level of TE family abundance is a sequential, constant proportion of the total number of individuals in the whole community. The preemption model rank abundance is fit by straight line in the RAD plot [75].
Second, a phylogenetic generalized least squares (pgls) test was conducted using caper [76] to test for the association of changes in TE composition with particular phylogenetic divisions within the Asteraceae and genome size. The phylogenetic tree used in the pgls analyses was generated from an alignment of 763 nuclear loci sequenced by a novel targeted enrichment method [64]. The model we tested was:
$$ Log\left( Genome\ size\right)\sim Log\left(S*\right) $$
where S* is the superfamily percent genomic abundance.
To further investigate the mechanisms and timing of shifts in genome content, we calculated phylogenetic signal for each TE family by using a descriptive statistic called K, which indicates significant phylogenetic signal for a trait, in this case TE abundance, on the tree compared to a Brownian motion model, along with phylogenetic independent contrasts PICs; [77, 78]. These calculations were performed using the R package picante [79], and all statistical analyses and plotting were performed in R [72].
All sequence data in this paper is deposited in the NCBI Short Read Archive under BioProject number PRJNA288472.
TE:
Transposable element
Long terminal repeat
LTR-RT:
Long terminal repeat retrotransposon
ERV:
Endogenous retrovirus
Long interpersed nuclear element
SINE:
Short interspersed nuclear element
WGS:
Whole-genome shotgun, SD, Standard deviation
Phylogenetic independent contrast
Rank abundance dominance
MYA:
Million years ago
Bennetzen JL. Transposable element contributions to plant gene and genome evolution. Plant Mol Biol. 2000;42:251–69.
Bennetzen JL, Ma J, Devos KM. Mechanisms of recent genome size variation in flowering plants. Plant Mol Biol. 2005;95:127–32.
Brookfield JYF. The ecology of the genome – mobile DNA elements and their hosts. Nat Rev. 2005;6:128–36.
Venner S, Feschotte C, Biemont C. Dynamics of transposable elements: towards a community ecology of the genome. Trends Gen. 2009;739:1–7.
Pielou EC. Ecological diversity. New York: Wiley-Interscience; 1975.
Tokeshi M. Niche apportionment or random assortment – species abundance patters explained. J Animal Ecol. 1990;59:1129–46.
Hubbell SP. The unified neutral theory of biodiversity and biogeography. Princeton: Princeton University Press; 2001.
Gregory TR. Evolution of the genome. San Diego: Elsevier, Inc; 2005.
Slotkin RK, Nuthikattu S, Jiang N. The impact of transposable elements on gene and genome evolution. In: Plant genome diversity. Vol. 1. Vienna: Springer-Verlag Wien; 2014.
Lynch M. The origins of genome architecture. Sunderland: Sinauer Associates, Inc; 2007.
Gao X, Hou Y, Ebina H, Levin HL, Voytas DF. Chromodomains direct integration of retrotransposons to heterochromatin. Genome Res. 2008;18:359–69.
Baucom RS, Estill JC, Chaparro C, Upshaw N, Jogi A, Deragon JM, et al. Exceptional diversity, non-random distribution, and rapid evolution or retroelements in the B73 maize genome. PLoS Genet. 2009;5:1–13.
Baucom RS, Estill JC, Leebens-Mack J, Bennetzen JL. Natural selection on gene function drives the evolution of LTR retrotransposon families in the rice genome. Genome Res. 2009;19:243–54.
Nellåker C, Keane TM, Yalcin B, Wong K, Agam A, Belgard TG, et al. The genomic landscape shaped by selection on transposable elements across 18 mouse strains. Genome Biol. 2012;13:R45.
Volff JN, Korting C, Meyer A, Schartl M. Evolution and discontinuous distribution of Rex3 retrotransposons in fish. Mol Biol Evol. 2001;18:427–31.
Ray DA, Xing J, Salem AH, Batzer MA. SINEs of a nearly perfect character. Syst Biol. 2006;55:928Y935.
Bohne A, Brunet F, Galiana-Arnoux D, Schultheis C, Volff J. Transposable elements as drivers of genomic and biological diversity in vertebrates. Chromosome Res. 2008;16:203–15.
Dobigny G, Ozouf-Costaz C, Waters P, Bonillo C, Volobouev V. LINE-1 amplification accompanies explosive genome repatterning in Taterillus (Rodentia, Gerbillinae). Chromosome Res. 2004;12:787–93.
de Boer JG, Yazawa R, Davidson WS, Koop BF. Bursts and horizontal evolution of DNA transposons in the speciation of pseudotetraploid salmonids. BMC Genomics. 2007;8:422.
Ungerer MC, Strakosh SC, Zhen Y. Genome expansion in three hybrid sunflower species is associated with retrotransposon proliferation. Curr Biol. 2006;16:R872–3.
Staton SE, Ungerer MC, Moore RC. The genomic organization of Ty3/gypsy-like retrotransposons in Helianthus (Asteraceae) homoploid hybrid species. Am J Bot. 2009;96:1646–55.
Stevens PF. Angiosperm Phylogeny Website. Version 8, June 2007. [http://www.mobot.org/MOBOT/research/APweb].
Kim KJ, Choi KS, Jansen RK. Two chloroplast DNA inversions originated simultaneously during the early evolution of the sunflower family (Asteraceae). Mol Biol Evol. 2005;22:1783–92.
Panero JL, Funk VA. The value of sampling anomalous taxa in phylogenetic studies: major clades of the Asteraceae revealed. Mol Phylogenet Evol. 2008;47:757–82.
Funk VA. Systematics, evolution, and biogeography of the compositae. Vienna: IAPT; 2009.
Santini S, Cavallini A, Natali L, Minelli S, Maggini F, Cionini PG. Ty1/Copia- and Ty3/Gypsy-like DNA sequences in Helianthus species. Chromosoma. 2002;111:192–200.
Natali L, Santini S, Giordani T, Minelli S, Maestrini P, Cionini PG, et al. Distribution of Ty3-Gypsy- and Ty1-Copia-like DNA sequences in the genus Helianthus and other Asteraceae. Genome. 2006;49:64–72.
Cavallini A, Natali L, Zuccolo A, Giordani T, Jurman I, Ferrillo V, et al. Analysis of transposons and repeat composition of the sunflower (Helianthus annuus L.) genome. Theor Appl Genet. 2010;120:491–508.
Staton SE, Hartman Bakken B, Blackman B, Chapman M, Kane N, Tang S, et al. The sunflower (Helianthus annuus L.) genome reflects a recent history of biased accumulation of transposable elements. Plant J. 2012;72:142–53.
Peterson-Burch BD, Wright DA, Laten HM, Voytas DF. Retroviruses in plants? Trends Gen. 2000;16:151–2.
Akhipova I. Distribution and phylogeny of Penelope-like in Eukaryotes. Syst Biol. 2006;55:875–8.
Bennetzen JL. Patterns in grass genome evolution. Curr Opin Plant Biol. 2007;10:176–81.
Devos KM. Grass genome organization and evolution. Curr Opin Plant Biol. 2010;13:139–45.
Ma M. Species richness vs evenness: independent relationship and different responses to edaphic factors. OIKOS. 2005;111:192–8.
Lander E, Linton LM, Birren B, Nusbaum C, Zody MC, Baldwin J, et al. Initial sequencing and analysis of the human genome. Nature. 2001;15:860–921.
DeBarry JD, Kissinger JC. Jumbled genomes: missing apicomplexan synteny. Mol Biol Evol. 2011;28:2855–71.
Kim JM, Vanguri S, Boeke JD, Gabriel A, Voytas DF. Transposable elements and genome organization: a comprehensive survey of retrotransposons revealed by the complete Saccharomyces cerevisiae genome sequence. Genome Res. 1998;8:464–78.
SanMiguel P, Tikhonov A, Jin Y, Motchoulskaia N, Zakharov D, Melake-Berhan A, et al. Nested retrotransposons in the intergenic regions of the maize genome. Science. 1996;274:765–8.
Schnable P, Ware D, Fulton RS, Stein JC, Wei F, Pastemak S, et al. The B73 maize genome: complexity, diversity, and dynamics. Science. 2009;326:1112–5.
Qin C, Yu C, Shen Y, Fang X, Chen L, Min J, et al. Whole-genome sequencing of cultivated and wild peppers provides insights into Capsicum domestication and specialization. Proc Natl Acad Sci U S A. 2014;111:5135–40.
Furano AV, Duvernell DD, Boissinot S. L1 (LINE-1) diversity differs dramatically between mammals and fish. Trends Gen. 2004;20:9–14.
Neafsey DE, Blumenstiel JP, Hartl DL. Different regulatory mechanisms underlie similar transposable element profiles in pufferfish and fruitfiles. Mol Biol Evol. 2014;21:2310–8.
Eickbush TH, Furano AV. Fruit flies and humans respond differently to retrotransposons. Curr Opin Gen Dev. 2002;12:669–74.
Hua-Van A, Le Rouzic A, Maisonhaute C, Capy P. Abundance, distribution and dynamics of retrotransposable elements: similarities and differences. Cytogen Genome Res. 2005;110:426–40.
Xie D, Chen C, Ptaszek LM, Xiao S, Cao X, Fang F, et al. Rewirable gene regulatory networks in the preimplantation embryonic development of three mammalian species. Genome Res. 2010;20:804–15.
Warenfors M, Pereira V, Eyre-Walker A. Transposable elements: insertion pattern and impact on gene expression evolution in Hominids. Mol Biol Evol. 2010;27:1955–62.
Hollister JD, Smith LM, Guo Y, Ott F, Weigel D, Gaut BS. Transposable elements and small RNAs contribute to gene expression divergence between Arabidopsis thaliana and Arabidopsis lyrata. Proc Natl Acad Sci U S A. 2011;108:2322–7.
Piegu B, Guyot R, Picault N, Roulin A, Saniyal A, Kim H, et al. Doubling genome size without polyploidization: dynamics of retrotransposon-mediated genome expansions in Oryza australensis, a wild relative of rice. Genome Res. 2006;16:1262–9.
Amborella genome project. The Amborella genome and the evolution of flowering plants. Science. 2013;342:1241089.
Serra F, Becher V, Dopazo H. Neutral theory predicts the relative abundance and diversity of genetic elements in a broad array of eukaryotic genomes. PLoS ONE. 2013;8:6.
Linquist S, Cotenie K, Elliott TA, Saylor B, Kremer SC, Gregory TR. Applying ecological models to communities of genetic elements: the case of Neutral Theory. Mol Ecol. 2015;24:3232–42.
Keeley JE, Fotheringham CJ. Species–area relationships in Mediterranean climate plant communities. J Biogeogr. 2003;30:1629–57.
Whittaker RH. Dominance and diversity in land plant communties. Science. 1965;147:250–60.
Whittaker RH. Evolution and measurement of species diversity. Taxon. 1972;21:213–51.
Nummelin M. Log-normal distribution of species abundances is not a universal indicator of rain forest disturbance. J Appl Ecol. 1998;35:454–7.
McGill BJ, Etienne RS, Gray JS, Alonso D, Anderson MJ, Benecha HK, et al. Species abundance distributions: moving beyond single prediction theories to integration within an ecological framework. Ecol Lett. 2007;10:995–1015.
Barker MS, Kane NC, Matvienko M, Kozik A, Michelmore RW, Knapp SJ, et al. Multiple paleopolypoidizations during the evolution of the Compositae reveal parallel patterns of duplicate gene retention after millions of years. Mol Biol Evol. 2008;25:2445–55.
Peterson-Burch BD, Nettleton D, Voytas DF. Genomic neighborhoods for Arbidopsis retrotransposons: a role for targeted integration in the distribution of the Metaviridae. Genome Biol. 2004;5:R78.
Chapman MA, Leebens-Mack JH, Burke JM. Positive selection and expression divergence following gene duplication in the sunflower CYCLOIDEA gene family. Mol Biol Evol. 2008;25:1260–73.
Hutchinson GE. Homage to Santa Rosalia, or why are there so many kinds of animals? Am Nat. 1959;93:145–59.
Hawkins JS, Kim H, Nason JD, Wing RA, Wendel JF. Differential lineage-specific amplification of transposable elements is responsible for genome size variation in Gossypium. Genome Res. 2006;16:1252–61.
Buti M, Giordani T, Cattonaro F, Cossu RM, Pistelli L, Vukich M, et al. Temporal dynamics in the evolution of the sunflower genome as revealed by sequencing and annotation of three large genomic regions. Theor Appl Genet. 2011;5:779–91.
Dodsworth S, Chase MW, Kelly LJ, Leitch IJ, Macas J, Novák P, et al. Genomic repeat abundances contain phylogenetic signal. Syst Biol. 2015;64:112–26.
Mandel J, Dikow RB, Funk VA, Masalia R, Staton SE, Kozik A, et al. A target enrichment method for gathering phylogenetic information from hundreds of loci: an example from the Compositae. App Plant Sci. 2014;2:130085.
Schmieder R, Edwards R. Quality control and preprocessing of metagenomic datasets. Bioinformatics. 2011;27:863–4.
Staton SE. Pairfq: sync paired-end FASTA/Q files and keep singleton reads. [https://github.com/sestaton/Pairfq].
Blondel VD, Guillaume J, Lambiotte R, Lefebvre E. Fast unfolding of communities in large networks. J Stat Mech. 2008;2008:P10008.
Camacho C, Coulouris G, Avagyan V, Ma N, Papadopoulos J, Bealer K, et al. BLAST+: architecture and applications. BMC Bioinformatics. 2009;10:421.
Jurka J, Kapitonov VV, Pavlicek A, Klonowski P, Kohany O, Walichiewicz J. Repbase update, a database of eukaryotic repetitive elements. Cytogenet Genome Res. 2005;110:462–7.
Staton SE, Burke JM. Transposome: annotation of transposable element families from unassembled sequence reads. Bioinformatics. 2015;31:1827–9.
Hu H, Bandyopadhyay PK, Olivera BM, Yandell M. Characterization of the Conus bullatus genome and its venom-duct transcriptome. BMC Genomics. 2011;12:60.
R Core Team. R: A language and environment for statistical computing. [http://www.R-project.org].
Oksanen J, Blanchet FG, Kindt R, Legendre P, Minchin PR, O'Hara RB, et al. vegan: Community Ecology Package. R package version 2.0-7. [http://CRAN.R-project.org/package=vegan].
Le Rouzic A, Dupas S, Capy P. Genome ecosystem and transposable element species. Gene. 2007;390:214–20.
Motomura I. A statistical treatment of associations. Jpn J Zool. 1932;44:379–83.
Orme D, Freckleton R, Thomas G, Petzoldt T, Fritz S, Isaac L, et al. caper: Comparative Analyses of Phylogenetics and Evolution in R. R package version 0.5. [http://CRAN.R-project.org/package=caper].
Blomberg SP, Garland Jr T, Ives AR. Testing for phylogenetic signal in comparative data: behavioral traits are more labile. Evolution. 2003;57:717–45.
Felsenstein J. Phylogenies and the comparative method. Am Nat. 1985;125:1–15.
Kembel SW, Cowan PD, Helmus MR, Cornwell WK, Morlon H, Ackerly DD, et al. Picante: R tools for integrating phylogenies and ecology. Bioinformatics. 2010;26:1463–4.
We thank Jennifer Mandel (University of Memphis) for assistance with DNA isolation and sequencing; Vicki Funk (Smithsonian Institute) for sharing plant specimens; and the Georgia Advanced Computing Research Center for cooperation and assistance with our computational analyses. Financial support was provided by the NSF Plant Genome Research Program (DBI-0820451).
Department of Genetics, University of Georgia, Athens, GA, 30602, USA
S. Evan Staton
Current address: Beaty Biodiversity Research Centre and Department of Botany, 3529–6270 University Blvd, University of British Columbia, Vancouver, BC, V6T 1Z4, Canada
Department of Plant Biology, University of Georgia, Athens, GA, 30602, USA
John M. Burke
Correspondence to S. Evan Staton.
SES and JMB designed the study. SES performed the experiments and wrote the manuscript with feedback from JMB. All authors approved the final manuscript prior to submission.
Displays the phylogenetic signal for TE superfamilies in the Asteraceae. (PDF 11 kb)
Displays the phylogenetic signal for TE superfamilies. (PDF 61 kb)
Shows the TE families exhibiting significant phylogenetic signal. (PDF 63 kb)
Depicts the relationship between retrotransposon DNA and genome size. (PDF 63 kb)
Shows results from GLS and PGLS tests for the evolution of Gypsy and Copia composition. (PDF 60 kb)
Shows the genome diversity statistics for TE families. (PDF 807 kb)
Shows the raw data statistics and genome size estimates. (PDF 66 kb)
Shows published genome size estimates and genome size observations determined by the method described in this study. (PDF 54 kb)
Staton, S.E., Burke, J.M. Evolutionary transitions in the Asteraceae coincide with marked shifts in transposable element abundance. BMC Genomics 16, 623 (2015). https://doi.org/10.1186/s12864-015-1830-8
Phylogenetic Pattern
Rank Abundance
Gypsy Element
Genome Community | CommonCrawl |
Interference between two photons, tensor product of individual wave functions?
I have learned that the wave function cannot be visualized as a real physical wave like for example the EM field, because for multi-particle systems, it is not a wave in $\mathbb{R}^3$ but in $\mathbb{R}^{3N}$. See this question if I haven't expressed myself clearly.
If I understand things correctly, this is a consequence of the QM fact that the combination of two individual quantum systems with Hilbert spaces $\mathcal{H}_1$ and $\mathcal{H}_2$ is described by the tensor product $\mathcal{H}_1\otimes \mathcal{H}_2$. The wave function of two free particles is not a wave in $\mathbb{R}^3$, but in $\mathbb{R}^3\otimes \mathbb{R}^3 = \mathbb{R}^6$.
I'm wondering how that fits into the following modified double-slit experiment:
Imagine two light sources emitting photons that are as coherent and equal as possible (same frequency, same polarization, etc). Each light source is placed before one of the slits, and there is a wall between the light sources, so if hole 1 is closed, only photons from source 2 can pass to the detector.
Let's define $\mathcal{H}_1$ as the Hilbert space of the quantum system where source 2 is off and only source 1 emits photons:
Similarly, $\mathcal{H}_2$ describes photons from source 2 and no photons from source 1:
In both systems, there is no interference pattern on the screen. In $\mathcal{H}_1$, we detect most of photons around the position A on the detector, and in $\mathcal{H}_2$ most of the photons land at position B.
Now, for the system $\mathcal{H}_c$ of the combined system, where both sources emit photons simultaneously, I assume that $\mathcal{H}_c = \mathcal{H}_1 \otimes \mathcal{H}_2$.
Is that correct so far?
Now, as far as I know, in $\mathcal{H}_c$ there should be an interference pattern of photons at the position C of the detector. The photons from source 1 interfere with those from source 2.
How can this interference be explained with respect to the tensor product? The wave functions of the photons from source 1 do not "live in" the same $\mathbb{R}^3$ as those from source 2, so how can they interfere?
EDIT: Continuation moved to other question.
quantum-mechanics double-slit-experiment tensor-calculus
$\begingroup$ $\mathbb{R}^3\otimes\mathbb{R}^3\neq \mathbb{R}^6$, in general (although they can be made isomorphic). However, when considering tensor product spaces, the notation $\psi$ is to be intended as $\psi_1\otimes 1_2$, therefore as element of the tensor product space obtained by tensoring with the identity on the other side; if you carry this notation on you will find that everything is consistent eventually. $\endgroup$ – gented Oct 23 '15 at 9:13
$\begingroup$ @GennaroTedesco in the wave picture, the Identity of the Hilbert space is the vector space zero $\psi_0 \equiv 0$. So for a photon from $\mathcal{H}_1$ and one from $\mathcal{H}_2$ we have $\psi_1 \otimes \psi_2$ as the combined state. I cannot see where I need $\mathbf{1}_i$ here. $\endgroup$ – Bass Oct 23 '15 at 9:32
$\begingroup$ If you are considering the system of the two particles then yes, $\psi_{12} = \psi_1\otimes\psi_2$ and consequently the wave function is $\psi(1,2) = (\langle x_1|\otimes\langle x_2 |)\otimes(|\psi_1\rangle\otimes |\psi_2\rangle) = \psi_1(x_1)\cdot\psi_2(x_2)$ by definition of tensor product. $\endgroup$ – gented Oct 23 '15 at 9:56
$\begingroup$ @GennaroTedesco: I think that's exactly my problem: How can there be destructive interference in $\psi_1(x_1) \cdot \psi_2(x_2)$? If both amplitudes at a certain point on the detector are non-zero, their product cannot vanish too. $\endgroup$ – Bass Oct 23 '15 at 10:01
$\begingroup$ The tensor product of $\mathbb{R}^3$ with itself is $\mathbb{R}^9$, not $\mathbb{R}^6$. We have $\mathbb{R}^n\otimes\mathbb{R}^m = \mathbb{R}^{n\cdot m}$, while the sum is for the direct product $\mathbb{R}^n\oplus\mathbb{R}^m = \mathbb{R}^{n+m}$. $\endgroup$ – ACuriousMind♦ Oct 23 '15 at 14:41
This answers the title
Photons are quantum mechanical entities, described by wavefunctions which are solutions of quantum mechanical equations on which the boundary conditions are imposed.
Photons have extremely weak interactions between them. Photon photon interactions are box diagrams and the probability of a photon interacting with another photon is minuscule. . The classical interference pattern in the Young experiment showed the wave nature of light within classical electrodynamics. The observed single photon at a time interference demonstrates the quantum mechanical nature of photons. Classical light beams are emergent from an under-layer of innumerable photons.
Though a tensor product of individual wave functions may be written down , it has no interference type information (phases) unless the wavefunctions come from the solution of the quantum mechanical equations with the same boundary conditions, i.e. are entangled. This can only happen in dimensions commensurate to h_bar, quantum mechanical dimensions, not the macroscopic dimensions shown in the figure.
In addition , the velocity of light is enormous, there is no way that two independent light sources can be made synchronous in releasing individual photons.
You need to distinguish functions from $\mathbb R^{3N}$ and $\mathbb R^{3N}$ itself.
For instance if you have eigenstates of the harmonic oscillator, they are functions like $\psi_1:\mathbb R^3\rightarrow \mathbb C$ and the the tensor product of a space spanned by such functions is itself spanned by $\psi:\mathbb R^6\rightarrow \mathbb C.$
You can think of functions like $\psi(x_1,y_1,z_1,x_2,y_2,z_2)=\psi_1(x_1,y_1,z_1)\psi_2(x_2,y_2,z_2)$ as like a basis and they are functions from $\mathbb R^6$ and they are factorizable as the product of two functions but their linear combinations ate not factorizable like that but instead is a more general function from $\mathbb R^6.$
And as long as we are correcting misconceptions. In general, the function isn't into the complex numbers, it is into a spin spin state and so the multiparticle version isn't into the complex numbers but is into the joint spin state (which is a tensor product of the single particle spin states).
Now for some further problems. The Schrödinger equation is for non relativistic particles and doesn't handle particle creation. So when you have a source that creates photons you have the problem of particle creation and the problem of a massless particle.
But you could always ask the corresponding question about electron beams and ignore the creation process.
So you might imagine a function from $\mathbb R^6$ but you can't actually have just any function. In particular your function that sends $(x_1,y_1,z_1,x_2,y_2,z_2)\mapsto \psi(x_1,y_1,z_1,x_2,y_2,z_2)$ in facts has to have the symmetry $\psi(a,b,c,d,e,f)=-\psi(d,e,f,a,b,c)$ (and if they Z bosons instead you would have to have $\psi(a,b,c,d,e,f)=+\psi(d,e,f,a,b,c)$).
Which means if there is a nonzero probability of one electron going through the slit there is a nonzero probability of the other electrons going through the same slit.
Which means there are two totally different senses of labelling the electrons. One corresponds to whether you are referring to $(x_1,y_1,z_1)$ or $(x,2,y_2,z_2)$ (which is the sense everyone else uses) and another is which slit is goes through (the sense you are using) so you have to watch out for that.
You can definitely talk about waves corresponding to one electron heading (probability current pointing) towards the slit on the right and the other one heading (probability current pointing) in a completely different direction, a direction orthogonal to the slit.
If those waves evolve to have just one central peak that has some nonzero values at C then you can ask how the state evolves with a state of both beams heading towards the slits. But know because of the symmetry those basic eaves we build things out of aren't just $\psi(x_1,y_1,z_1,x_2,y_2,z_2)=\psi_1(x_1,y_1,z_1)\psi_2(x_2,y_2,z_2)$ they are waves like $\psi(x_1,y_1,z_1,x_2,y_2,z_2)=\psi_1(x_1,y_1,z_1)\psi_2(x_2,y_2,z_2)-\psi_2(x_1,y_1,z_1)\psi_1(x_2,y_2,z_2)$ and so it is not possible to say something like one particle is going through one slit and the other is going through the other.
One think you can do however is slow the rate of particle such that the time to go from slit to screen is so much less than the times between hits that one one particle is going through any slit at a single moment. Hence just one particle is going through any slits at any moment.
But that's the usual single particle case. There is a wave at both slits and a single particle going through. All that has happened now is you have been honest about the effects on and of the other particles that haven't gone through yet.
TimaeusTimaeus
Apparently, I got this wrong from the beginning. Two photons from different sources can interfere (if the experiment is set up appropriately, but it's very hard to set that up).
However, they don't interfere at the quantum mechanical wave function level, but at the level of the electromagnetic field.
According to @ACuriousMind (who explained this to me) this could be solved in quantum optics, and most likely in QED too.
$\begingroup$ I think your answers is itself still a misunderstanding. I don't think the focus on photons and creation and distinct sources helps to understand interference and how it is related (or not related) to multiparticle effects such as tensor products. And the electromagnetic field is an emergent phenomena that happens from a buildup of information from a multiparticle quantum system. So all the physics happens at that deeper quantum level to cause the interference of the classical field, so you've got the causality completely backwards. $\endgroup$ – Timaeus Oct 25 '15 at 16:49
$\begingroup$ @Timaeus: That might well be. The reason I'm asking it this way is the order I've learned things: 1. First I've learned that light's a wave, it shows interference effects. 2. Then I've learned that light is quantized, has particle-like behaviour, and that even a single photon shows interference effects. This can be hard to swallow if you're accustomed to classical physics. 3. Once I accepted that, I wondered: How can the classical, "macroscopic" interference be seen in the tensor product? Now I realized these interferences happen at a different level. $\endgroup$ – Bass Oct 25 '15 at 20:26
$\begingroup$ @Timaeus: I suppose that the "deep quantum level" you're talking about becomes apparent in QFT, where quantum mechanical effects like photon self-interference are coupled with field effects like macroscopic (many-particle) interference effects. I'm an absolute beginner in QFT, so at the moment I'm happy to know that the classical interference does not happen at the wave-function level. $\endgroup$ – Bass Oct 25 '15 at 20:30
$\begingroup$ Don't be happy to be wrong. The only reason classical interference isn't happening at the quantum level is that a classical wave is like temperature, it doesn't exist for a single particle its a property of specific kinds of multiparticle states. The true interference is happening at the level of the quantum wave and that is what ends up causing the classical interference, but you'd have to learn how to see a classical field be built out of many photons before you could see how a classical field has interference. $\endgroup$ – Timaeus Oct 25 '15 at 21:19
$\begingroup$ @Timaeus "Build a classical field out of many photons", is that usually taught in QM or QFT courses? $\endgroup$ – Bass Oct 26 '15 at 7:37
Not the answer you're looking for? Browse other questions tagged quantum-mechanics double-slit-experiment tensor-calculus or ask your own question.
"Reality" of EM waves vs. wavefunction of individual photons - why not treat the wave function as equally "Real"?
Interference of two non-entangled photons, calculation using tensor product of Hilbert spaces
Probability wave speed of dispersion and interference
Why is it difficult to differentiate between interference and diffraction?
Interference of independent electrons
Are double-slit patterns really due to wave-like interference?
Eigenstates of operators on constituent systems in tensor product space
Tensor product postulate
Is double slit interference due to EM/de Broglie waves? And how does this relate to quantum mechanical waves?
Tensor product in quantum mechanics?
Doubts about the use of tensor product In quantum mechanics | CommonCrawl |
MBE Home
Effect of seasonal changing temperature on the growth of phytoplankton
October 2017, 14(5&6): 1071-1089. doi: 10.3934/mbe.2017056
Global stability of the steady states of an epidemic model incorporating intervention strategies
Yongli Cai 1, , Yun Kang 2, and Weiming Wang 1,,
School of Mathematical Science, Huaiyin Normal University, Huaian 223300, China
Sciences and Mathematics Faculty, College of Integrative Sciences and Arts, Arizona State University, Mesa, AZ 85212, USA
* Corresponding author: Weiming Wang
Received July 2016 Accepted October 2016 Published May 2017
Fund Project: The authors would like to thank the anonymous referees for very helpful suggestions and comments which led to improvement of our original manuscript. This research was supported by the National Science Foundation of China (11601179, 61373005 & 61672013), the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (16KJB110003). The research of YK is partially supported by NSF-DMS (Award Number 1313312) and The James S. McDonnell Foundation 21st Century Science Initiative in Studying Complex Systems Scholar Award (UHC Scholar Award 220020472)
Figure(2)
In this paper, we investigate the global stability of the steady states of a general reaction-diffusion epidemiological model with infection force under intervention strategies in a spatially heterogeneous environment. We prove that the reproduction number $\mathcal{R}_0$ can be played an essential role in determining whether the disease will extinct or persist: if $\mathcal{R}_0<1$, there is a unique disease-free equilibrium which is globally asymptotically stable; and if $\mathcal{R}_0>1$, there exists a unique endemic equilibrium which is globally asymptotically stable. Furthermore, we study the relation between $\mathcal{R}_0$ with the diffusion and spatial heterogeneity and find that, it seems very necessary to create a low-risk habitat for the population to effectively control the spread of the epidemic disease. This may provide some potential applications in disease control.
Keywords: Basic reproduction number, disease-free equilibrium, endemic, spatial heterogeneity.
Mathematics Subject Classification: Primary: 35B36, 45M10; Secondary: 92C15.
Citation: Yongli Cai, Yun Kang, Weiming Wang. Global stability of the steady states of an epidemic model incorporating intervention strategies. Mathematical Biosciences & Engineering, 2017, 14 (5&6) : 1071-1089. doi: 10.3934/mbe.2017056
L. J. S. Allen, B. M. Bolker, Y. Lou and A. L. Nevai, Asymptotic profiles of the steady states for an SIS epidemic patch model, SIAM Journal on Applied Mathematics, 67 (2007), 1283-1309. doi: 10.1137/060672522. Google Scholar
L. J. S. Allen, B. M. Bolker, Y. Lou and A. L. Nevai, Asymptotic profiles of the steady states for an SIS epidemic reaction-diffusion model, Discrete and Continuous Dynamical Systems-A, 21 (2008), 1-20. doi: 10.3934/dcds.2008.21.1. Google Scholar
P. M. Arguin, A. W. Navin, S. F. Steele, L. H. Weld and P. E. Kozarsky, Health communication during SARS, Emerging Infectious Diseases, 10 (2004), 377-380. doi: 10.3201/eid1002.030812. Google Scholar
M. P. Brinn, K. V. Carson, A. J. Esterman, A. B. Chang and B. J. Smith, Cochrane review: Mass media interventions for preventing smoking in young people, Evidence-Based Child Health: A Cochrane Review Journal, 7 (2012), 86-144. doi: 10.1002/ebch.1808. Google Scholar
Y. Cai, Y. Kang, M. Banerjee and W. Wang, A stochastic SIRS epidemic model with infectious force under intervention strategies, Journal of Differential Equations, 259 (2015), 7463-7502. doi: 10.1016/j.jde.2015.08.024. Google Scholar
Y. Cai and W. M. Wang, Dynamics of a parasite-host epidemiological model in spatial heterogeneous environment, Discrete and Continuous Dynamical Systems-Series B, 20 (2015), 989-1013. doi: 10.3934/dcdsb.2015.20.989. Google Scholar
Y. Cai and W. M. Wang, Fish-hook bifurcation branch in a spatial heterogeneous epidemic model with cross-diffusion, Nonlinear Analysis: Real World Applications, 30 (2016), 99-125. doi: 10.1016/j.nonrwa.2015.12.002. Google Scholar
Y. Cai, Z. Wang and W. M. Wang, Endemic dynamics in a host-parasite epidemiological model within spatially heterogeneous environment, Applied Mathematics Letters, 61 (2016), 129-136. doi: 10.1016/j.aml.2016.05.011. Google Scholar
R. S. Cantrell and C. Cosner, Spatial Ecology Via Reaction-Diffusion Equations, John Wiley & Sons, Ltd., 2003. doi: 10.1002/0470871296. Google Scholar
J. Cui, X. Tao and H. Zhu, An SIS infection model incorporating media coverage, Journal of Mathematics, 38 (2008), 1323-1334. doi: 10.1216/RMJ-2008-38-5-1323. Google Scholar
J. Cui, Y. Sun and H. Zhu, The impact of media on the control of infectious diseases, Journal of Dynamics and Differential Equations, 20 (2008), 31-53. doi: 10.1007/s10884-007-9075-0. Google Scholar
O. Diekmann, J. A. P. Heesterbeek and J. A. J. Metz, On the definition and the computation of the basic reproduction ratio $R_0$ in models for infectious diseases in heterogeneous populations, Journal of Mathematical Biology, 28 (1990), 365-382. doi: 10.1007/BF00178324. Google Scholar
W. E. Fitzgibbon, M. Langlais and J. J. Morgan, A mathematical model of the spread of feline leukemia virus (FeLV) through a highly heterogeneous spatial domain, SIAM Journal on Mathematical Analysis, 33 (2001), 570-588. doi: 10.1137/S0036141000371757. Google Scholar
W. E. Fitzgibbon, M. Langlais and J. J. Morgan, A reaction-diffusion system modeling direct and indirect transmission of diseases, Discrete and Continuous Dynamical Systems-B, 4 (2004), 893-910. doi: 10.3934/dcdsb.2004.4.893. Google Scholar
J. Ge, K. I. Kim, Z. Lin and H. Zhu, A SIS reaction-diffusion-advection model in a low-risk and high-risk domain, Journal of Differential Equations, 259 (2015), 5486-5509. doi: 10.1016/j.jde.2015.06.035. Google Scholar
A. B. Gumel, S. Ruan, T. Day, J. Watmough and F. Brauer, Modelling strategies for controlling SARS outbreaks, Proceedings of the Royal Society of London B: Biological Sciences, 271 (2004), 2223-2232. doi: 10.1098/rspb.2004.2800. Google Scholar
D. Henry and D. B. Henry, Geometric Theory of Semilinear Parabolic Equations, volume 840. Springer-Verlag, Berlin, 1981. Google Scholar
W. Huang, M. Han and K. Liu, Dynamics of an SIS reaction-diffusion epidemic model for disease transmission, Mathematical Biosciences and Engineering, 7 (2010), 51-66. doi: 10.3934/mbe.2010.7.51. Google Scholar
P. A. Khanam, B. Khuda, T. T. Khane and A. Ashraf, Awareness of sexually transmitted disease among women and service providers in rural bangladesh, International Journal of STD & AIDS, 8 (1997), 688-696. doi: 10.1258/0956462971919066. Google Scholar
T. Kuniya and J. Wang, Lyapunov functions and global stability for a spatially diffusive SIR epidemic model, Applicable Analysis, (2016), 1-26. doi: 10.1080/00036811.2016.1199796. Google Scholar
A. K. Misra, A. Sharma and J. B. Shukla, Modeling and analysis of effects of awareness programs by media on the spread of infectious diseases, Mathematical and Computer Modelling, 53 (2011), 1221-1228. doi: 10.1016/j.mcm.2010.12.005. Google Scholar
C. Neuhauser, Mathematical challenges in spatial ecology, Notices of the AMS, 48 (2001), 1304-1314. Google Scholar
A. Pazy, Semigroups of Linear Operators and Applications to Partial Differential Equations, volume 198. Springer New York, 1983. doi: 10.1007/978-1-4612-5561-1. Google Scholar
R. Peng, Asymptotic profiles of the positive steady state for an SIS epidemic reaction-diffusion model. Part I, Journal of Differential Equations, 247 (2009), 1096-1119. doi: 10.1016/j.jde.2009.05.002. Google Scholar
R. Peng and S. Liu, Global stability of the steady states of an SIS epidemic reaction-diffusion model, Nonlinear Analysis: Theory, Methods & Applications, 71 (2009), 239-247. doi: 10.1016/j.na.2008.10.043. Google Scholar
R. Peng and X. Zhao, A reaction-diffusion SIS epidemic model in a time-periodic environment, Nonlinearity, 25 (2012), 1451-1471. doi: 10.1088/0951-7715/25/5/1451. Google Scholar
R. Peng and F. Yi, Asymptotic profile of the positive steady state for an SIS epidemic reaction-diffusion model: Effects of epidemic risk and population movement, Physica D: Nonlinear Phenomena, 259 (2013), 8-25. doi: 10.1016/j.physd.2013.05.006. Google Scholar
M. H. Protter and H. F. Weinberger, Maximum Principles in Differential Equations, Prentice-Hall, New Jersey, 1967. Google Scholar
M. Robinson, N. I. Stilianakis and Y. Drossinos, Spatial dynamics of airborne infectious diseases, Journal of Theoretical Biology, 297 (2012), 116-126. doi: 10.1016/j.jtbi.2011.12.015. Google Scholar
J. Shi, Z. Xie and K. Little, Cross-diffusion induced instability and stability in reaction-diffusion systems, Journal of Applied Analysis and Computation, 1 (2011), 95-119. Google Scholar
H. L. Smith, Monotone Dynamical Systems: An Introduction to the Theory of Competitive and Cooperative Systems, Mathematical Surveys and Monographs, 41. American Mathematical Society, Providence, RI, 1995. Google Scholar
C. Sun, W. Yang, J. Arino and K. Khan, Effect of media-induced social distancing on disease transmission in a two patch setting, Mathematical Biosciences, 230 (2011), 87-95. doi: 10.1016/j.mbs.2011.01.005. Google Scholar
S. Tang, Y. Xiao, L. Yuan, R. A. Cheke and J. Wu, Campus quarantine (FengXiao) for curbing emergent infectious diseases: lessons from mitigating a/H1N1 in Xi'an, China, Journal of Theoretical Biology, 295 (2012), 47-58. doi: 10.1016/j.jtbi.2011.10.035. Google Scholar
J. M. Tchuenche and C. T. Bauch, Dynamics of an infectious disease where media coverage influences transmission ISRN Biomathematics, 2012 (2012), Article ID 581274, 10 pages. doi: 10.5402/2012/581274. Google Scholar
J. M. Tchuenche, N. Dube, C. P. Bhunu and C. Bauch, The impact of media coverage on the transmission dynamics of human influenza, BMC Public Health, 11 (2011), S5.Google Scholar
N. Tuncer and M. Martcheva, Analytical and numerical approaches to coexistence of strains in a two-strain SIS model with diffusion, Journal of Biological Dynamics, 6 (2012), 406-439. doi: 10.1080/17513758.2011.614697. Google Scholar
P. Vanden Driessche and J. Watmough, Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission, Mathematical Biosciences, 180 (2002), 29-48. doi: 10.1016/S0025-5564(02)00108-6. Google Scholar
J. Wang, R. Zhang and T. Kuniya, The dynamics of an SVIR epidemiological model with infection age, IMA Journal of Applied Mathematics, 81 (2016), 321-343. doi: 10.1093/imamat/hxv039. Google Scholar
W. D. Wang, Epidemic models with nonlinear infection forces, Mathematical Biosciences and Engineering, 3 (2006), 267-279. doi: 10.3934/mbe.2006.3.267. Google Scholar
W. D. Wang and X. Zhao, Basic reproduction numbers for reaction-diffusion epidemic models, SIAM Journal on Applied Dynamical Systems, 11 (2012), 1652-1673. doi: 10.1137/120872942. Google Scholar
D. Xiao and S. Ruan, Global analysis of an epidemic model with nonmonotone incidence rate, Mathematical Biosciences, 208 (2007), 419-429. doi: 10.1016/j.mbs.2006.09.025. Google Scholar
Y. Xiao, S. Tang and J. Wu, Media impact switching surface during an infectious disease outbreak Scientific Reports, 5 (2015), 7838. doi: 10.1038/srep07838. Google Scholar
Y. Xiao, T. Zhao and S. Tang, Dynamics of an infectious diseases with media/psychology induced non-smooth incidence, Mathematical Biosciences and Engineering, 10 (2013), 445-461. doi: 10.3934/mbe.2013.10.445. Google Scholar
M. E. Young, G. R. Norman and K. R. Humphreys, Medicine in the popular press: The influence of the media on perceptions of disease PLoS One, 3 (2008), e3552. doi: 10.1371/journal.pone.0003552. Google Scholar
Figure 1. In the low-risk domain of model (4), (a) the influence of the diffusion coefficient $d$ on $\mathcal{R}_0$; (b) the influence of the spatial heterogeneity of environment on $\mathcal{R}_0$. The parameters are taken as (33)
Figure Options
Download as PowerPoint slide
Figure 2. In the high-risk domain of model (4), (a) the influence of the diffusion coefficient $d$ on $\mathcal{R}_0$; (b) the influence of the spatial heterogeneity of environment on $\mathcal{R}_0$. The parameters are taken as (33)
Hui Cao, Yicang Zhou. The basic reproduction number of discrete SIR and SEIS models with periodic parameters. Discrete & Continuous Dynamical Systems - B, 2013, 18 (1) : 37-56. doi: 10.3934/dcdsb.2013.18.37
Ovide Arino, Manuel Delgado, Mónica Molina-Becerra. Asymptotic behavior of disease-free equilibriums of an age-structured predator-prey model with disease in the prey. Discrete & Continuous Dynamical Systems - B, 2004, 4 (3) : 501-515. doi: 10.3934/dcdsb.2004.4.501
Nicolas Bacaër, Xamxinur Abdurahman, Jianli Ye, Pierre Auger. On the basic reproduction number $R_0$ in sexual activity models for HIV/AIDS epidemics: Example from Yunnan, China. Mathematical Biosciences & Engineering, 2007, 4 (4) : 595-607. doi: 10.3934/mbe.2007.4.595
Roger M. Nisbet, Kurt E. Anderson, Edward McCauley, Mark A. Lewis. Response of equilibrium states to spatial environmental heterogeneity in advective systems. Mathematical Biosciences & Engineering, 2007, 4 (1) : 1-13. doi: 10.3934/mbe.2007.4.1
Gerardo Chowell, R. Fuentes, A. Olea, X. Aguilera, H. Nesse, J. M. Hyman. The basic reproduction number $R_0$ and effectiveness of reactive interventions during dengue epidemics: The 2002 dengue outbreak in Easter Island, Chile. Mathematical Biosciences & Engineering, 2013, 10 (5&6) : 1455-1474. doi: 10.3934/mbe.2013.10.1455
W. E. Fitzgibbon, M.E. Parrott, Glenn Webb. Diffusive epidemic models with spatial and age dependent heterogeneity. Discrete & Continuous Dynamical Systems - A, 1995, 1 (1) : 35-57. doi: 10.3934/dcds.1995.1.35
Yu-Xia Wang, Wan-Tong Li. Combined effects of the spatial heterogeneity and the functional response. Discrete & Continuous Dynamical Systems - A, 2019, 39 (1) : 19-39. doi: 10.3934/dcds.2019002
Yuan-Hang Su, Wan-Tong Li, Fei-Ying Yang. Effects of nonlocal dispersal and spatial heterogeneity on total biomass. Discrete & Continuous Dynamical Systems - B, 2019, 24 (9) : 4929-4936. doi: 10.3934/dcdsb.2019038
Tom Burr, Gerardo Chowell. The reproduction number $R_t$ in structured and nonstructured populations. Mathematical Biosciences & Engineering, 2009, 6 (2) : 239-259. doi: 10.3934/mbe.2009.6.239
Xiaoyan Zhang, Yuxiang Zhang. Spatial dynamics of a reaction-diffusion cholera model with spatial heterogeneity. Discrete & Continuous Dynamical Systems - B, 2018, 23 (6) : 2625-2640. doi: 10.3934/dcdsb.2018124
Stephen Pankavich, Christian Parkinson. Mathematical analysis of an in-host model of viral dynamics with spatial heterogeneity. Discrete & Continuous Dynamical Systems - B, 2016, 21 (4) : 1237-1257. doi: 10.3934/dcdsb.2016.21.1237
Qingyan Shi, Junping Shi, Yongli Song. Hopf bifurcation and pattern formation in a delayed diffusive logistic model with spatial heterogeneity. Discrete & Continuous Dynamical Systems - B, 2019, 24 (2) : 467-486. doi: 10.3934/dcdsb.2018182
Ling Xue, Caterina Scoglio. Network-level reproduction number and extinction threshold for vector-borne diseases. Mathematical Biosciences & Engineering, 2015, 12 (3) : 565-584. doi: 10.3934/mbe.2015.12.565
Gerardo Chowell, Catherine E. Ammon, Nicolas W. Hengartner, James M. Hyman. Estimating the reproduction number from the initial phase of the Spanish flu pandemic waves in Geneva, Switzerland. Mathematical Biosciences & Engineering, 2007, 4 (3) : 457-470. doi: 10.3934/mbe.2007.4.457
Ariel Cintrón-Arias, Carlos Castillo-Chávez, Luís M. A. Bettencourt, Alun L. Lloyd, H. T. Banks. The estimation of the effective reproductive number from disease outbreak data. Mathematical Biosciences & Engineering, 2009, 6 (2) : 261-282. doi: 10.3934/mbe.2009.6.261
Min Zhu, Xiaofei Guo, Zhigui Lin. The risk index for an SIR epidemic model and spatial spreading of the infectious disease. Mathematical Biosciences & Engineering, 2017, 14 (5&6) : 1565-1583. doi: 10.3934/mbe.2017081
Ping Yan. A frailty model for intervention effectiveness against disease transmission when implemented with unobservable heterogeneity. Mathematical Biosciences & Engineering, 2018, 15 (1) : 275-298. doi: 10.3934/mbe.2018012
Francesco Cellarosi, Ilya Vinogradov. Ergodic properties of $k$-free integers in number fields. Journal of Modern Dynamics, 2013, 7 (3) : 461-488. doi: 10.3934/jmd.2013.7.461
Federico Rodriguez Hertz, Jana Rodriguez Hertz. Cohomology free systems and the first Betti number. Discrete & Continuous Dynamical Systems - A, 2006, 15 (1) : 193-196. doi: 10.3934/dcds.2006.15.193
Shuling Yan, Shangjiang Guo. Dynamics of a Lotka-Volterra competition-diffusion model with stage structure and spatial heterogeneity. Discrete & Continuous Dynamical Systems - B, 2018, 23 (4) : 1559-1579. doi: 10.3934/dcdsb.2018059
Yongli Cai Yun Kang Weiming Wang | CommonCrawl |
on Statistical Machine Translation
Document Translation Retrieval Based on Statistical Machine ...
The MetaMorpho Translation System - Statistical Machine Translation
generating rules, in a MetaMorpho grammar, each .... 1 In fact the order is determined by various factors other than ... Mosaic translations are usually subop-.
Moses manual - Statistical Machine Translation
Statistical Machine Translation of English - Machine Translation Archive
Improvements in Phrase-Based Statistical Machine Translation
Statistical Machine Translation between Related Languages
Symmetric Word Alignments for Statistical Machine Translation
Proceedings of the... - Statistical Machine Translation
2004. Pharaoh: A beam search decoder for phrase-based statistical machine translation mod- els. In Proceedings of AMTA. Franz Josef Och and Hermann Ney.
Domain Adaptation in Statistical Machine Translation ...
Query Rewriting Using Monolingual Statistical Machine Translation
The SMT viewpoint has been introduced to the field of IR by Berger and Lafferty. (1999) and Berger et al. (2000) ..... Beeferman, Doug and Adam Berger. 2000.
English-to-Sanskrit Statistical Machine Translation
Improving Persian-English Statistical Machine Translation ...
What's New in Statistical Machine Translation
10a. the modern groups sell strong pharmaceuticals . 10b. los grupos ... office received an email from Mr. Bin. Laden and other ..... Alignment Templates Model.
Statistical Sign Language Machine Translation - arXiv
conversation is stored into a file and ready for training. IJCSI International ... MECHANIC FIX TOMORROW MORNING. ..... WET-WIPES YOU KEEP CAR ?
Statistical Machine Translation Framework for Modeling Phonological ...
Chinese Syntactic Reordering for Statistical Machine Translation
Morphological Analysis for Statistical Machine Translation
For phrase translation models (Och and Ney. 2002), we induce additional merge/deletion analysis on the basis of base noun phrase parsing of Arabic.
Hierarchical Incremental Adaptation for Statistical Machine Translation
NAACL 2006 Europarl Evaluation - Statistical Machine Translation
1. Introduction. The dependency treelet translation system developed at MSR is a statistical MT system that ... source language dependency parser, as well as a word alignment component. [1]. To train a translation .... concatenate the part-of-speech
Statistical Machine Translation of European ... - Semantic Scholar
Dynamic Model Interpolation for Statistical Machine Translation
guage pairs using 6 machine translation evaluation ..... The column labeled 'Hard' corresponds to a system that used hard weights (either 1 or 0) for the .... combining all models from multiple SMT engines ... Pharaoh: a beam search decoder.
Statistical Machine Translation with Terminology
This paper examines the situation when a. Japanese patent is applied to or granted by the. Japanese Patent Office (JPO) without being trans- lated into English.
Machine translation: statistical approach with additional ... - CiteSeerX
Statistical Sign Language Machine Translation: from English
13 downloads 15751 Views 697KB Size Report
translation error minimizing to find the output sentence $\hat{e}$ which ... $\propto\sum_{\overline{f},Z,a}P(a|\overline{e})\prod_{i=1}^{I}\phi(\overline{f}_{a_{i}}|\ ...
数理解析研究所講究録 第 1629 巻 2009 年 87-93
Integer Programming for a Phrase Alignment Problem on Statistical Machine Translation Mikio Yamamoto*, Shunji Umetani\dagger , Mitsuru Koshikawa* and Tomomi Matsui\dagger \dagger *University of Tsukuba, \dagger Osaka University, \dagger \dagger Chuo University
Realization of "Machine Translation (MT)" systems with humanlike capability has been the biggest dream in the research field of natural language processing. In 1947 Warren Weaver pointed out we can regard a decoder for decryption as a machine translation system, if we consider a foreign language as a code language (Weaver 1955). Over the past 60 years, though much many researchers and engineers in this field have tried building such a MT system, the complete one is still a dream. However, recently statistical methods have innovated the technological framework of MT, in which all translation rules are automatically estimated from vast amount of parallel text data (e.g. a collection of English-Japanese translated sentences pairs) using recent high performance computing powers. In the framework of "Statistical Machine TMranslation (SMT)" two problems should be solved. The "modeling problem" is how to model and estimate a valid probability function of output candidate sentences conditioned by an input sentences. The "decoding problem" is how to find the best output sentence which maximizes the estimated probability function. The system to find the best output is called 'decoder' after Weaver. The search space of a decoder is extremely large, so we have to approximate the probability function and maximization process at multiple-stages. In this paper, we will discuss an integer programming approach to solve a part of the decoding problem: the phrase alignment problem.
Phrase-Based Statistical Machine Translation
If we have a probabilistic distribution, $P(e|f)$ , of a sentences in target (or output) language (e.g. English) given a input sentence $f$ in source (or input) language (e.g. Hkench), we can make a chance of translation error minimizing to find the output sentence which maximizes the probability of . This approach is called the "noisy channel model." $e$
$\hat{e}$
$P(\hat{e}|f)$
$\hat{e}=arg\max_{\epsilon}P(e|f)$
$=arg \max_{\epsilon}\frac{P(e)P(f|e)}{P(f)}$
$=arg \max_{e}P(e)P(f|e)$
and $P(f|e)$ are called a language model and a translation model, respectively. Note that the direction of the translation model is inverse (i.e. probabilities of source sentences given a target sentence). Language models are often approximated by the Markov models. A translation model is decomposed into local translation models based on bilingual dictionaries. There are many variations of the decomposition. Brown et al. (1993) proposed a basic model based on word-to-word alignment style in their paper, which is the origin of all recent SMT researches. However, in the past decade, the phrase-to-phrase alignment style has predominated, because its performance is much better than word-to-word based models and it can capsulate complex local translation process in phrases into a bilingual phrase dictionary (Koehn, Och and Marcu 2003). $P(e)$
88 In phrase-based models, the translation model is decomposed into probabilities of phrase "segmentations" of source and target sentences ( and , respectively), an probability of an "alignment" $(a)$ between segmented phrases in target and source sentences, and phrase translation probabilities conditioned by the segmentations and alignment. Then it is marginalized. $\overline{f}$
$\overline{e}$
$P(f|e)= \sum_{\overline{f}=f_{t}^{g,a}}P(f,\overline{f},\overline{e}, a|e)$
$= \sum_{\overline{f}=f,\overline{\epsilon}=\epsilon,a}P(\overline{e}|e)P(a|e,\overline{e})P(f\neg e,\overline{e}, a)P(f|\overline{f}, e,\overline{e}, a)$
Phrase segmentations $f$ and .
$\overline{f}$
have complete information of the original source and target sentences
$e$
$= \sum_{\overline{f}^{\epsilon,a}}P(\overline{\epsilon}|e)P(a|\overline{e})P(\overline{f}|\overline{e}, a)$
In the above model, a target sentence is segmented into a sequence of phrases . An $a=a_{1}^{I}$ (that is, alignment ) which represents a position of a souroe phrase is a sequence of translated from a target phrase . That is, each phrase in is translated into a phrase . If we assume that the segmentation probabilities are uniform, the translation probability can be decomposed into $I$
$a$
$\overline{e}_{1}^{I}(=e)$
$a_{i}$
$\overline{q}$
$\overline{e}_{i}$
$\overline{e}_{1}^{I}$
$\overline{f}_{a}.$
$P(f|e) \propto\sum_{f,\epsilon,a}P(a|\overline{e})P(\overline{f}|\overline{e},a)$
$\propto\sum_{\overline{f},Z,a}P(a|\overline{e})\prod_{i=1}^{I}\phi(\overline{f}_{a_{i}}|\overline{e}_{i})$
is a translation probability of aligned phrases in target and source sentences. is a reordering model which gives probabilities about position moving of phrases in a source sentence. We can decompose a reordering model into Where
$\phi(\overline{f}_{a}.|\overline{e}_{i})$
$P(a|\overline{e})$
$P(a| \overline{e})=\prod_{i=1}^{I}P(eq|\overline{e},a_{1}^{i-1})$
$\approx\prod_{i=1}^{I}P(a_{t}|\overline{e}_{1},u_{-1})$
We call this model a "lexicahzed reordering model" because a decomposed probability is conditioned by the actual phrases 's (Koehn et al. 2005). Avoiding the problem of sparse trainin data, we only consider three reordering types: monotone order (m), swap with previous phrase (s), or discontinuous (d). We define a function type $(j, k)$ as the following. $g$
$(j, k)=\{\begin{array}{ll}m if j=k-1s if j-1=kd others.\end{array}$
Using this function we get the final version of the reordering model. $P(a| \overline{e})\approx\prod_{i=1}^{I}P(type(a_{i-1},a_{i})|\overline{e}_{i})$
Phrase Alignment Problems
Since the search space to compute the best target sentence in the phrase-based SMT model described in the previous section is large, the decoding problem is approximated as the following. $\hat{e}=arg\max_{e}P(e)P(f|e)$
$=arg \max_{\epsilon}P(e)\sum_{\overline{f}=f,\epsilon=e,a}P(a|\overline{e})P(\overline{f}|\overline{e},a)$
$\approx arg\max_{e}P(e)mxP(a|\overline{e})P(\overline{f}|\overline{e}, a)P=f^{\frac{a}{e}}=e,a$
Inside maximization for and in the last formula is called a "phrase alignment problem." This approximation of summing by maximization is justified by the intuitive fact that probability mass of only a few correct segmentation and algnment is predominantly large and the other probability can be ignored. However, although the search space was dramatically reduced by this approximation, it remains too large to get the exact result. A real decoder uses a heuristic search algorithm and finds out a pseudo best result from the very limited space over the four variables of and . In the next section, we will formulate the phrase alignment problem as an integer linear programming to develop the algorithm to compute the exact best result for three variables of and , but . In the remains of this section, we define the realistic phrase alignment problem, because the current model of SMT becomes a little bit more complicated. In real SMT systems, the noisy channel model is extended to integrate the other information as effective for translation quality as possible using the loglinear model. This approach is called the "discriminative model." For example, in the noisy channel model we used only $P(f|e)$ as the (inverse) translation model. But it is known that the original translat\'ion model $P(e|f)$ is also effective for improving translation quality. For another example, we used to compute a reordering probability. But we can improve translation quality to use richer probability and reverse directional probability . We can integrate the basic model and such additional probabilities or features into the log-linear model. $a$
$\overline{f},\overline{e}$
$e,\overline{f},\overline{e}$
$P(type(a_{i-1}, *)|\overline{e}_{i})$
$P(type(a_{i-1},a_{i})|\overline{e}_{i},\overline{f}_{ai})$
$\underline{c}onditioned$
$P(type(*,a_{i+1})|\overline{e}_{i}, f_{a_{*}}.)$
$logP(e|f)=C+ \sum_{i}w_{i}f_{i}(e, f)$
$f_{i}(e, f)$ is the i-th probability or feature, is the weight for it and $C$ is a normalization constant which can be ignored in the maximization problem. The weights are determined by the discriminative training method. In the case of SMT, we commonly use a minimum error rate training (MERT) (Och and Ney 2003) which adjusts weights to maximize an automatic evaluation measure for translation quality such as BLEU (Papineni et al. 2002). An example of the realistic phrase alignment problem is the following. $w_{i}$
$(f_{t}^{\frac{\hat}{}} \hat{\frac{}{\epsilon}},\hat{a})=arg\max P(a|\overline{e})P(\overline{f}|\overline{e}, a)$
$f,\overline{e},a$
$=arg \max w_{p1}\prod_{i=1}^{I}\phi(\overline{f}_{a_{i}}|\overline{e}_{i})\overline{f}^{\epsilon,a}xw_{p2}\prod_{i=1}^{I}\phi(\overline{e}_{i}|\overline{f}_{a:})$
$\cross w_{r1}\prod_{i=1}^{I}P(type(a_{i-1},a_{i})|\overline{e}_{1},\overline{f_{i}})xw_{r2}\prod_{i=1}^{I}P(type(a_{i,\Phi+1})|\overline{q},\overline{f_{i}})$
All parameters of weights and values of probability functions are given. Fortunately, we can compile all given parameters for each phrase pair into two kinds of integrated parameters and indexed by an entry for a phrase pair in the dictionary. The integrated parameters is determined by just a phrase pair and is determined by a phrase pair and a reordering type. When a phrase alignment system is given a sentence pair $f$ and , at the first it makes up a table of candidates of phrase pairs to match partly to $f$ and looking up in the dictionary. Then, from the $p$
$d$
90 $1$
$1$
Phrase pair 1
$\ovalbox{\tt\small REJECT}$
$|$
$————–r—————$ $1$
Figure 1: Candidates of phrase pairs for the input sentence pair.
table it computes a set of phrase pairs which covers all words in the sentence pair and maximizes the object function. Note that a output set of phrase pairs from the table determines the phrase alignment at the same time. To make a problem simple, we consider that the table of candidates of phrase pairs is also given as the part of the problem. The solution of the problem is a set of selected phrase pairs from the table. The table of candidates of phrase pairs for the input sentence pair is defined as the following. We assume that there are four phrase pairs shom in Figure 1 as candidates for the input sentence pair $f=f_{1},$ and $e=e_{1},$ where and is words in source and target language, respectively. In the Figure 1, boxes denote phrases in a input sentence and lines denote alIgnments between phrases in source and target sentences. The table of candidate phrase pairs is represented by two matrices $F$ and . The i-th column vectors in $F$ and $E$ denote word sequences of phrases of the i-th phrase pair, and words in the phrase are expressed with 's and words out of the phrase with $0' s$ . For example, the next $F$ and $E$ correspond to four candidates of the phrase pairs in Figure 1. $f_{2},$ $f_{3},$ $f_{4}$
$e_{2},$ $e_{3}$
$f_{i}$
$e_{i}$
$F=(\begin{array}{llll}1 1 0 01 0 1 00 0 0 10 0 0 1\end{array}),E=(\begin{array}{llll}1 0 1 01 1 0 00 0 0 1\end{array})$
In the next section, we fomulate phrase alignment problems as an integer linear programning when the table and parameters are given.
Phrase Alignment as an integer programming problem Simple Phrase Alignment Problems
In this subsection, we consider a simple phrase alignment problem, in which we assume the reordering probability is unifom, that is the reordering model is ignored. At the first, we introduce binary variables $x_{k}\in\{0,1\}$ which represent whether the k-th phrase pair in the table selected (1) or not (0) as a member of an output set of phrase pairs. DeNero and Klein (2008) fomulate this simple version
91 of the phrase alignment problem as the following.
$\sum_{k=1}^{K}xkp_{k}$
subject to $Fx=1$ ,
$Ex=1$ ,
$x_{k}\in\{0,1\}$
Where is a compiled parameter of the k-th phrase pair, pairs and 1 is 1 . $p_{k}$
$($
$(1\leq k\leq K)$ $K$
is the number of candidates of phrase
$\cdots 1)^{T}$
Full Phrase Alignment Problems
It's difficult to extend the formulation in the previous subsection to one of incorporating the reordering models, because auxiliary variables $xk$ have no information of position relationship between phrase pairs. So we introduce a graph representation of the table of phrase pairs in the target side instead of $E$ and the second auxiliary variables . Figure 2 shows an example of the graph representation of the candidate phrases in the target side sentence. Boxes denote phrases and lines denote connections between adjacent phrases. Using the graph representation, we incorporate the compiled parameters for the reordering models as weights on the connections. The feasible sets of phrases are expressed with paths starting $hom$ the node to the node . We can clearly ovserve two feasible paths on the graph in Figure 2. New auxiliary binary variables means whether the a-th connection is on the feasible path $(y_{a}=1)$ or not $(y_{a}=0)$ Using new auxiliary variables , we can formulate a full phrase alignment problem as the following. $y_{a}$
$d_{a}$
$s$
$g$
$y_{a}$
$k=1 L^{\wedge}x_{k}p_{k}K+\sum_{a=1}^{A}y_{a}d_{a}$
$My=b$ , $Ny=x$ ,
$(1 \leq k\leq K)$
$y_{a}\in\{0,1\}$
$(1 \leq a\leq A)$
Where the parameters in the object function denotes the compiled parameters for the reordering probabilities and its weights. We can regard as a weight on the a-th connection in the graph representation. The equation $My=b$ represents the "conservation law of flow," which is the standard $d_{a}$
Figure 2: An example of the graph representation of the target side phrase candidates in Figure 1.
92 technique for guaranteeing valid paths. The equation $Ny=x$ represents the relationship between and . If the connection variable is 1, of both side of the connection must be 1. The symbol denotes the number of connections of the phrase graph in the target side. For example, the "conservation law of flow" for Figure 2 is the following. $y$
$x_{k}$
$(00001$
$-100001$
$\frac{0}{0001}1$
$x$
$-100001)(\begin{array}{l}y_{1}y_{2}y_{3}y_{4}y_{6}y_{6}\end{array})=(\begin{array}{l}-l00001\end{array})$
The first item is $M$ and each column corresponds to the nodes (phrase) , phrasel, ...phrase4 and . The fifth line vector of $M$ represents the conservation law for the fourth phrase (node); $y_{4}+y_{6}=y_{6}$ . An example of the equation $Ny=x$ for Figure 2 is the following. $s$
$(\begin{array}{llllll}1 0 0 0 0 00 l 0 0 0 00 0 1 0 0 00 0 0 1 1 0\end{array})(\begin{array}{l}y_{1}y_{2}y_{3}y_{4}y_{5}y_{6}\end{array})=(\begin{array}{l}x_{1}x_{2}x_{3}x_{4}\end{array})$
Experiments and Conclusion
We build up a dictionary of phrase pairs the number of entries of wihch is 60 million from 2 million parallel Japanese-English sentence pairs of the training data at the NTCIR-7 (Fujii et al. 2008) using the script within Moses package (Koehn et al. 2007). We used CPLEX version 11.0 as the solver for the integer programming and solved the full phrase alignment problem for a few hundred thousand sentence pairs for the test. Figure 3 shows an example of a phrase alignment for a real JapaneseEnglish sentence pair computed by CPLEX. In spite of such realistic setting and data, average time to compute the best alignment for one sentence pair was about a few hundred milliseconds. We plan to apply the method in this paper to the reranking problem in order to improve the quality of statistical machine translation.
Japanese (source langauge)
English (target langauge) Figure 3: An example of a phrase alignment result for a Japanese-English sentence pair.
93 References Brown, P. F., S. A. Della Pietra, V. J. Della Pietra and R. L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, Vol.19, No.2,
DeNero, J. and D. Klein. 2008. The complexity of phrase alignment problems. In Proc. of ACL-08, pages 25-28. Fujii, A., M. Yamamoto, M. Utiyama and T. Utsuro. 2008. Overview of the patent translation task at the NTCIR-7 workshop. In Proc. of NTCIR-7, pages $38\mathbb{H}00$ .
Koehn, P., A. Axelrod, A. B. Mayne, C. Callison-Burch, M. Osbome and D. Talbot. 2005. Edinburgh system description for the 2005 IWSLT speech translation evaluation. In Proc. of IWSLT-2005. Koehn, P., F. J. Och and D. Marcu. 2003. Statistical phrase-based translation. In Proc. of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics,
Koehn, P. et al. 2007. Moses: open source toolkit for statistical machine translation. In Proc. of ACL-07, demonstration session. Och, F. J. and H. Ney. 2003. Minimum error rate training in statistical machine translation. In Proc. of ACL-03, pages 160-167.
Papineni, K., S. Roukos, T. Ward and W. Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proc. of ACL-02, pages 311-318. Weaver, Warren. 1955. Translation. Machine 7translation of Languages: Fourteen Essays, W.N.Locke and A.D. Booth (eds.). (Reprinted in Readings in Machine IT}anslation, S.Nirenburg, H.Somers and Y.Wilks (eds.), MIT Press, 2003)
Report "on Statistical Machine Translation" | CommonCrawl |
Antitumor activity of vorinostat-incorporated nanoparticles against human cholangiocarcinoma cells
Tae Won Kwak1,
Do Hyung Kim2,
Young-Il Jeong1 &
Dae Hwan Kang1,3
Journal of Nanobiotechnology volume 13, Article number: 60 (2015) Cite this article
The aim of this study is to evaluate the anticancer activity of vorinostat-incorporated nanoparticles (vorinostat-NPs) against HuCC-T1 human cholangiocarcinoma cells. Vorinostat-NPs were fabricated by a nanoprecipitation method using poly(dl-lactide-co-glycolide)/poly(ethylene glycol) copolymer.
Vorinostat-NPs exhibited spherical shapes with sizes <100 nm. Vorinostat-NPs have anticancer activity similar to that of vorinostat in vitro. Vorinostat-NPs as well as vorinostat itself increased acetylation of histone-H3. Furthermore, vorinostat-NPs have similar effectiveness in the suppression or expression of histone deacetylase, mutant type p53, p21, and PARP/cleaved caspase-3. However, vorinostat-NPs showed improved antitumor activity against HuCC-T1 cancer cell-bearing mice compared to vorinostat, whereas empty nanoparticles had no effect on tumor growth. Furthermore, vorinostat-NPs increased the expression of acetylated histone H3 in tumor tissue and suppressed histone deacetylase (HDAC) expression in vivo. The improved antitumor activity of vorinostat-NPs can be explained by molecular imaging studies using near-infrared (NIR) dye-incorporated nanoparticles, i.e. NIR-dye-incorporated nanoparticles were intensively accumulated in the tumor region rather than normal one.
Our results demonstrate that vorinostat and vorinostat-NPs exert anticancer activity against HuCC-T1 cholangiocarcinoma cells by specific inhibition of HDAC expression. Thus, we suggest that vorinostat-NPs are a promising candidate for anticancer chemotherapy in cholangiocarcinoma.
Local delivery strategy of vorinostat-NPs against cholangiocarcinomas.
Vorinostat (suberoylanilide hydroxamic acid, SAHA) is known to one of histone deacetylase inhibitor (HDACi) [1]. In acetylation process, acetyl group in one molecule is transferred to another and deacetylation reaction is to remove acetyl group from a molecule. HDAC has an important role in the transcriptional regulation through stabilization of DNA-histone interaction and deacetylation process is known to have relationship with carcinogenesis [1]. HDACis such as vorinostat act as a chelator for zinc ions in the active site of histone deacetylases (HDACs) and vorinostat is regarded as a promising cancer chemotherapeutic drug [1]. Vorinostat has been approved by the FDA for treatment of cutaneous T cell lymphoma [1, 2]. Accumulation of acetylated histones and acetylated proteins has correlation with p21 WAF1 gene expression, apoptotic signals such as mutant-type p53 and active-type caspase expression, cell differentiation and cell death [1–5]. In recent clinical trials, the safety and anticancer efficacy vorinostat has been evaluated against gastrointestinal (GI) cancer patient [3]. In the results of these trials, the report suggested that vorinostat can be used as an effective anticancer agent for GI cancer [3]. Vorinostat induced both apoptosis and autophagy in gastric cancer cell lines and has shown clinical benefits for gastric cancer patients [6, 7]. The anticancer activity of vorinostat has also investigated against colon cancer, glioma, lung cancer, breast cancer and hepatocellular carcinoma in preclinical or clinical trials, both as a single treatment or combination with other types of anticancer drugs [5–8]. We previously reported that vorinostat exhibits anticancer efficacy against HuCC-T1 human cholangiocarcinoma (CCA) cells [9]. In this report, we show that vorinostat is involved in growth inhibition, apoptosis of HuCC-T1 cells in vitro and anti-tumor activity of HuCC-T1 cell-bearing xenograft model in vivo.
CCA is a malignant tumor that occurs in the epithelium of the biliary tract [10]. Although the rate of incidence of CCA has increased worldwide, the reason for its increase remains unclear [11, 12]. Current treatment options for CCA include surgical resection, radiotherapy, chemotherapy, stent displacement and immunotherapy [13–15]. Although surgical resection is believed to be a curative treatment option for CCA, patients with CCA are frequently diagnosed at an unresectable stage [16]. Chemotherapeutic approaches for CCA are considered to increase patient survival and quality of life [12]. Various chemotherapeutic agents such as gemcitabine, cisplatin, oxaliplatin, capecitabine and 5-fluorouracil have been tested as single agents or in combination in clinical trials for CCA [17, 18]. Even though the combination of some anticancer agents have been reported to have therapeutic advantages, systemic chemotherapy using conventional anticancer agents is still ineffective and shows an insignificant increase in survival period. In fact, current standard chemotherapeutic treatment for CCA patients is normally gemcitabine plus cisplatin [18, 19]. Even though combination of these chemotherapeutic agents delayed onset of progression, most cases still succumbed to CCA and has no significant advances in survivability [20]. Because most of chemotherapeutic agents showed minimal survival gain and chemotherapeutic agents have difficulties in delivery to CCA, targeted therapy for CCA patients has been proposed [21]. Novel treatment options for a chemotherapeutic approach for CCA are required to improve patient survivability.
Nanomedicine such as nanoparticles, liposomes and polymeric micelles have advantages in targeting malignant solid tumor because they have small sizes of <1000 nm and unique structures that can amplify the anticancer activity of conventional drugs [22–27]. In recent decades, nanomedicine-based drug delivery systems have also been investigated to target CCA cells for diagnosis and chemotherapeutic treatment [22–27]. Magnetic nanoparticles were reported to be a useful device for the diagnosis of intrahepatic CCA [22, 23]. Magnetic drug nanoparticles enveloping chemotherapeutic drugs were reported to be an effective treatment for the inhibition of CCA cell proliferation in a tumor xenograft model of nude mice [24]. Totawa et al. reported that hybrid liposomes were specifically accumulated in human CCA cells and induced cell cycle arrest [25]. In our previous study, chitosan nanoparticles incorporating all-trans retinoic acid were demonstrated to be effective in inhibiting the invasion, migration and proliferation of human CCA cells [26]. Stimuli-sensitive nanoparticles can also be considered to target CCA cells [27].
In this study, we prepared vorinostat-NPs using biodegradable polymers to assess their anticancer effects on HuCC-T1 cells in vitro and in vivo. The efficacy of vorinostat and vorinostat-NPs in HuCC-T1 cells was studied using western blotting, immunohistochemistry and a HuCC-T1 xenograft model in nude mice.
Characterization of vorinostat-incorporated nanoparticles
Vorinostat-incorporated nanoparticles (vorinostat-NPs) were fabricated using the nanoprecipitation method. Vorinostat and poly(dl-lactide-co-glycolide)/poly(ethylene glycol) (LGE) block copolymer was dissolved in organic solvent. Then this solution was poured into aqueous solution and the organic solvent was removed. At these procedures, there is no apparent precipitation of vorinostat in the aqueous phase. To remove remained organic solvents and emulsifier (pluronic F68), vorinostat-NPs were separated by centrifugation and washing process. As shown in Table 1, experimental drug loading of vorinostat into the nanoparticles was slightly lower than the theoretical value. This might be due to the liberation of vorinostat from the nanoparticles during the preparation procedure, thereby causing a decrease in the effective drug content. Figure 1 shows the characteristics of vorinostat-NPs. X-ray powder diffractograms (XRD) measurement of lyophilized nanoparticles was employed to confirm whether or not free drug was remained in the nanoparticle solution as shown in Fig. 1a. As shown in Fig. 1a, vorinostat alone has sharp crystalline peaks while empty nanoparticles (empty-NPs) have broad peak properties. Interestingly, vorinostat-NPs also have broad peak properties as similar to empty-NPs whereas the physical mixture of empty-NPs and vorinostat has both sharp and broad crystalline peaks. These results indicated that the intrinsic crystallinity of vorinostat was decreased by incorporation into the polymer nanoparticles and that vorinostat was molecularly distributed in the nanoparticle matrix. Furthermore, these results demonstrated that no extensive precipitation of drug has been occurred during fabrication procedure and then free drug remained was minimized. Additionally, these results also indicated that vorinostat was properly payloaded into the nanoparticles.
Table 1 Evaluation of drug contents of vorinostat-NPs
Characteristics of vorinostat-NPs. a XRD chromatogram; b average particle size; c TEM images; d drug-release kinetics. Empty-NPs were prepared similar to vorinostat-incorporated nanoparticles (vorinostat-NPs) in the absence of vorinostat
The average particle size was measured to investigate colloidal properties of vorinostat-NPs as shown in Fig. 1b. Particles size of nanoparticles was slightly increased when vorinostat was incorporated into the nanoparticles (Fig. 1b), but the vorinostat content of the nanoparticles did not significantly affect the change in particle diameter. As shown in Fig. 1c, empty-NPs and vorinostat-NPs have spherical shapes under transmission electron microscopy (TEM) observation with small particle sizes of <200 nm. Figure 1d shows the release properties characteristics of vorinostat from the nanoparticles. The initial burst release of the drug was observed for 1 day and, subsequently, vorinostat-NPs revealed sustained behavior over 5 days. In particular, the release rate of vorinostat was slower when the vorinostat content in the nanoparticles was higher. These phenomena might be due to that hydrophobic drugs in the nanoparticles can be aggregated at higher drug content and then aggregated drug released slowly due to the limited solubility in the aqueous phase. Gref et al. reported that increased drug contents of hydrophobic drugs into core–shell type nanospheres lead to aggregation or crystallization in the cores of the nanospheres [28]. The crystallization or aggregation of hydrophobic drug in the core of the nanospheres led slow rate of dissolution and diffusion of drugs into the aqueous phase [28]. Then, the release rate of hydrophobic should be slower at higher drug loading than lower drug loading.
In vitro anticancer activity of vorinostat-incorporated nanoparticles
Figure 2 shows the anticancer activity of vorinostat and vorinostat-NPs against HuCC-T1 cells. For cytotoxicity index, viability of cells was checked in the serum-free media as shown in Fig. 2a. The viability of the cells was slightly decreased according to the increase of vorinostat concentration. In particular, vorinostat-NPs has higher cytotoxicity than vorinostat alone even though empty nanoparticles have small effect on the viability of cells. In growth inhibition study, both vorinostat and vorinostat-incorporated nanoparticles have low inhibitory effects on the growth of HuCC-T1 cells as shown in Fig. 2b. These results were compared at vorinostat concentration of 1 and 5 μg/ml as shown in Fig. 2c. Both vorinostat and vorinostat nanoparticles has small inhibitory effects on the cell growth at 1 and 5 μg/ml while empty nanoparticles did not affect to the growth of HuCC-T1 cells. Figure 2d shows the apoptosis/necrosis analysis of HuCC-T1 cells upon treatment with vorinostat and vorinostat-incorporated nanoparticles. As shown in Fig. 2d, e, the apoptosis and necrosis of HuCC-T1 cells were significantly increased at 5 μg/ml of a vorinostat concentration when vorinostat or vorinostat-incorporated nanoparticles were treated. The empty nanoparticles showed minimal effect to cells.
Anticancer activity of vorinostat and vorinostat-NP against HuCC-T1 cells in vitro. 3 × 105 cells for cytotoxicity (a) and 3 × 104 cells for growth inhibition (b) test were exposed to vorinostat or vorinostat-NPs for 24 h, respectively. For cytotoxicity study, serum-free media were used and grow inhibition test was performed with 10 %-FBS supplemented media. c Comparison of cytotoxicity and growth inhibition of HuCC-T1 cells following treatment with vorinostat or vorinostat-NPs at 1and 5 μg/ml. d, e Apoptosis and necrosis analysis of HuCC-T1 cells. For apoptosis or necrosis, 1 × 106 cells were exposed to vorinostat or vorinostat-NPs for 24 h. FITC-conjugated Annexin V was used to analyze apoptosis and PI was used to analyze necrosis
Figure 3 shows the western blot analysis of molecular signals in HuCC-T1 cells upon treatment with vorinostat and vorinostat-NPs. As shown in Fig. 3a, acetylated histone H3 was evidently increased upon treatment with 5 μg/ml vorinostat and vorinostat-NPs, whereas empty-NPs have no effect on this signal. As shown in Fig. 3b, quantitative analysis showed a significant increase in ac-histone H3 expression in HuCC-T1 cells, indicating that vorinostat-incorporated nanoparticles were effective in acetylation of Histone H3 as well as vorinostat itself. Furthermore, treatment of vorinostat-incorporated nanoparticles were also effective in decreasing expression of HDAC 1 and HDAC 3 as well as vorinostat itself even though HDAC2 expression was almost similar or slightly higher than control. In fact, HDAC2 and 3 expression of vorinostat-incorporated nanoparticles were slightly higher than vorinostat itself at 5 μg/ml concentration (Fig. 3a, c). These results might be due to the sustained release properties of nanoparticles, i.e. nanoparticles slowly released the drug into the cell culture medium and lower concentration of intact vorinostat can affected to the cellular expression of HDAC2 and 3. Interestingly, empty-NPs affected HDAC expression in cancer cells at high concentrations (50 μg/ml). Because the reason for these results is not clear, further investigations are required in the future. One possible explanation of these results is that high concentrations of empty-NPs show some cytotoxic effects in the in vitro cell culture environment, even though LGE block copolymer has already been approved as a biocompatible polymer and approved for human use by the US FDA [29].
Western blot analysis of Ac-histone H3 and HDAC expression after treatment with vorinostat or vorinostat-NPs. a expression of Ac-histone H3 and HDACs; b quantitation of Ac-histone H3 expression; c quantitation of HDAC expression
Figure 4A shows that the level of mutant p53 was significantly decreased both by vorinostat and vorinostat-NPs, whereas that of wild-type p53 was not significantly changed. The expression of p21 was significantly increased upon treatment with both vorinostat and vorinostat-NPs. Upon immunocytochemical staining of HuCC-T1 cells, a decrease in mutant-type p53 and an increase in p21 were also observed. Expression of each protein was normalized to the GAPDH as cytosolic control and lamin B as nuclear control. Figure 4B shows apoptosis signals in HuCC-T1 cells. As shown in Fig. 4B, a decrease in the level of poly-ADP ribose polymerase (PARP) precursor and an increase in cleaved PARP (at 24 kDa) were observed. Furthermore, decreases in caspase-3 and -9 precursors were also observed upon treatment with vorinostat and vorinostat-NPs. And, the level of cleaved caspase-3 was increased, as shown in Fig. 4B. Including the result of Bax expression, these results indicated that vorinostat-NPs induced apoptosis and affected the expression of apoptotic molecular signals to the same extent as did vorinostat. On treatment with vorinostat or vorinostat-NPs, actin was disrupted in HuCC-T1 cells as shown in Fig. 4C, indicating that vorinostat-NPs have a similar effect on apoptosis and protein changes at the cellular level as does vorinostat.
Expression of signals after treatment with vorinostat and vorinostat-NPs. A a Western blot analysis of apoptotic signal expression in HuCC-T1 cells following treatment with vorinostat or vorinostat-NPs. b–d Immunocytochemistry (×1200) of wild-type p53, mutant p53 and p21 of HuCC-T1 cells after treatment with vorinostat or vorinostat-NPs. B Western blot analysis of PARP precursor, PARP, caspase-3 precursor, cleaved caspase-3, caspase-9 precursor and Bax. C Western blot analysis and immunocytochemistry of actin
In vivo antitumor activity of vorinostat-incorporated nanoparticles
The increase in tumor volume and changes in body weight were monitored. As shown in Fig. 5a, the size of the tumor rapidly increased with time following treatment of empty-NPs. However, the volume of the tumor was significantly suppressed by treatment of vorinostat or vorinostat-NPs. Body weight did not significantly change with any of the treatments, indicating that neither vorinostat nor nanoparticles were significantly toxic to the mice, as shown in Fig. 5b. Interestingly, vorinostat-NPs showed higher efficacy of tumor growth inhibition: tumor volume following treatment of vorinostat-NPs was almost 50 % smaller than following treatment of vorinostat. Terminal deoxynucleotidyl transferase dUDT nick-end-labeling (TUNEL) staining of solid tumors supported these results, as shown in Fig. 5c: upon treatment of vorinostat-NPs, apoptosis was higher than treatment of vorinostat, whereas minimal apoptosis was seen upon treatment of empty-NPs. Interestingly, the expression of ac-histone H3 in tumor tissue was significantly increased upon treatment with vorinostat-NPs compared to empty-NPs and vorinostat as shown in Fig. 6. Furthermore, the expression levels of HDAC 1, 2, 3 and 4/5/7 were relatively decreased upon treatment of vorinostat and vorinostat-NPs, compared to treatment of empty-NPs. These results indicated that subcutaneous injection of vorinostat-NPs has similar or higher antitumor activity compared to vorinostat.
Antitumor activity of vorinostat or vorinostat-NP in HuCC-T1 tumor xenograft mice model. (dose 50 mg vorinostat/kg) a tumor volume; b body weight; c TUNEL assay (×400) of extracted tumor tissues. HuCC-T1 human CCA cells (1 × 107) were implanted into the back of BALb/C nude mouse. 2 weeks later, vorinostat, empty-NPs or vorinostat-NPs were injected subcutaneously beside the solid tumor and the day of drug injection was set as day 0. Tumor volume was calculated using the formula V = (a × [b]2)/2, with a being the largest and b being the smallest diameter. For TUNNEL assay, tumors were isolated and fixed with 4 % formamide after 30 days of injection
Immunohistochemistry (×400) of tumor tissues from HuCC-T1 cell bearing xenograft mouse model. To study HDAC expression, tumor tissues were stained with acetyl histone H3, HDAC1, HDAC2, HDAC3, and HDAC4/5/7 antibodies
To clarify the reason for the higher antitumor activity of the nanoparticles, near infrared (NIR)-dye-conjugated nanoparticles (NIR-NPs) were subcutaneously injected into a normal region as well as the site of the tumor, as shown in Fig. 7. Intact NIR-dye treated mice showed a rapid decrease in both the normal region and the tumor site. However, nanoparticle treated mice showed quite different results, i.e. nanoparticles remained longer at the tumor site than at the normal region. In particular, the strongest fluorescence intensity was observed at the center of the solid tumor at 1 day after treatment with NIR-NPs, whereas intact NIR-dye revealed the strongest fluorescence intensity in the region surrounding the solid tumor. The treatment of NIR-NPs revealed strong fluorescence intensity after 8 days for injection. These results confirmed that vorinostat-NPs have higher antitumor activity in vivo than does vorinostat.
In vivo fluorescence imaging of HuCC-T1 tumor xenograft mice model. NIR dye-incorporated NPs were simultaneously injected into a normal region and beside the tumor region of the back of mouse. Mouse were observed with the Maestro 2™ In Vivo imaging system at 780 nm
HDAC expression in cancer cells has a critical role in remodeling of chromatin structure, gene expression, cell cycle regulation and differentiation [30, 31]. Increased HDAC activity is known to result in malignant tumor behavior [30]. In particular, Morine et al. reported that HDAC1 expression in intrahepatic CCA is significantly correlated with the stage of carcinogenesis and is related to malignant behaviors of cancer, such as angiogenesis, lymph node metastasis and vascular invasion [30]. They found that the survival rate of the HDAC1-positive group was significantly worse than that of the negative group. Furthermore, HDAC6 is also known to have a strong relationship with the stage of CCA and can thus be considered a clinic-pathological parameter for CCA [32]. Higher HDAC expression is associated with shorter survival times in gastric cancer patients and is regarded as an independent prognostic marker for gastric cancer [33]. Inhibition of the molecular action of HDAC using HDAC inhibitors is a promising candidate for cancer chemotherapy [34, 35].
HDAC inhibitors exert anticancer activities on human cancer cells through cell cycle arrest, growth arrest, activation of apoptotic pathways, autophagic cell death, reactive oxygen species (ROS)-mediated cell death and mitotic cell death etc. [35, 36]. HDAC inhibitors tightly bind to DNA histones and prevent the transcription/expression of tumor suppressor genes by inducing histone acetylation [35, 36]. Several HDAC inhibitors which inhibits class I and II HDACs [36] show minimal intrinsic toxicity to human bodies but show dramatic anticancer efficacy for cancer [37]. In clinical trials, oral administration of vorinostat with promising anticancer activity was well tolerated in patients with GI cancers [3]. In animal tumor xenograft studies, intraperitoneal injection of vorinostat induced tumor necrosis and inhibited the growth of colon tumors through the inhibition of different subtypes of HDACs [5].
In our study, we fabricated vorinostat-NPs for treatment of CCA. As shown in Fig. 1, vorinostat-NPs have spherical shapes and small diameter <100 nm. They showed sustained drug release behavior over 5 days as shown in Fig. 1d. Especially, nanoparticles having higher vorinostat contents (vorinostat-NP20) showed slower release kinetic. These results might be due to that hydrophobic drug can be aggregated at higher drug loading contents and release of aggregated drug can be delayed compared to the nanoparticles with lower drug contents (vorinostat-NP10). These phenomena were frequently reported by several investigators. To assess biological activity, vorinostat-NPs were treated to HuCC-T1 cells and their anticancer activity was compared to vorinostat itself at in vitro and in vivo. Vorinostat-NPs as well as vorinostat properly inhibited the growth of HuCC-T1 cells in vitro and the growth of tumor volume in vivo through inhibition of HDAC expression in the HuCC-T1 cells and tumor tissues, as shown in Figs. 3, 5 and 6. Furthermore, vorinostat-NPs have higher antitumor activity compared to vorinostat, due to the sustained release properties of nanoparticles. The apoptotic signals of HuCC-T1 cells were also significantly altered upon treatment with vorinostat or vorinostat-NPs in vitro and in vivo as shown in Figs. 3, 4 and 5. As shown in Fig. 4, levels of mutant p53 were significantly decreased with little change in wild-type p53 and this suppression is correlated with the expression of PARP/cleaved caspase-3. Other researchers have reported that the expression of mutant p53 was significantly decreased in a dose- and time-dependent manner [38, 39]. For example, Yan et al. found that disruption of HDAC8 expression significantly inhibits proliferation of cancer cells having mutant-type p53 irrespective of wild-type p53. In their results, colony formation of mutant-type p53 cell lines SW480 was remarkably decreased by treatment of vorinostat while wild-type p53 cell lines HCT116 showed little changes [40]. Furthermore, both vorinostat-NPs and vorinostat were able to arrest cell growth and induced apoptosis as shown in Figs. 3 and 4. And, they increased the expression of p21, a cyclin-dependent kinase inhibitor I, in a dose-dependent manner and this is correlated with other apoptosis signals. Thenaa et al. also reported that vorinostat inhibits mammary cell growth through altered p21 expression and cell cycle arrest [41]. Our results showed that vorinostat-incorporated nanoparticles as well as vorinostat itself also affects in induction of apoptotic signals, suppression of mutant-type p53, up-regulation of p21 and disruption of actin in HuCC-T1 cells. Furthermore, vorinostat-incorporated nanoparticles were also higher efficacy than vorinostat itself at in vivo animal tumor xenograft study. Treatment with vorinostat or vorinostat-incorporated nanoparticles increased acetylation of histone H3 (Ac-Histone H3) and then decreased HDAC expression as shown in Fig. 6. Di Gennaro et al. also reported that subcutaneous injection of vorinostat at a dose of 100 mg/kg against SW620 colorectal cancer xenografts increases Ac-Histone H3 and showed synergized effect with capecitabine in the inhibition of tumor growth [42]. Furthermore, intravenous injection of vorinostat or vorinostat-incorporated nanoparticles is known to sensitize radiotherapy and then effectively inhibit growth of PC3 tumor xenograft in mice [43].
Hydrogels or nanocarriers are known to improve antitumor activity of vorinostat in vivo animal tumor xenograft model [44–46]. Vorinostat-NPs caused antitumor activity, apoptotic expression in TUNEL assay and inhibitory activity of HDAC expression compared with vorinostat itself, as shown in Figs. 5 and 6. Antitumor activity of drugs can be improved at in vivo circumstances by use of biodegradable polymers for controlled drug release [41]. Therefore, these results can be explained by the sustained release properties of vorinostat-NPs. Li et al. reported that biodegradable thermosensitive hydrogel enhances the therapeutic efficacy of vorinostat and significantly inhibited intratumoral angiogenesis [44]. Furthermore, Mohamed et al. reported that polymeric micelles significantly enhance half-lives in blood and bioavailability of vorinostat in rats by intravenous injection and oral administration [45]. Nanocarriers also increase the half-life of vorinostat in blood and improve the antitumor activity of vorinostat [46]. Gref et al. reported potential of long blood circulation of core–shell type nanospheres composed of PLGA-PEG block copolymer rather than plain nanoparticles [28]. Systemic approach of chemotherapeutic agents is known to have limited clinical benefit due to the difficulties of drug delivery to CCA tumor [20, 21, 47]. For this reason, alternative treatment regimen is required to deliver the anticancer drugs to CCA tumor. Therefore, we focused on the possibility of drug delivery to CCA tumor by local administration of vorinostat-incorporated nanoparticles. Practically, growth of tumors originated in HuCC-T1 cells in the back of the mice was effectively suppressed compared to vorinostat itself and empty nanoparticle treatment as shown in Fig. 5. To clarify vorinostat delivery to tumor tissues, NIR-dye was physically incorporated into the nanoparticles as similar to vorinostat and injected beside tumor tissues. As shown in Fig. 7, it is likely that nanoparticles were efficiently delivered to tumor tissues compared to free NIR-dye. Furthermore, NIR-NPs stayed longer in the tumor tissue than free NIR-dye. Practically, NIR-NPs were rapidly cleared from normal region but not in tumor region while free NIR-dye was rapidly cleared both in normal and tumor region. The reason of improved antitumor activity of vorinostat nanoparticles compared to vorinostat itself can be explained by these results. Other researcher also reported that nanoparticles can be stayed longer in the injection site and efficiently delivered to tumor tissues compared to free NIR-dye [48]. In other words, enhanced permeation and retention effect of macromolecules and nanomedicines in the tumor tissues also can be considered to explain these results [49, 50]. In our results, vorinostat-NPs showed higher anticancer activity than intact vorinostat and have higher efficacy in the drug delivery to tumor tissue. We suggest that vorinostat nanoparticles are promising candidate to treat CCA.
We prepared vorinostat-NPs using biodegradable block copolymer for anticancer therapy in HuCC-T1 CCA cells. Vorinostat-NPs have similar anticancer activities in terms of growth inhibition, apoptosis and inhibition of HDAC expression in vitro to that of vorinostat alone. However, vorinostat-NPs show improved antitumor activity in xenograft mice model and a higher inhibition rate of HDAC expression in vivo. The higher anticancer activity of vorinostat-NPs can be explained by NIR-NPs, i.e. NIR-NPs were remained in the tumor tissue longer than did free NIR dye. We suggest that vorinostat nanoparticles can be used as a promising vehicle for HDAC-targeted chemotherapy in CCA cells.
Vorinostat was purchased from LC Labs. Co. (Woburn, MA, USA). LGE copolymer (Resomer® RGP d 50105) was purchased from Boehringer Ingelheim Pharma GmbH & Co. (Ingelheim am Rhein, Germany). Pluronic F68, dimethyl sulfoxide (DMSO) and acetone were purchased from Sigma-Aldrich Chem. Co. (St. Louis, MO, USA). Dialysis membranes with molecular weight cutoffs of 8000 g/mol were purchased from Spectra/PorTM (Spectrum Laboratories Inc, Rancho Dominguez, CA, USA). RPMI1640 media, fetal bovine serum (FBS) and all cell culture components were purchased from Life Technologies (Grand Island, NY, USA). All reagents and organic solvents used were of extra-pure grade.
Fabrication of vorinostat-incorporated nanoparticles
One hundred milligrams of LGE were dissolved in 10 ml acetone. Ten and twenty mg of vorinostat was dissolved in 0.2 and 0.4 ml of DMSO, respectively. Then, vorinostat solution was mixed with LGE/acetone solution. The mixed solution was dropped in 20 ml of deionized water [Pluronic F68, 0.1 % (w/v)] for 10 min and then the organic solvent was evaporated under vacuum. The nanoparticle solution was recovered by ultra-centrifugation at 100,000×g (Supra 30 K, Vacuum High Speed Centrifuge, Hanil Science Industrial Co. Ltd., Incheon, Korea). Subsequently, harvested nanoparticles were washed with 10 ml of deionized water and then harvested again by ultra-centrifugation. The washing procedure was repeated three times. The resulting nanoparticles were reconstituted in deionized water or lyophilized. To measure vorinostat content in the nanoparticles, 5 mg of lyophilized nanoparticles were dissolved in DMSO. The drug content and loading efficiency of vorinostat in the vorinostat-NPs was evaluated using the Flexar high-performance liquid chromatography (HPLC) system (Perkin-Elmer Life and Analytical Sciences, Waltham, MA, USA).
Drug concentrations were determined using the HPLC system as follows: the Flexar HPLC system was equipped with a Solvent Manager 5-CH degasser, an autosampler, a quaternary LC pump, a column oven and an UV/VIS detector. Chromatography was performed on a guard column (SecurityGuard® Guard Cartridge Kit; Phenomenex, Torrance, CA, USA) and a C18 column (Brownlee C18®, 5 micrometer, 150 × 4.6; Perkin Elmer) at 37 °C. Vorinostat was eluted isocratically with mobile phase (acetonitrile/0.1 % formic acid at a ratio of 22/78) at a flow rate of 1 ml/min and monitored at 241 nm. Chromatograms were recorded and integrated with the Chromera 2.1 system software (Perkin Elmer Life and Analytical Sciences, Waltham, MA, USA).
$$ \begin{aligned} {\text{Drug content }} & = {\text{ [(Drug weight in the nanoparticles)}}/({\text{weight of nanoparticles}}) ]\times 100 \\ {\text{Loading efficiency }} & = {\text{ [(Residual drug in the nanoparticle)}}/({\text{initial feeding amount of drug}}) ]\times 100 \\ \end{aligned} $$
The morphology of the nanoparticles was observed using TEM (JEM-2000 FX II microscope, JEOL, Tokyo, Japan). The nanoparticle solution was dropped onto a carbon film coated on a copper grid and then the nanoparticles were negatively stained with phosphotungstic acid (0.05 % w/w). TEM observation was performed at an accelerating voltage of 80 kV. Particle size was measured using the Nano-ZS apparatus (Malvern Instruments, Malvern, UK). Nanoparticles were reconstituted in deionized water (nanoparticle concentration 0.1 mg/ml) and then used to determine particle size. The crystallinity of vorinostat and vorinostat-NPs were analyzed using XRD (Rigaku D/Max-1200, Rigaku, Tokyo, Japan) equipped with Ni-filtered Cu Ka radiation (40 kV, 20 mA). The vorinostat powder and lyophilized nanoparticle solid were used to measure crystallinity using XRD.
Drug release study
Drug release testing was performed using phosphate-buffered saline (PBS; 10 mM, pH 7.4) solution at 37 °C. Five milligrams of nanoparticles in 1 ml of deionized water were added to 4 ml of PBS and this solution was then introduced into a dialysis tube. This dialysis tube was immersed in a 100 ml bottle with 95 ml of PBS. Whole media were taken at predetermined time intervals and exchanged with fresh PBS. The concentration of the released drug was measured using the HPLC system. The percentage of released drug was calculated from following equation: [(amount of released drug/total weight of drug in the nanoparticles) × 100].
HuCC-T1 cell line was obtained from the Health Science Research Resources Bank (Osaka, Japan) and maintained with RPMI1640 medium supplemented with 10 % heat-inactivated FBS and 1 % penicillin/streptomycin at 37 °C in a humidified atmosphere containing 5 % CO2.
Cell cytotoxicity and growth inhibition study
HuCC-T1 cells were seeded in 24-well plates at a density of 3 × 104 and 3 × 105 cells per well for the growth inhibition and cytotoxicity assays, respectively. Following this, each plate was incubated overnight in a CO2 incubator. Vorinostat in DMSO and vorinostat-NPs were diluted with RPMI1640 medium containing 10 % FBS for the growth inhibition assay at various concentrations and then added to HuCC-T1 cells in 24-well plates following 24 h incubation. The cytotoxicity assay was carried out using serum-free RPMI1640 media. The control was treated with 0.1 % (v/v) DMSO. Cells were trypsinized, harvested and resuspended in PBS. Trypan blue was added and the number of cells was counted using the Countess™ Automated Cell Counter (Invitrogen, Carlsbad, CA, USA). The reduction of viable cells by treatment of vorinostat or vorinostat-incorporated nanoparticles compared to control treatment was calculated and expressed as mean ± SD.
Apoptosis and necrosis analysis
HuCC-T1 cells were seeded in 6-well plates at a density of 1 × 106 cells per well and exposed to various concentrations of vorinostat and vorinostat-NPs for 24 h. The cells were harvested, washed with PBS, resuspended in 500 μl binding buffer and stained with FITC-conjugated Annexin V for apoptosis analysis and with PI for necrosis analysis. These cells were analyzed by flow cytometry (BD biosciences, San Jose, CA, USA).
Western blot analysis and immunocytochemistry
HuCC-T1 cells were seeded in 6-well plates at a density of 1 × 106 cells per well and exposed to various concentrations of vorinostat and vorinostat-NPs for 24 h. Cells were trypsinized and washed with cold PBS. The cells were collected by centrifugation and lysed in lysis buffer containing protease inhibitors [50 mM Tris, 150 mM NaCl, 1 % NP-40, 0.5 % deoxycholic acid, 0.1 % sodium dodecyl sulfate (SDS)] with phenylmethylsulfonyl fluoride and a protease inhibitor cocktail (Roche Diagnostics, Basel, Switzerland). The cell suspension was cleared by centrifugation at 14,000×g for 30 min at 4 °C and then supernatant or cell lysates were collected. The protein concentration was determined using the BCA Protein Assay kit (Pierce, Rockford, IL, USA).
For western blotting, 50 μg protein was subjected to SDS-polyacrylamide gel electrophoresis (SDS-PAGE), transferred to a polyvinyl difluoride membrane, blocked with 5 % skim milk in TBS-T and probed with an appropriate primary antibody followed by a secondary HRP-conjugated antibody. Proteins were detected by chemiluminescence. Proteins were quantified by digital analyses.
Antitumor activity of vorinostat-incorporated nanoparticles against the animal tumor xenograft model
To assess the antitumor activity of vorinostat-NPs, a tumor xenograft model was prepared by subcutaneous injection of HuCC-T1 cells into the backs of nude mice. HuCC-T1 cells (1 × 107 cells) in a total volume of 100 μl were subcutaneously injected into the backs of male nude mice (5-week-old and 20–25 g in weight; Orient, Seongnam, South Korea). When the solid tumor reached approximately 4–5 mm in diameter, empty-NPs, vorinostat and vorinostat-NPs were injected subcutaneously adjacent to the solid tumor. Treatment dose was adjusted to 1 mg vorinostat (50 mg/kg). A total of 18 mice were divided into three groups, as follows: (1) vorinostat-injected, (2) empty-NPs injected and (3) vorinostat-NP injected. Body weight and tumor volume were measured twice a week, starting on the first day of treatment. Two perpendicular diameters of the tumor were measured and tumor volume was calculated using the formula V = (a × [b]2)/2, with a being the largest and b being the smallest diameter. The animal study was carried out according to the guidelines of the Animal Treatment and Research Council of Pusan National University.
After 30 days of injection, tumors were isolated and fixed in 4 % formamide, paraffin-embedded and sliced for hematoxylin and eosin (H&E) staining or for the TUNEL assay. For immunohistochemical staining of the tumors, acetyl histone H3 antibody was diluted to 1:500 and HDAC1 antibody was dilued to 1:100. HDAC2, HDAC3, and HDAC4/5/7 antibodies were diluted to 1:200. Staining was performed using an Envision kit (Life Technologies, Carlsbad, CA, USA) according to the manufacturer's protocol.
Fluorescence imaging of solid tumor-bearing mice
To study biodistribution of vorinostat-NPs, hydrophobic NIR-dye (XenoLight DiR, Caliper Lifesciences, MA 01748-1668, USA) was incorporated into the nanoparticles. Four milligrams of hydrophobic NIR-dye was dissolved in 0.2 ml of DMSO and mixed with 100 mg of LGE dissolved in acetone. This solution was dropped in 20 ml of deionized water [Pluronic F68, 0.1 % (w/v)] for 10 min and then organic solvent was evaporated under vacuum. After that, hydrophobic NIR-NPs were harvested by same procedure as described above. To measure content of hydrophobic NIR-dye in the nanoparticles, hydrophobic NIR-NPs were dissolved in 10 ml of DMSO and the concentration was measured with fluorescence spectrophotometer (RF-5301 fluorescence spectrofluorophotometer, Shimadzu, Tokyo, Japan). The content of hydrophobic near-infrared was calculated to 3.6 % (w/w).
For tumor imaging, NIR-NPs (50 mg/kg) were injected subcutaneously beside tumor tissue. To compare uptake of nanoparticles at tumor tissue and normal tissue, same quantity of NIR-NPs was also injected subcutaneously in the normal region. Mouse was observed using the Maestro 2™ In Vivo imaging system (Cambridge Research and Instruments, Inc., Woburn, MA 01801, USA) at 780 nm.
Quantification of image intensity
Quantification of staining intensities was calculated using the ImageJ (ver 1.42q) software (NIH, Bethesda, MD, USA).
Statistical analyses of the data from treated and untreated cells were performed using the Student's t test. A p value <0.05 was considered to be statistically significant.
vorinostat-NPs:
vorinostat-incorporated nanoparticles
HDAC:
histone deacetylase
HDACi:
histone deacetylase inhibitor
GI:
CCA:
LGE:
poly(dl-lactide-co-glycolide)-b-poly(ethylene glycol)
XRD:
X-ray powder diffractograms
PARP:
poly-ADP ribose polymerase
SDS:
sodium dodecyl sulfate
TUNEL:
terminal deoxynucleotidyl transferase dUDP nick-end-labeling
NIR:
near-infrared
NIR-NPs:
NIR-dye conjugated nanoparticles
empty-NPs:
empty nanoparticles
Richon VM. Cancer biology: mechanism of antitumour action of vorinostat (suberoylanilidehydroxamic acid), a novel histone deacetylase inhibitor. Br J Cancer. 2006;95:S2–6.
Takada Y, Gillenwater A, Ichikawa H, Aggarwal BB. Suberoylanilidehydroxamic acid potentiates apoptosis, inhibits invasion, and abolishes osteoclastogenesis by suppressing nuclear factor-kappaB activation. J Biol Chem. 2006;281:5612–22.
Doi T, Hamaguchi T, Shirao K, Chin K, Hatake K, Noguchi K, et al. Evaluation of safety, pharmacokinetics, and efficacy of vorinostat, a histone deacetylase inhibitor, in the treatment of gastrointestinal (GI) cancer in a phase I clinical trial. Int J Clin Oncol. 2013;18:87–95.
Claerhout S, Lim JY, Choi W, Park YY, Kim K, Kim SB, et al. Gene expression signature analysis identifies vorinostat as a candidate therapy for gastric cancer. PLoS One. 2011;6:e24662.
Jin JS, Tsao TY, Sun PC, Yu CP, Tzao C. SAHA inhibits the growth of colon tumors by decreasing histone deacetylase and the expression of cyclin D1 and survivin. Pathol Oncol Res. 2012;18:713–20.
Chinnaiyan P, Chowdhary S, Potthast L, Prabhu A, Tsai YY, Sarcar B, et al. Phase I trial of vorinostat combined with bevacizumab and CPT-11 in recurrent glioblastoma. Neuro Oncol. 2012;14:93–100.
Ramaswamy B, Fiskus W, Cohen B, Pellegrino C, Hershman DL, Chuang E, et al. Phase I–II study of vorinostat plus paclitaxel and bevacizumab in metastatic breast cancer: evidence for vorinostat-induced tubulin acetylation and Hsp90 inhibition in vivo. Breast Cancer Res Treat. 2012;132:1063–72.
Liu YL, Yang PM, Shun CT, Wu MS, Weng JR, Chen CC. Autophagy potentiates the anti-cancer effects of the histone deacetylase inhibitors in hepatocellular carcinoma. Autophagy. 2010;6:1057–65.
Kwak TW, Kim do H, Chung CW, Lee HM, Kim CH, Jeong YI, et al. Synergistic anticancer effects of vorinostat and epigallocatechin-3-gallate against HuCC-T1 human cholangiocarcinoma cells. Evid Based Complement Altern Med. 2013;2013:185158.
Lim JH. Cholangiocarcinoma: morphologic classification according to growth pattern and imaging findings. Am J Roentgenol. 2003;181:819–27.
Khan SA, Taylor-Robinson SD, Toledano MB, Beck A, Elliott P, Thomas HC. Changing international trends in mortality rates for liver, biliary and pancreatic tumours. J Hepatol. 2002;37:806–13.
Sirica AE. Cholangiocarcinoma: molecular targeting strategies for chemoprevention and therapy. Hepatology. 2005;41:5–15.
Ciombor KK, Goff LW. Advances in the management of biliary tract cancers. Clin Adv Hematol Oncol. 2013;11:28–34.
Brunner TB, Eccles CL. Radiotherapy and chemotherapy as therapeutic strategies in extrahepatic biliary duct carcinoma. Strahlenther Onkol. 2010;186:672–80.
Skipworth JR, Olde Damink SW, Imber C, Bridgewater J, Pereira SP, Malagó M. Surgical, neo-adjuvant and adjuvant management strategies in biliary tract cancer. Aliment Pharmacol Ther. 2011;34:1063–78.
Rizvi S, Gores GJ. Pathogenesis, diagnosis and management of cholangiocarcinoma. Gastroenterology. 2013;145:1215–29.
Cereda S, Passoni P, Reni M, Viganò MG, Aldrighetti L, Nicoletti R, et al. The cisplatin, epirubicin, 5-fluorouracil, gemcitabine (PEFG) regimen in advanced biliary tract adenocarcinoma. Cancer. 2010;116:2208–14.
Kim ST, Park JO, Lee J, Lee KT, Lee JK, Choi SH, et al. A phase II study of gemcitabine and cisplatin in advanced biliary tract cancer. Cancer. 2006;106:1339–46.
Valle J, Wasan H, Palmer DH, Cunningham D, Anthoney A, Maraveyas A, et al. Cisplatin plus gemcitabine versus gemcitabine for biliary tract cancer. N Engl J Med. 2010;362:1273–81.
Leong E, Chen WW, Ng E, Van Hazel G, Mitchell A, Spry N. Outcomes from combined chemoradiotherapy in unresectable and locally advanced resected cholangiocarcinoma. J Gastrointest Cancer. 2012;43:50–5.
Furuse J, Okusaka T. Targeted therapy for biliary tract cancer. Cancers (Basel). 2011;3:2243–54.
Braga HJ, Imam K, Bluemke DA. MR imaging of intrahepatic cholangiocarcinoma: use of ferumoxides for lesion localization and extension. Am J Roentgenol. 2011;177:111–4.
Lee Y, Lee JS, Kim CM, Jeong JY, Choi JI, Kim MJ. Area of paradoxical signal drop after the administration of superparamagnetic iron oxide on the T2-weighted image of a patient with lymphangitic metastasis of the liver. Magn Reson Imaging. 2008;26:577–82.
Tang T, Zheng JW, Chen B, Li H, Li X, Xue KY, et al. Effects of targeting magnetic drug nanoparticles on human cholangiocarcinoma xenografts in nude mice. Hepatobiliary Pancreat Dis Int. 2007;6:303–7.
Towata T, Komizu Y, Kariya R, Suzu S, Matsumoto Y, Kobayashi N, et al. Hybrid liposomes inhibit the growth of cholangiocarcinoma by induction of cell cycle arrest in G1 phase. Bioorg Med Chem Lett. 2010;20:3680–2.
Chung KD, Jeong YI, Chung CW, Kim do H, Kang DH. Anti-tumor activity of all-trans retinoic acid-incorporated glycol chitosan nanoparticles against HuCC-T1 human cholangiocarcinoma cells. Int J Pharm. 2012;422:454–61.
Hwang JH, Choi CW, Kim do HW, Kim H, Kwak TW, Lee HM, et al. Dextran-b-poly(l-histidine) copolymer nanoparticles for pH-responsive drug delivery to tumor cells. Int J Nanomedicine. 2013;8:3197–207.
Gref R, Minamitake Y, Peracchia MT, Trubetskoy V, Torchilin V, Langer R. Biodegradable long-circulating polymeric nanospheres. Science. 1994;263:1600–3.
Oh JK. Polylactide (PLA)-based amphiphilic block copolymers: synthesis, self-assembly and biomedical applications. Soft Matter. 2011;7:5096–108.
Morine Y, Shimada M, Iwahashi S, Utsunomiya T, Imura S, Ikemoto T, et al. Role of histone deacetylase expression in intrahepatic cholangiocarcinoma. Surgery. 2012;151:412–9.
Sun WJ, Zhou X, Zheng JH, Lu MD, Nie JY, Yang XJ, et al. Histone acetyltransferases and deacetylases: molecular and clinical implications to gastrointestinal carcinogenesis. Acta Biochim Biophys Sin (Shanghai). 2012;44:80–91.
Boonjaraspinyo S, Boonmars T, Kaewkes S, Laummaunwai P, Pinlaor S, Loilome W, et al. Down-regulated expression of HSP70 in correlation with clinicopathology of cholangiocarcinoma. Pathol Oncol Res. 2012;18:227–37.
Weichert W, Röske A, Gekeler V, Beckers T, Ebert MP, Pross M, et al. Association of patterns of class I histone deacetylase expression with patient prognosis in gastric cancer: a retrospective analysis. Lancet Oncol. 2008;9:139–48.
Weichert W. HDAC expression and clinical prognosis in human malignancies. Cancer Lett. 2009;280:168–76.
Fang JY. Histone deacetylase inhibitors, anticancerous mechanism and therapy for gastrointestinal cancers. J Gastroenterol Hepatol. 2005;20:988–94.
Xu WS, Parmigiani RB, Marks PA. Histone deacetylase inhibitors: molecular mechanisms of action. Oncogene. 2007;26:5541–52.
Vigushin DM, Coombes RC. Histone deacetylase inhibitors in cancer treatment. Anticancer Drugs. 2012;13:1–13.
Yan W, Liu S, Xu E, Zhang J, Zhang Y, Chen X, et al. Histone deacetylase inhibitors suppress mutant p53 transcription via histone deacetylase 8. Oncogene. 2013;32:599–609.
Li D, Marchenko ND, Moll UM. SAHA shows preferential cytotoxicity in mutant p53 cancer cells by destabilizing mutant p53 through inhibition of the HDAC6-Hsp90 chaperone axis. Cell Death Differ. 2011;18:1904–13.
Chen YX, Fang JY, Zhu HY, Lu R, Cheng ZH, Qiu DK. Histone acetylation regulates p21WAF1 expression in human colon cancer cell lines. World J Gastroenterol. 2004;10:2643–6.
Said TK, Moraes RCB, Sinha R, Medina D. Mechanisms of suberoylanilide hydroxamic acid inhibition of mammary cell growth. Breast Cancer Res. 2001;3:122–33.
Di Gennaro E, Piro G, Chianese MI, Franco R, Di Cintio A, Moccia T, Luciano A, de Ruggiero I, Bruzzese F, Avallone A, Arra C, Budillon A. Vorinostat synergises with capecitabine through upregulation of thymidine phosphorylase. Br J Cancer. 2010;103:1680–91.
Wang EC, Min Y, Palm RC, Fiordalisi JJ, Wagner KT, Hyder N, Cox AD, Caster JM, Tian X, Wang AZ. Nanoparticle formulations of histone deacetylase inhibitors for effective chemoradiotherapy in solid tumors. Biomaterials. 2015;51:208–15.
Li J, Gong C, Feng X, Zhou X, Xu X, Xie L, et al. Biodegradable thermosensitive hydrogel for SAHA and DDP delivery: therapeutic effects on oral squamous cell carcinoma xenografts. PLoS One. 2012;7:e33860.
Mohamed EA, Zhao Y, Meshali MM, Remsberg CM, Borg TM, Foda AM, et al. Vorinostat with sustained exposure and high solubility in poly(ethylene glycol)-b-poly(dl-lactic acid) micelle nanocarriers: characterization and effects on pharmacokinetics in rat serum and urine. J Pharm Sci. 2012;101:3787–98.
Kim JY, Shim G, Choi HW, Park J, Chung SW, Kim S, et al. Tumor vasculature targeting following co-delivery of heparin-taurocholate conjugate and suberoylanilide hydroxamic acid using cationic nanolipoplex. Biomaterials. 2012;33:4424–30.
Thomas MB. Systemic and targeted therapy for biliary tract tumors and primary liver tumors. Surg Oncol Clin N Am. 2014;23:369–81.
Park W, Park SJ, Na K. The controlled photoactivity of nanoparticles derived from ionic interactions between a water soluble polymeric photosensitizer and polysaccharide quencher. Biomaterials. 2011;32:8261–70.
Maeda H, Wu J, Sawa T, Matsumura Y, Hori K. Tumor vascular permeability and the EPR effect in macromolecular therapeutics: a review. J Control Release. 2000;65:271–84.
Iyer AK, Khaled G, Fang J, Maeda H. Exploiting the enhanced permeability and retention effect for tumor targeting. Drug Discov Today. 2006;11:812–8.
TWK carried out fabrication of vorinostat-loaded nanoparticles and cell culture study. DHK carried out animal study using mouse tumor xenograft model. YIJ conceived of this research and participated in its design. DHK drafted a manuscript and organized all the procedures. All authors read and approved the final manuscript.
This study was supported by a grant of the Korean Health Technology R&D Project, Ministry of Health and Welfare, Republic of Korea (Project No. HI14C2220).
Compliance with ethical guidelines
Competing interests The authors declare that they have no competing interests.
Biomedical Research Institute, Pusan National University Hospital, 179 Gudeok-ro, Seo-gu, Busan, 602-739, Republic of Korea
Tae Won Kwak
, Young-Il Jeong
& Dae Hwan Kang
School of Medicine, Pusan National University, Yangsan, Gyeongnam, 626-770, Republic of Korea
Do Hyung Kim
Department of Internal Medicine, Pusan National University Yangsan Hospital, Yangsan, Gyeongnam, 626-770, Republic of Korea
Dae Hwan Kang
Search for Tae Won Kwak in:
Search for Do Hyung Kim in:
Search for Young-Il Jeong in:
Search for Dae Hwan Kang in:
Correspondence to Young-Il Jeong or Dae Hwan Kang.
Tae Won Kwak and Do Hyung Kim equally contributed to this work
Kwak, T.W., Kim, D.H., Jeong, Y. et al. Antitumor activity of vorinostat-incorporated nanoparticles against human cholangiocarcinoma cells. J Nanobiotechnol 13, 60 (2015). https://doi.org/10.1186/s12951-015-0122-4
Received: 06 April 2015
Poly(dl-lactide-co-glycolide)
Block copolymer
Cancer chemotherapy
Drug targeting | CommonCrawl |
Microscopic derivation of Ginzburg-Landau theories for hierarchical quantum Hall states
Yoran Tournois, Maria Hermanns, Thors Hans Hansson
We propose a Ginzburg-Landau theory for a large and important part of the abelian quantum Hall hierarchy, including the prominently observed Jain sequences. By a generalized "flux attachment" construction we extend the Ginzburg-Landau-Chern-Simons composite boson theory to states obtained by both quasielectron and quasihole condensation, and express the corresponding wave functions as correlators in conformal field theories. This yields a precise identification of the relativistic scalar fields entering these correlators in terms of the original electron field.
Dimer description of the SU(4) antiferromagnet on the triangular lattice
Anna Keselman, Lucile Savary, Leon Balents
In systems with many local degrees of freedom, high-symmetry points in the phase diagram can provide an important starting point for the investigation of their properties throughout the phase diagram. In systems with both spin and orbital (or valley) degrees of freedom such a starting point gives rise to SU(4)-symmetric models. Here we consider SU(4)-symmetric "spin" models, corresponding to Mott phases at half-filling, i.e. the six-dimensional representation of SU(4). This may be relevant to twisted multilayer graphene. In particular, we study the SU(4) antiferromagnetic "Heisenberg" model on the triangular lattice, both in the classical limit and in the quantum regime. Carrying out a numerical study using the density matrix renormalization group (DMRG), we argue that the ground state is non-magnetic. We then derive a dimer expansion of the SU(4) spin model. An exact diagonalization (ED) study of the effective dimer model suggests that the ground state breaks translation invariance, forming a valence bond solid (VBS) with a 12-site unit cell. Finally, we consider the effect of SU(4)-symmetry breaking interactions due to Hund's coupling, and argue for a possible phase transition between a VBS and a magnetically ordered state.
Anomalies, a mod 2 index, and dynamics of 2d adjoint QCD
Aleksey Cherman, Theodore Jacobson, Yuya Tanizaki, Mithat Ünsal
SciPost Phys. 8, 072 (2020) · published 5 May 2020 |
We show that $2$d adjoint QCD, an $SU(N)$ gauge theory with one massless adjoint Majorana fermion, has a variety of mixed 't Hooft anomalies. The anomalies are derived using a recent mod $2$ index theorem and its generalization that incorporates 't Hooft flux. Anomaly matching and dynamical considerations are used to determine the ground-state structure of the theory. The anomalies, which are present for most values of $N$, are matched by spontaneous chiral symmetry breaking. We find that massless $2$d adjoint QCD confines for $N >2$, except for test charges of $N$-ality $N/2$, which are deconfined. In other words, $\mathbb Z_N$ center symmetry is unbroken for odd $N$ and spontaneously broken to $\mathbb Z_{N/2}$ for even $N$. All of these results are confirmed by explicit calculations on small $\mathbb{R}\times S^1$. We also show that this non-supersymmetric theory exhibits exact Bose-Fermi degeneracies for all states, including the vacua, when $N$ is even. Furthermore, for most values of $N$, $2$d massive adjoint QCD describes a non-trivial symmetry-protected topological (SPT) phase of matter, including certain cases where the number of interacting Majorana fermions is a multiple of $8$. As a result, it fits into the classification of $(1+1)$d SPT phases of interacting Majorana fermions in an interesting way.
Multifractality and its role in anomalous transport in the disordered XXZ spin-chain
David J. Luitz, Ivan M. Khaymovich, Yevgeny Bar Lev
SciPost Phys. Core 2, 006 (2020) · published 30 April 2020 |
The disordered XXZ model is a prototype model of the many-body localization transition (MBL). Despite numerous studies of this model, the available numerical evidence of multifractality of its eigenstates is not very conclusive due severe finite size effects. Moreover it is not clear if similarly to the case of single-particle physics, multifractal properties of the many-body eigenstates are related to anomalous transport, which is observed in this model. In this work, using a state-of-the-art, massively parallel, numerically exact method, we study systems of up to 24 spins and show that a large fraction of the delocalized phase flows towards ergodicity in the thermodynamic limit, while a region immediately preceding the MBL transition appears to be multifractal in this limit. We discuss the implication of our finding on the mechanism of subdiffusive transport.
The negativity contour: a quasi-local measure of entanglement for mixed states
Jonah Kudler-Flam, Hassan Shapourian, Shinsei Ryu
SciPost Phys. 8, 063 (2020) · published 20 April 2020 |
In this paper, we study the entanglement structure of mixed states in quantum many-body systems using the $\textit{negativity contour}$, a local measure of entanglement that determines which real-space degrees of freedom in a subregion are contributing to the logarithmic negativity and with what magnitude. We construct an explicit contour function for Gaussian states using the fermionic partial-transpose. We generalize this contour function to generic many-body systems using a natural combination of derivatives of the logarithmic negativity. Though the latter negativity contour function is not strictly positive for all quantum systems, it is simple to compute and produces reasonable and interesting results. In particular, it rigorously satisfies the positivity condition for all holographic states and those obeying the quasi-particle picture. We apply this formalism to quantum field theories with a Fermi surface, contrasting the entanglement structure of Fermi liquids and holographic (hyperscale violating) non-Fermi liquids. The analysis of non-Fermi liquids show anomalous temperature dependence of the negativity depending on the dynamical critical exponent. We further compute the negativity contour following a quantum quench and discuss how this may clarify certain aspects of thermalization.
Anomaly matching in the symmetry broken phase: Domain walls, CPT, and the Smith isomorphism
Itamar Hason, Zohar Komargodski, Ryan Thorngren
Symmetries in Quantum Field Theory may have 't Hooft anomalies. If the symmetry is unbroken in the vacuum, the anomaly implies a nontrivial low-energy limit, such as gapless modes or a topological field theory. If the symmetry is spontaneously broken, for the continuous case, the anomaly implies low-energy theorems about certain couplings of the Goldstone modes. Here we study the case of spontaneously broken discrete symmetries, such as Z/2 and T. Symmetry breaking leads to domain walls, and the physics of the domain walls is constrained by the anomaly. We investigate how the physics of the domain walls leads to a matching of the original discrete anomaly. We analyze the symmetry structure on the domain wall, which requires a careful analysis of some properties of the unbreakable CPT symmetry. We demonstrate the general results on some examples and we explain in detail the mod 4 periodic structure that arises in the Z/2 and T case. This gives a physical interpretation for the Smith isomorphism, which we also extend to more general abelian groups. We show that via symmetry breaking and the analysis of the physics on the wall, the computations of certain discrete anomalies are greatly simplified. Using these results we perform new consistency checks on the infrared phases of 2+1 dimensional QCD.
Hall anomaly and moving vortex charge in layered superconductors
Assa Auerbach, Daniel P. Arovas
Magnetotransport theory of layered superconductors in the flux flow steady state is revisited. Longstanding controversies concerning observed Hall sign reversals are resolved. The conductivity separates into a Bardeen-Stephen vortex core contribution, and a Hall conductivity due to moving vortex charge. This charge, which is responsible for Hall anomaly, diverges logarithmically at weak magnetic field. Its values can be extracted from magetoresistivity data by extrapolation of vortex core Hall angle from the normal phase. Hall anomalies in YBCO, BSCCO, and NCCO data are consistent with theoretical estimates based on doping dependence of London penetration depths. In the appendices, we derive the Streda formula for the hydrodynamical Hall conductivity, and refute previously assumed relevance of Galilean symmetry to Hall anomalies.
Molecular dynamics simulation of entanglement spreading in generalized hydrodynamics
Márton Mestyán, Vincenzo Alba
SciPost Phys. 8, 055 (2020) · published 9 April 2020 |
We consider a molecular dynamics method, the so-called flea gas for computing the evolution of entanglement after inhomogeneous quantum quenches in an integrable quantum system. In such systems the evolution of local observables is described at large space-time scales by the Generalized Hydrodynamics approach, which is based on the presence of stable, ballistically propagating quasiparticles. Recently it was shown that the GHD approach can be joined with the quasiparticle picture of entanglement evolution, providing results for entanglement growth after inhomogeneous quenches. Here we apply the flea gas simulation of GHD to obtain numerical results for entanglement growth. We implement the flea gas dynamics for the gapped anisotropic Heisenberg XXZ spin chain, considering quenches from globally homogeneous and piecewise homogeneous initial states. While the flea gas method applied to the XXZ chain is not exact even in the scaling limit (in contrast to the Lieb--Liniger model), it yields a very good approximation of analytical results for entanglement growth in the cases considered. Furthermore, we obtain the {\it full-time} dynamics of the mutual information after quenches from inhomogeneous settings, for which no analytical results are available.
Topological thermal Hall effect for topological excitations in spin liquid: Emergent Lorentz force on the spinons
Yong Hao Gao, Gang Chen
SciPost Phys. Core 2, 004 (2020) · published 8 April 2020 |
We study the origin of Lorentz force on the spinons in a U(1) spin liquid. We are inspired by the previous observation of gauge field correlation in the pairwise spin correlation using the neutron scattering measurement when the Dzyaloshinskii-Moriya interaction intertwines with the lattice geometry. We extend this observation to the Lorentz force that exerts on the (neutral) spinons. The external magnetic field, that polarizes the spins, effectively generates an internal U(1) gauge flux for the spinons and twists the spinon motion through the Dzyaloshinskii-Moriya interaction. Such a mechanism for the emergent Lorentz force differs fundamentally from the induction of the internal U(1) gauge flux in the weak Mott insulating regime from the charge fluctuations. We apply this understanding to the specific case of spinon metals on the kagome lattice. Our suggestion of emergent Lorentz force generation and the resulting topological thermal Hall effect may apply broadly to other non-centrosymmetric spin liquids with Dzyaloshinskii-Moriya interaction. We discuss the relevance with the thermal Hall transport in kagome materials volborthite and kapellasite.
Zero temperature momentum distribution of an impurity in a polaron state of one-dimensional Fermi and Tonks-Girardeau gases
Oleksandr Gamayun, Oleg Lychkovskiy, Mikhail B. Zvonarev
We investigate the momentum distribution function of a single distinguishable impurity particle which formed a polaron state in a gas of either free fermions or Tonks-Girardeau bosons in one spatial dimension. We obtain a Fredholm determinant representation of the distribution function for the Bethe ansatz solvable model of an impurity-gas δ-function interaction potential at zero temperature, in both repulsive and attractive regimes. We deduce from this representation the fourth power decay at a large momentum, and a weakly divergent (quasi-condensate) peak at a finite momentum. We also demonstrate that the momentum distribution function in the limiting case of infinitely strong interaction can be expressed through a correlation function of the one-dimensional impenetrable anyons.
Previous 1 2 3 4 5 6 7 8 9 10 11 12 ... 31 Next | CommonCrawl |
Tangent Ratios (proofs and equivalences)
Complementary results
Reciprocal identities
Pythagorean Identities
Applications of Pythagorean identities
Applications of reciprocal and Pythagorean identities
Simplify expressions with reciprocals using complementary results
Evaluate trig expressions using angle sum and difference identities
Expand expressions using angle sum and difference identities
Applications of angle sum and difference identities
Evaluate trig expressions using double and half angle identities
Expand expressions using double and half angle identities
Apply double and half angle identities
Sums and differences as products
Apply sum and difference and double angle identities
Solve equations using trigonometric identities
Prove and apply other trigonometric identities
By using the expansion of $\cos\left(A+B\right)$cos(A+B), verify $\cos2x=2\cos^2\left(x\right)-1$cos2x=2cos2(x)−1.
Approx 5 minutes
By using the expansion of $\sin\left(A+B\right)$sin(A+B), verify that $\sin2x=2\sin x\cos x$sin2x=2sinxcosx.
By simplifying the left hand side (LHS) of the identity, verify that $\sin\left(x+y\right)-\sin\left(x-y\right)=2\cos x\sin y$sin(x+y)−sin(x−y)=2cosxsiny.
By simplifying the left hand side of the identity, verify that $\frac{\sin\left(x-y\right)}{\cos x\cos y}=\tan x-\tan y$sin(x−y)cosxcosy=tanx−tany
M8-6
Manipulate trigonometric expressions
Apply trigonometric methods in solving problems | CommonCrawl |
Black hole Creation
I just had an idea not sure if this would work but if we dropped something into very deep water like the marina trench could the pressure crush it small enough to create a black hole?
black-hole
NoahNoah
$\begingroup$ Welcome to the astronomy SE btw. Earn another badge by taking the tour! astronomy.stackexchange.com/tour $\endgroup$ – El Bromista Sep 21 '16 at 20:21
$\begingroup$ Why stop at bottom of marina trench, why not all the way to the center of the Earth. $\endgroup$ – Knu8 Sep 22 '16 at 10:01
$\begingroup$ There is already stuff down there like water and likely dead flora and fauna. Those aren't being crushed into black holes are they? $\endgroup$ – zephyr Sep 22 '16 at 15:28
There already is something at the bottom of Mariana Trench. Rocks and stuff. Luckily they haven't been turned into black holes. So empirically, we can say that the answer is no. If it had, it would have attracted its surroundings, and eventually the rest of Earth would fall into it.
To create a black hole, you need to compress a certain mass to within a certain radius. But solids and liquids are really hard to compress. Although the pressure at the bottom of the Mariana Trench is over 1000 atmospheres, water is compressed only by 5%. And solids like rocks and metals are virtually incompressible$^\dagger$.
Even at the center of the Sun, where the pressure is 250 billion atm, the density is only 150 times that of water (under 1 atm). That is, 1 kg of water placed at the center of the Sun would fill not one liter, but $0.7\,\mathrm{cl}$. But to turn a mass $M$ into a black hole, you'd have to compress it to within it so-called Schwarzschild radius, which is given by $ r_\mathrm{S} = 2GM/c^2, $ where $G$ and $c$ are constants. For $M = 1\,\mathrm{kg}$, this equates to $10^{-25}\,\mathrm{cm}$, much smaller than the radius of an atom.
Such conditions arise only in extreme events, such as the core of a dying star collapsing, and even then only the most massive stars. So unfortunately, you won't be able to create a black hole.
$^\dagger$Solids are not entirely incompressible, since otherwise sounds wouldn't be able to propagate through them.
pelapela
No, for a black hole to form the core of a star much larger than our sun must collapse into itself, forming a singularity. This happens after a star has expended it's fuel no longer being able to offset the inward force of gravity. The result is a rapid release of its outer layers in a supernova. The resulting core then collapses into the singularity. The mass is so great that this singularity is an infinite bend in space time.
Of note, if we took your object and were able to squeeze it down so small that it's radius reached the Schwarzschild radius, it could become a black hole. The Mariana Trench could not provide enough pressure to this.
There exists little fish and organisms near the bottom of the Mariana Trench. Dropping something like a penny would just cause it be distorted by the pressure above it.
El BromistaEl Bromista
$\begingroup$ While we are mostly used to black holes forming from stars, that's not a criterium for something to be a black hole. The LHC at CERN is potentially creating black holes, and they are definitely not dealing with core collapse stars over there. Anything can become a black hole, if it becomes dense enough! $\endgroup$ – nataliaeire Sep 21 '16 at 21:37
$\begingroup$ @nataliaeire aye, but I didn't want to confuse this person. I didn't want the follow up of, "Yeah it would crush it so small that a black hole would form." Although, my answer is isn't as complete as pela's. However, I'll try and edit my answer to be more complete. $\endgroup$ – El Bromista Sep 21 '16 at 21:43
$\begingroup$ @nataliaeire: Under the standard model, you'd need 1e17 times the energy of the LHC to create BHs from colliding particles. If string theory is correct, you could perhaps create one, but it would evaporate immediately. Even if Hawking radiation is a lie, it would grow too slowly to engulf Earth in billions of years (if there is ≥7 dimensions) or have observable effects in the Universe (if there are 5-6 dimensions). This is not my field of expertise, though, but Earth is hit ~1e5 times per second by cosmic particles with energies ≥ the LCH energy and we're still here, so I think it doesn't work $\endgroup$ – pela Sep 22 '16 at 10:44
$\begingroup$ @pela Yes, though, I don't see why you would bring up engulfing the Earth, seeing how that, too, is not a criterium for being a black hole. And that is my point; that to get a black hole you "simply" need to compress something within a certain radius for any given mass. In other words, I thought your answer was more complete. $\endgroup$ – nataliaeire Sep 22 '16 at 21:51
$\begingroup$ @nataliaeire: I apologize, I put more into your comment than you deserved. It's just that a couple of years ago, there was quite a lot of criticism from some people who thought that the LHC were able to create BHs, that would then engulf Earth. Your point is absolutely valid. $\endgroup$ – pela Sep 23 '16 at 9:21
Theoretically any object with a mass great enough that its own gravity is able to overcome its molecular movement and provided there is no forces to counteract that force such as atomic fission or fusion can in deed implode to a singularity, however the pressures from the sea are far to weak even at the center of the earth to initiate such a reaction it takes a mass of over 3 times that of our sun to even come close. On the other hand it may be possible hypothetically to compress an Eisenstein Condensate kept at temperatures near 0K (say 1k) at extreme pressure might be able to form a mini hole. But the heat caused by that pressure would disrupt the process and even if successful it would evaporate in pico seconds
JMCJMC
Not the answer you're looking for? Browse other questions tagged black-hole or ask your own question.
What is a singularity? What is at the center of a black hole? Specifically regarding space-time
Entropy of black hole
How small would you have to crush an object for it to become a black hole?
Whats the deal with black holes and "no information from inside the event horizon can leave"?
Can a star eat a black hole?
M87 Black hole. Why can we see the blackness?
Could anything consume a small black hole?
Control the behavior of a Black Hole | CommonCrawl |
Home > JSAP > Vol. 11 (2022) > Iss. 2
Journal of Statistics Applications & Probability
Subset Selection of ε- Better Exponential Populations under Heteroscedasticity
Anju Goyal, Department of Statistics, Panjab University, Chandigarh, IndiaFollow
A. N. Gill, School of Basic Sciences, IIIT, Una, H.P., IndiaFollow
Vishal Maurya, Department of Statistics and Information Management, RBI, Mumbai, IndiaFollow
Suppose we have $k( \geq 2)$independent processes/populations/sources/ treatments such that the data from $i^{th}$ treatment follow two-parameter exponential distribution with location parameter $\mu_i$ and scale parameter $\theta_i$, denoted $E(\mu_i,\theta_i )$, $i=1,\dots, k$. The location parameters $\mu_1, \dots, \mu_k$ and scale parameters $\theta_1, \dots, \theta_k$ are unknown and possibly unequal. Let $\underline{\delta}=(\mu_1 , \dots ,\mu_k, \theta_1, \dots \theta_k ) \in R^k \times R_+^k =\Omega $ and $\mu_{[k]} =max_{1\leq i \leq k}\mu_{i}$. For a given $\epsilon_1>0$, we define a set of good populations as $G=\{i:\mu_i \geq \mu_{[k]}-\epsilon_1 \}$. In this paper two-stage and one-stage subset selection procedures have been proposed to select a subset, say $S$, of $k$ populations which contains $G$ with a pre-specified probability $ P^*$, i.e., $P_{\underline{\delta}}= (G\subseteq S|$under the proposed procedure)$ \geq P^* \forall \underline{\delta} \in \Omega)$. The related simultaneous confidence intervals for $\mu_{[k]} -\mu_i, i=1,…,k $ and $\mu_{[j]} -\mu_{[i]}, i \neq j=1,…,k,$ have been derived. A subset selection procedure is also proposed which controls the probability of omitting a "good" treatment or selecting a "bad" treatment at $1-P^*$ by considering a set $B=\{i:\mu_i\leq \mu_{[k]}-\epsilon_2 \}$ of bad treatments, where $\epsilon_2>\epsilon_1$.The implementation of proposed procedure is demonstrated through a real life data.
http://dx.doi.org/10.18576/jsap/110201
Goyal, Anju; N. Gill, A.; and Maurya, Vishal (2022) "Subset Selection of ε- Better Exponential Populations under Heteroscedasticity," Journal of Statistics Applications & Probability: Vol. 11: Iss. 2, Article 1.
DOI: http://dx.doi.org/10.18576/jsap/110201
Available at: https://digitalcommons.aaru.edu.jo/jsap/vol11/iss2/1
All Issues Vol. 12, Iss. 2 Vol. 12, Iss. 1 Vol. 11, Iss. 3 Vol. 11, Iss. 2 Vol. 11, Iss. 1 Vol. 10, Iss. 3 Vol. 10, Iss. 2 Vol. 10, Iss. 1 Vol. 9, Iss. 3 Vol. 9, Iss. 2 Vol. 9, Iss. 1 Vol. 8, Iss. 3 Vol. 8, Iss. 2 Vol. 8, Iss. 1 Vol. 7, Iss. 3 Vol. 7, Iss. 2 Vol. 7, Iss. 1 Vol. 6, Iss. 3 Vol. 6, Iss. 2 Vol. 6, Iss. 1 Vol. 5, Iss. 3 Vol. 5, Iss. 2 Vol. 5, Iss. 1 Vol. 4, Iss. 3 Vol. 4, Iss. 2 Vol. 4, Iss. 1 Vol. 3, Iss. 3 Vol. 3, Iss. 2 Vol. 3, Iss. 1 Vol. 2, Iss. 3 Vol. 2, Iss. 2 Vol. 2, Iss. 1 Vol. 1, Iss. 3 Vol. 1, Iss. 2 Vol. 1, Iss. 1 | CommonCrawl |
ScheduleWorkshop Videos
Workshop Files
Final Report (PDF) Testimonials
Schedule for: 19w5131 - Representation Theory Connections to (q,t)-Combinatorics
Arriving in Banff, Alberta on Sunday, January 20 and departing Friday January 25, 2019
16:00 - 17:30 Check-in begins at 16:00 on Sunday and is open 24 hours (Front Desk - Professional Development Centre)
17:30 - 19:30 Dinner ↓
A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
20:00 - 22:00 Informal gathering (Corbett Hall Lounge (CH 2110))
07:00 - 08:45 Breakfast ↓
Breakfast is served daily between 7 and 9am in the Vistas Dining Room, the top floor of the Sally Borden Building.
08:45 - 09:00 Introduction and Welcome by BIRS Staff ↓
A brief introduction to BIRS with important logistical information, technology instruction, and opportunity for participants to ask questions.
(TCPL 201)
09:00 - 10:00 François Bergeron: Multivariate modules for (m,n)-rectangular combinatorics I ↓
I will first describe explicit (GL_k x S_n)-modules, in k sets on n variables, whose graded Frobenius correspond (conjecturally) to the symmetric functions that occur in the rectangular shuffle theorem. I will then discuss many properties of the associated character, and show how the k-variate version (in fact one can assume that k goes to infinity) sheds new light and simplifies many aspect of the problems that have been considered in the last 25 years in relation to spaces of diagonal harmonic polynomials. I will also show how some of the properties alluded to are entirely natural in view of the natural ties that the subject seems to have with the study of (m,n)-links on the torus. I will also explain how to directly relate this to the Delta-conjecture, opening a clear path to its generalization to the rectangular context.
10:00 - 10:30 Coffee Break (TCPL Foyer)
10:30 - 11:30 Adriano Garsia: Some Conjectures with Surprising Consequences ↓
In the $1980$ paper ``Une famille de Polynomes ayant Plusieurs Propriétés Enumeratives", Kreweras gives a bijection that shows that the polynomials $P_n(t)$ that enumerate $n$-labelled rooted trees by number of inversions also $t$-enumerates $n$-Parking Functions by the area statistic. In the $1993$ paper ``A Remarkable $q,t$-Catalan sequence and Lagrange Inversion" with Haiman we relate the Frobenius Characteristic of Diagonal Harmonics to Parking Functions. A recent search in the Encyclopedia of integer sequences connects these two papers in a surprising manner leading to a variety of beautiful conjectures. In this talk the focus will be on what we have proved.
11:30 - 13:00 Lunch ↓
Lunch is served daily between 11:30am and 1:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building.
13:00 - 14:00 Guided Tour of The Banff Centre ↓
Meet in the Corbett Hall Lounge for a guided tour of The Banff Centre campus.
(Corbett Hall Lounge (CH 2110))
14:00 - 15:00 Matthew Hogancamp: How to compute superpolynomials ↓
The phrase superpolynomial is often taken to mean ``graded dimension of Khovanov-Rozansky link homology''. In this talk I will discuss a combinatorial technique for computing Khovanov-Rozansky homology, which in the past couple years has led to the computation of superpolynomials for torus links (and also new recursions for the rational q,t-Catalan), through various works of myself, Ben Elias, and Anton Mellit. My goal will be to communicate all the main ideas, and outline how the torus link computation is carried out.
15:45 - 16:05 Group Photo ↓
Meet in foyer of TCPL to participate in the BIRS group photo. The photograph will be taken outdoors, so dress appropriately for the weather. Please don't be late, or you might not be in the official group photo!
07:00 - 09:00 Breakfast (Vistas Dining Room)
09:00 - 10:00 François Bergeron: Multivariate modules for (m,n)-rectangular combinatorics II ↓
10:30 - 11:30 Lauren Williams: From multiline queues to Macdonald polynomials via the exclusion process ↓
Recently James Martin introduced multiline queues, and used them to give a combinatorial formula for the stationary distribution of the multispecies asymmetric simple exclusion exclusion process (ASEP) on a circle. The ASEP is a model of particles hopping on a one-dimensional lattice, which was introduced around 1970, and has been extensively studied in statistical mechanics, probability, and combinatorics. In this article we give an independent proof of Martin's result, and we show that by introducing additional statistics on multiline queues, we can use them to give a new combinatorial formula for both the symmetric Macdonald polynomials P_{lambda}(x; q, t), and the nonsymmetric Macdonald polynomials E_{lambda}(x; q, t), where lambda is a partition. This formula is rather different from others that have appeared in the literature, such as the Haglund-Haiman-Loehr formula. Our proof uses results of Cantini-de Gier-Wheeler, who recently linked the multispecies ASEP on a circle to Macdonald polynomials. This is joint work with Sylvie Corteel and Olya Mandelshtam.
11:30 - 13:30 Lunch (Vistas Dining Room)
14:00 - 15:00 Sami Assaf: Nonsymmetric Macdonald polynomials and Demazure characters ↓
Nonsymmetric Macdonald polynomials are a polynomial generalization of their symmetric counterparts that exist for all root systems. The combinatorial formula for type A, due to Haglund, Haiman and Loehr, resembles the symmetric formula by the same authors, but with rational functions that complicate the combinatorics. By specializing one parameter to 0, the combinatorics simplifies and we are able to give an explicit formula for the expansion into Demazure characters, a basis for the polynomial ring that contains and generalizes the Schur basis for symmetric polynomials. The formula comes via an explicit Demazure crystal structure on semistandard key tabloids, constructed jointly with Nicolle Gonzalez. By taking stable limits, we return to the symmetric setting and obtain a new formula for the Schur expansion of Hall-Littlewood polynomials that uses a simple major index statistic computed from highest weights of the crystal.
15:30 - 16:30 Brendon Rhoades: Spanning configurations ↓
An ordered tuple of 1-dimensional subspaces $(L_1, \dots, L_n)$ of a fixed vector space $V$ is a {\em spanning line configuration} if $L_1 + \cdots + L_n = V$. We discuss the combinatorics of spanning line configurations, describing enumerative results when $V$ is a vector space over the finite field $\mathbb{F}_q$, and presenting the cohomology ring of the moduli space of spanning line configurations when $V$ is a vector space over $\mathbb{C}$. We present some ideas about how to extend these results to tuples $(W_1, \dots, W_n)$ of potentially higher-dimensional subspaces $W_i$ of $V$. Joint with Brendan Pawlowski and Andy Wilson.
17:30 - 19:30 Dinner (Vistas Dining Room)
09:00 - 10:00 Gabriel Frieden: Kostka--Foulkes polynomials at $q = -1$ ↓
The Kostka--Foulkes polynomials, $K_{\lambda, \mu}(q)$, arise in many contexts in combinatorics and representation theory. Lascoux and Schutzenberger showed that they are generating functions over $\text{SSYT}(\lambda, \mu)$ (the set of semistandard Young tableaux of shape $\lambda$ and content $\mu$) with respect to a statistic called charge. In particular, they evaluate to the familiar Kostka number at $q = 1$. One might hope that the evaluation at $q = -1$ counts the number of fixed points of a natural involution on $\text{SSYT}(\lambda, \mu)$. When the content $\mu$ is palindromic (for instance, in the case of standard tableau) it follows from work of Stembridge and Lascoux--Leclerc--Thibon that $K_{\lambda, \mu}(-1)$ is equal, up to sign, to the number of elements of $\text{SSYT}(\lambda, \mu)$ that are fixed by evacuation (the Schutzenberger involution). This restriction on $\mu$ is necessary because evacuation is content-reversing. In recent joint work with Mike Chmutov, Dongkwan Kim, Joel Lewis, and Elena Yudovina, we showed that in general, $K_{\lambda, \mu}(-1)$ counts, up to sign, the number of fixed points of the involution obtained by composing evacuation with the action of the long element $w_0$ by the Lascoux--Schutzenberger (or crystal) symmetric group action on tableaux. When the content is palindromic, the action of $w_0$ is trivial, so our result reduces to the above-mentioned one. The proof relies on the theory of rigged configurations.
10:30 - 11:30 Ying Anna Pun: Catalan Functions and $k$-Schur functions ↓
Li-Chung Chen and Mark Haiman studied a family of symmetric functions called Catalan (symmetric) functions which are indexed by pairs consisting of a partition contained in the staircase $(n-1, ..., 1,0)$ (of which there are Catalan many) and a composition weight of length $n$. They include the Schur functions ,the Hall-Littlewood polynomials and their parabolic generalizations. They can be defined by a Demazure-operator formula, and are equal to GL-equivariant Euler characteristics of vector bundles on the flag variety by the Borel-Weil-Bott theorem. We have discovered various properties of Catalan functions, providing a new insight on the existing theorems and conjectures inspired by Macdonald positivity conjecture. A key discovery in our work is an elegant set of ideals of roots that the associated Catalan functions are $k$-Schur functions and proved that graded $k$-Schur functions are G-equivariant Euler characteristics of vector bundles on the flag variety, settling a conjecture of Chen-Haiman. We exposed a new shift invariance property of the graded $k$-Schur functions and resolved the Schur positivity and $k$-branching conjectures by providing direct combinatorial formulas using strong marked tableaux. We conjectured that Catalan functions with a partition weight are $k$-Schur positive which strengthens the Schur positivity of Catalan function conjecture by Chen-Haiman and resolved the conjecture with positive combinatorial formulas in cases which capture and refine a variety of problems. This is joint work with Jonah Blasiak, Jennifer Morse and Daniel Summers.
13:30 - 17:30 Free Afternoon (Banff National Park)
09:00 - 10:00 Luc Lapointe: $m$-symmetric Macdonald polynomials ↓
We study non-symmetric Macdonald polynomials whose variables $x_{m+1},x_{m+2},...$ are symmetrized (using the Hecke symmetrization), which we call $m$-symmetric Macdonald polynomials (the case $m=0$ corresponds to the usual Macdonald polynomials). In the space of $m$-symmetric polynomials, we define $m$-symmetric Schur functions (now depending on the parameter $t$) by certain triangularity conditions. We conjecture that the $m$-symmetric Macdonald polynomials are positive (after a plethystic substitution) when expanded in the basis of $m$-symmetric Schur functions and that the corresponding $m-(q,t)$-Kostka coefficients embed naturally into the $m+1-(q,t$)-Kostka coefficients. When $m=1$, an analog of the nabla operator can be defined, which provides a refinement of the bigraded Frobenius series of the space of diagonal harmonics. When $m$ is larger, how to define such a nabla operator is still an open problem.
10:30 - 11:30 Hugh Morton: A skein-theoretic model for the double affine Hecke algebras ↓
We consider oriented braids in the thickened torus $T^2 \times I$, together with a single fixed base string. The based skein $H_n(T^2,*)$ is defined to be $\mathbb Z [s^{\pm 1}, q^{\pm 1}]$-linear combinations of $n$-braids subject to the Homflypt skein relation $X_{+}- X_{-} = (s-s^{-1})X_0$. In addition a braid string is allowed to cross through the base string at the expense of multiplying by the parameter $q$. Composition of braids induces an algebra structure on $H_n(T^2,*)$. We show that this algebra satisfies the relations of the double affine Hecke algebra ${\tilde H}_n$, as defined by Cherednik. We discuss how to include closed curves in the thickened torus in the model in an attempt to incorporate earlier work with Peter Samuelson on the Homflypt skein of $T^2$ into the setting of the algebras ${\tilde H}_n$, with an eye on the elliptic Hall algebra and the work of Schiffman and Vasserot.
14:00 - 15:00 Problem Session 1 ↓
The problem session was split into two pieces. The first part the following participants submitted a problem during this session:
François Bergeron, Lauren Williams, Hugh Morton, Brendan Pawlowski, Peter Samuelson.
A video of this session is available here: https://www.birs.ca/workshops/2019/19w5131/files/19w5131-PS1-20190124-1406-1505.mp4
Each person was asked to provide a written summary and potentially provide references. The written version that accompanies this video is at: https://www.birs.ca/workshops/2019/19w5131/files/open_problem_session_summary.pdf
The problem session was split into two pieces. The first part the following participants submitted/summarized a problem: Marino Romero, Mikhail Mazin, Gabriel Frieden, François Bergeron, Mike Zabrocki.
09:00 - 10:00 Open discussion/Collaboration (TCPL 201)
11:30 - 12:00 Checkout by Noon ↓
5-day workshop participants are welcome to use BIRS facilities (BIRS Coffee Lounge, TCPL and Reading Room) until 3 pm on Friday, although participants are still required to checkout of the guest rooms by 12 noon.
(Front Desk - Professional Development Centre)
12:00 - 13:30 Lunch from 11:30 to 13:30 (Vistas Dining Room) | CommonCrawl |
Inequality ${n \choose k} \leq \left(\frac{en}{ k}\right)^k$
This is from page 3 of http://www.math.ucsd.edu/~phorn/math261/9_26_notes.pdf (Wayback Machine).
Copying the relevant segment:
Stirling's approximation tells us $\sqrt{2\pi n} (n/e)^n \leq n! \leq e^{1/12n} \sqrt{2\pi n} (n/e)^n$. In particular we can use this to say that $$ {n \choose k} \leq \left(\frac{en}{ k}\right)^k$$
I tried the tactic of combining bounds from $n!$, $k!$ and $(n-k)!$ and it didn't work. How does this bound follow from stirling's approximation?
inequality binomial-coefficients
Martin Sleziak
JasonMondJasonMond
$\begingroup$ A related question: math.stackexchange.com/q/132519/7266 $\endgroup$ – Fabian Apr 16 '12 at 18:51
First of all, note that $n!/(n-k)! \le n^k$. Use Stirling only for $k!$.
${n \choose k} \le \frac{n^k}{k!} \le \frac{n^k}{(\sqrt{2\pi k}(k/e)^k)} \le \frac{n^k}{(k/e)^k} = (\frac{en}{k})^k$
WonderWonder
$\begingroup$ Everything is right except that your inequalities are all pointing backwards. Other than that, good answer! $\endgroup$ – David E Speyer Apr 16 '12 at 18:54
$\begingroup$ thanks, just noticed that. $\endgroup$ – Wonder Apr 16 '12 at 18:54
$\begingroup$ This might be useless, but the inequality you're using (namely $k! \ge k^k e^{-k}$) has a very elementary proof (without need for the full Stirling) : $$e^k = \sum_{i = 0}^{\infty} \frac{k^i}{i!} \ge \frac{k^k}{k!}$$ $\endgroup$ – Joel Cohen Apr 16 '12 at 21:11
$\begingroup$ Great, that is very nice. Thanks for pointing it out. $\endgroup$ – Wonder Apr 17 '12 at 2:40
$$\begin{align*} \binom{n}k&=\frac{n!}{k!(n-k)!}\\ &\le\frac{e^{1/12n} \sqrt{2\pi n} (n/e)^n}{\sqrt{2\pi k}(k/e)^k\sqrt{2\pi(n-k)}((n-k)/e)^{n-k}}\\ &=\frac{e^{1/12n}\sqrt{n}}{\sqrt{2\pi k(n-k)}}\left(\frac{n/e}{k/e}\right)^k\left(\frac{n/e}{(n-k)/e}\right)^{n-k}\\ &\le\frac{e^{1/12n}\sqrt{n}}{\sqrt{2\pi k(n-k)}}\left(\frac{n}{k/e}\right)^k\\ &\le\frac{e^{1/12n}\sqrt{n}}{\sqrt{2\pi(n-1)}}\left(\frac{en}k\right)^k\\ &\le\left(\frac{en}k\right)^k \end{align*}$$
Brian M. ScottBrian M. Scott
$\begingroup$ Isn't $({n/e \over (n-k)/e})^{n-k} \gt 1$ ? $\endgroup$ – adamG Feb 23 '13 at 12:15
$\begingroup$ @adamG: It's $\left(1+\frac{k}{n-k}\right)^{n-k}\le e^k$. $\endgroup$ – Brian M. Scott Feb 23 '13 at 12:22
$\begingroup$ Thanks Brian for the clarification! $\endgroup$ – adamG Feb 23 '13 at 13:13
$\begingroup$ @adamG: My pleasure! (Over the years I've been hung up often enough over such things.) $\endgroup$ – Brian M. Scott Feb 23 '13 at 13:16
I found a different proof of this fact avoiding Stirling.
Note $f(x) = (\frac{ex}{k})^k$ is a $C^2$ strictly convex function.
So $f(x) + f'(x)h < f(x+h)$ for $0 < h$
In particular, letting $h = 1$ we get
$$f'(x-1)+ f(x-1) < f(x)$$ $$(\frac{e(x-1)}{k})^{k-1}e + (\frac{e(x-1)}{k})^k< (\frac{ex}{k})^k$$ Noting that $(\frac{k}{k-1})^{k-1} < e$ since the ratio limits to $e$ from below. Substituting in the LHS for the second $e$, we get $$(\frac{e(x-1)}{k})^{k-1}(\frac{k}{k-1})^{k-1} + (\frac{e(x-1)}{k})^k< (\frac{ex}{k})^k$$ $$(\frac{e(x-1)}{k-1})^{k-1} + (\frac{e(x-1)}{k})^k< (\frac{ex}{k})^k$$
Now the result follows by induction on $n+k$ for $k\lt n$, and Pascal's formula.
$\binom {n-1}{k} + \binom {n-1}{k-1} = \binom {n}{k}$
$ \binom {n-1}{k} \le (\frac{e(n-1)}{k})^k$ and $\binom {n-1}{k-1} \le (\frac{e(n-1)}{k-1})^{k-1}$ imply $\binom{n}{k} \le (\frac{en}{k})^k$
The base case $\binom{n}{n}$ and $\binom{n}{0}$ and $\binom{n}{1}$ are trivial so we can avoid the technicality where $k=1$ and $k=0$.
edited May 17 '14 at 8:06
MarkMark
Not the answer you're looking for? Browse other questions tagged inequality binomial-coefficients or ask your own question.
Inequality involving factorial $\binom nk<(en/k)^k$
A combinatorial inequality
Simplest proof that $\binom{n}{k} \leq \left(\frac{en}{k}\right)^k$
Prove the estimate $\binom{n}{k} \le (\frac{en}{k})^k$
Inequality involving binomial coefficients and $e$
Proving upper bound for $\binom{n}{k}$ directly from given fact
Regarding bound for n choose k
Derivation of bound on expression involving binomial coefficient from Erdős and Rényi 1959
Using Stirling's formula to uniformly bound Bernoulli success probabilities
Best upper and lower bound for a binomial coefficient
How was the following lower bound on ${n \choose pn}$ derived?
Finding Tight bound for Binomial Coefficient inequality
$1435\binom{20000}{10000}*0.515^{10000}*0.485^{10000} \leq 0.01$ | CommonCrawl |
How does a capacitor smooth energy?
I'm trying to wrap my mind around how capacitors work. I understand they store a charge and generally understand how but I don't understand how using them "smoothes" the flow of the charge. Doesn't, say a motor, drawing power from a charged capacitor do the same thing when drawing power from a power source? What does is mean that the charge is smoothed and how??
Michael RaderMichael Rader
\$\begingroup\$ The power-source-to-capacitor wire has a lot more inductance (because it's longer) than the capacitor-to-load wire. \$\endgroup\$ – immibis Jan 12 '15 at 7:05
Capacitors don't store charge. That's such a worthless statement because it's based on this word "charge" that has multiple meanings. Please forget you ever heard it. They also do not smooth energy. What they smooth is voltage.
I will answer you question, but first you must really understand how capacitors work.
What capacitors store is energy. The stuff that flows around in electric circuits is electric charge. We measure rate of flow of charge in amperes. Quantity of charge is measured in coulombs. Because charge is never created nor destroyed, whenever we are measuring charge we are usually counting charge that flows past a metaphorical gate. Except for some very odd circuits, the total charge in an electronic device is also constant. It is very much like a closed hydraulic system: there's some fluid in it and you can move it around, but none ever enters or leaks out. You can count how much fluid flows past some point, but it must come from somewhere, and it must go somewhere else.
Imagine if you had a spherical vessel, filled with a fluid. Down the center of the vessel is a rubber plate that you can stretch by pushing fluid in one side and pumping it out the other. That's what a capacitor is like:
This is from Bill Beaty's excellent capacitor misconceptions.
When you push water in one side, an equal amount of water must come out the other side. Further, once this rubber membrane is stretched, it wants to return to being straight. Thus, the water pressure on one side will be higher than the other. If you were to remove the stoppers and replace them with a hose, water would flow until the rubber were not stretched.
Now replace "water" with "electric charge", and "pressure" with "voltage", and you have a capacitor.
Now imagine two vessels, one the size of a golf ball, and one the size of a swimming pool. Each has a membrane of identical stretchiness in the middle. If you pump a tablespoon of water through the golf ball sized vessel, the membrane will be stretched a lot, and consequently the pressure difference between the sides will be great. If you do the same to the swimming pool sized vessel, the membrane will barely move at all, and the pressure difference will just be slightly more than nothing.
This is what capacitance is. It tells you, for a given quantity of water moved, what the pressure difference is. It tells you, for a given amount of electric charge moved through the capacitor, what the voltage will be. It is defined as:
$$ C = {q \over V} $$
\$C\$ is capacitance, measured in farads,
\$q\$ is charged moved through the capacitor, measured in coulombs, and
\$V\$ is voltage, measure in (you guessed it) volts.
Don't get hung up on "coulomb". A coulomb is how much charge moves past a point if 1 ampere is flowing for 1 second. Or, 2 amperes for half a second. Or, 1/2 ampere for 2 seconds.
If you took calculus, then you will recognize that charge is the integral of current. In other words, charge is to current as distance is to velocity. You can replace "ampere" with "coulomb per second" -- the units are exactly the same.
Using that knowledge and a bit of basic calculus, capacitance can also be defined in terms of voltage and current:
$$ {\mathrm d V(t) \over \mathrm d t} = {I(t) \over C} $$
What this says is: the rate of change of voltage over time (volts per second) is equal to the current (amperes or coulombs per second) divided by the capacitance (farads).
If you have a 1 farad capacitor, and you are moving 1 ampere (1 coulomb per second) through it, then voltage across the capacitor will change at the rate of 1 volt per second.
If you double that capacitance, then the rate of change of voltage will be half.
And here, I think, is the answer to your question. Frequently capacitors are put across the power supply to hold the voltage steady. This works because the more capacitance you have, the harder it is to change the voltage, because it requires more current to do so.
In this application, capacitors don't smooth energy, they smooth voltage. They do so by providing a storage of energy from which the load can draw during times of transient high current. This makes the power supply's job easier because it doesn't have to deal with high changes in current. In effect, the capacitor helps to average the current demand of the load as seen by the power supply.
Phil FrostPhil Frost
\$\begingroup\$ If my capacitor has a higher voltage rating, is it still ok to use it to smooth a lower voltage circuit? \$\endgroup\$ – timlyo Jan 8 '18 at 19:08
\$\begingroup\$ @timlyo That would probably be better asked as a new question. \$\endgroup\$ – Phil Frost Jan 9 '18 at 0:52
Smoothing capacitors are used to suppress voltage ripples, usually on power supply lines. They do this by periodically storing and replenishing energy. The image below shows a very common use case of these capacitors in a full bridge rectifier.
As you can see, the smoothing capacitor discharges and replenishes energy when the output voltage drops. This "evens out" the output voltage, which is why this capacitor is called a "smoothing" capacitor.
FullmetalEngineerFullmetalEngineer
Capacitors are there to provide the illusion to your load that they are connected to an ideal voltage source.
For example, your power source has some internal resistance and there may be a significant inductance due to long leads.
Adding the capacitor allows the load to see an approximate Vs as the switch is opened/closed. Otherwise, there will be a variable supply voltage as the load is opened/closed.
helloworld922helloworld922
\$\begingroup\$ This is only true for AC loads, or put another way, to the extent that the duration the switch remains in a particular state is significantly shorter than the time constant of C and the load. \$\endgroup\$ – Phil Frost Jan 9 '18 at 0:50
Yes, it works basically the same way. However, a capacitor typically has a lower capacity than, say, a battery. When you connect a load to a capacitor, its charge and voltage will decrease over time. That's why it's called smooth. A battery does that in the exact same way but much, much slower, because of the higher capacity.
Also there's smooth in the sense of smoothing a voltage signal. If we charge and discharge a capacitor at the same time with some variable voltage signal, you will understand that the capacitor charges on rising edges. On the falling edges, the capacitor 'helps' the other power supply, which makes the falling edge smoother. Eventually this can lead to an almost constant voltage.
KeelanKeelan
\$\begingroup\$ The other two answers provide images for my second paragraph. Unfortunately, I'm on my phone now, so adding pictures is a little complicated. \$\endgroup\$ – Keelan Jan 11 '15 at 0:45
Imagine a capacitor as a glass of water with a hole.So no matter how fast you fill the glass the output through the hole is roughly same. That's exactly how a capacitor works,it first charges up,then it provides an output which filters out the noise and provides a clean output,irrespective of how the input fluctuates.
Saptarshi GhoshSaptarshi Ghosh
Not the answer you're looking for? Browse other questions tagged capacitor or ask your own question.
How does a capacitor work as a filter in rectifier circuits (with equations)?
How do I know the maximum voltage that a capacitor releases?
Voltage a 60uF Capacitor will Hold
How can a capacitor store charge whilst also passing current?
How does a decoupling capacitor handle a spike (increase) in the voltage from the power supply?
Distribution of electrons in a capacitor charge circuit?
how to charge a non-polar capacitor?
A few questions about basic capacitor concepts
What is meant by "capacitors try to try to maintain voltage at a constant level"
How does a capacitor resist changes in voltage?
How Does DC Current Charge a Capacitor? | CommonCrawl |
History of the terms "prime" and "irreducible" in Ring Theory.
In ring theory, a nonzero, nonunit element $p$ of a integral domain is called irreducible if $p=ab$ implies that exactly one of $a$ and $b$ is a unit, and it's called prime if $p\mid ab$ implies that $p\mid a$ or $p\mid b$, or equivalently if the principal ideal generated by $p$ is a prime ideal.
I assume that the notion of a prime number (integer) existed before ring theory was developed. The usual definition of a prime number is a positive integer with exactly two positive factors , $1$ and itself. This is equivalent to the definition of "irreducible." It also turns out that, since $\mathbb Z$ is a UFD, every irreducible is prime and every nonzero prime is irreducible. But since the usual definition of a prime integer is what we call "irreducible," why weren't irreducible elements called prime elements? Why was it decided the divisibility property was a more intrinsic property of "primeness" than irreducibility?
ring-theory terminology math-history
NishantNishant
$\begingroup$ The books mentioned in my answer at math.stackexchange.com/questions/362/… probably contain some material on this but I can't check right now. $\endgroup$ – lhf May 25 '14 at 2:10
$\begingroup$ The definition of prime was changed by Dedekind who considered unique factorization into primes to be more important, see What changes in mathematics resulted in the change of the definition of primes and exclusion of 1? $\endgroup$ – Conifold Dec 23 '20 at 8:51
I'm not an expert in the history of ring theory but this is, I think, pretty close to a correct answer:
You are right that the notion of "prime integer" predates the more general notions of "prime element" and "irreducible element" in an arbitrary ring. In fact, prime numbers go back to ancient Greece! But there is a missing link in the evolution of that original notion into the (two distinct) modern notions: namely, the notion of a prime ideal.
Ideals were regarded as a kind of "generalized number"; in fact, the original terminology was "ideal number", only later shortened to "ideal". One ideal $I$ was said to divide another ideal $J$ if and only if $J \subset I$. A prime ideal is then defined, in precise analogy with the "classical" definition of prime numbers (i.e. as indecomposables) to be an ideal that is not divisible by any ideals other than itself and the entire ring.
Once "prime ideal" was defined, the next development was to say that an element was prime if it generated a prime ideal. It is a fairly straightforward exercise to show that this translates directly to the modern definition of prime element. It is also fairly easy to show that (as long as there are no zero-divisors in the ring) every prime element is indecomposable in the classic sense. So everything fits together quite nicely.
It is only at this point that somebody starts looking at rings like $\mathbb{Z}[\sqrt{-5}]$, which are not unique factorization domains, and realizes that those rings can contain elements that are indecomposable in the classic sense, but do not generate prime ideals. Whoah! So we need a name for those types of elements. "Prime" is already taken, so they get called "irreducible".
So there you have it. The elements that we now call "irreducible elements", despite the fact that they have the property that we usually associate with "prime numbers", were not called "prime elements" because that word was already in use for elements that generate "prime ideals", which are defined in direct analogy with how we "usually" define prime numbers.
mweissmweiss
$\begingroup$ Wait, but that definition of "prime ideal" seems to actually be the definition for "maximal ideal"? $\endgroup$ – Nishant May 29 '14 at 16:36
$\begingroup$ Yes, you are correct, and I think there is more to the story than my summary includes. I believe that Dedekind's original definition of "prime ideal" (which is what I gave above) corresponds to what we today call "maximal ideal". The ideas bifurcated later. I think. :) $\endgroup$ – mweiss May 29 '14 at 16:47
$\begingroup$ @Nishant This is a neat reconstruction, but Dedekind defined ideals only after changing the definition of prime in ring theory to what it is today, i.e. $p\mid ab$ implies that $p\mid a$ or $p\mid b$. His stated reason was that this is the "characteristic property" that produces unique factorization. The numbers given by the traditional definition he renamed into "indecomposables", now irreducibles, see What changes in mathematics resulted in the change of the definition of primes and exclusion of 1? Prime ideals weren't the missing link. $\endgroup$ – Conifold Dec 23 '20 at 8:43
Not the answer you're looking for? Browse other questions tagged ring-theory terminology math-history or ask your own question.
History of the Concept of a Ring
Number systems violating easy primes
Constructing irreducible polynomials over the Polynomial Ring
Ring theory : Completely lost and overwhelmed
Prime elements in a noncommutative ring
About the ways prove that a ring is a UFD.
Is there a name for those elements $x$ of a commutative ring $R$ such that $Rx$ is maximal among all proper ideals?
A question about ring theory | CommonCrawl |
Checking if a symbolic symmetrical Matrix is negative definite
I have the following problem finding the value ranges for the parameters of a symbolic symmetrical matrix in order to make it negative definite:
The matrix I'm talking about looks as follows
A := {{-1, -b, a, 0}, {-b, -1, 0, 0}, {a, 0, -a, -b a}, {0, 0, -b a, -a}}
and as you can see in matrix form, it is a symmetrical matrix
\begin{array}{cccc} -1 & -b & a & 0 \\ -b & -1 & 0 & 0 \\ a & 0 & -a & -a b \\ 0 & 0 & -a b & -a \\ \end{array}
Now I'm trying to find the value ranges of a and b in order to make the matrix negative definite. It is important that b depends on a and not the other way round, since the matrix is part of an economic model, which doesn't make any sense otherwise.
First I used the approach to find the value ranges, which make all Eigenvalues negative and thus lead to a negative definite matrix
Reduce[Eigenvalues[A] < 0, {a, b}]
which yields
0 < a < 1 && -Sqrt[1 - Sqrt[a]] < b < Sqrt[1 - Sqrt[a]]
Everything fine so far. But then I tried a different approach. If the k-th order leading principal minor of the matrix has sign (-1)^k, then the matrix should be negative definite, so I'm expecting the same result:
A1 := {{-1}}
A2 := {{-1, -b}, {-b, -1}}
A3 := {{-1, -b, a}, {-b, -1, 0}, {a, 0, -a}}
Reduce[{Det[A1] < 0, Det[A2] > 0, Det[A3] < 0, Det[A] > 0}, {a, b}]
0 < a < 1 && Root[1 - a - 2 #1^2 + #1^4 &, 2] < b < Root[1 - a - 2 #1^2 + #1^4 &, 3]
which is in radicals
ToRadicals[
0 < a < 1 && Root[1 - a - 2 #1^2 + #1^4 &, 2] < b < Root[1 - a - 2 #1^2 + #1^4 &, 3]]
0 < a < 1 && -Sqrt[1 - Sqrt[a]] < b < Sqrt[1 + Sqrt[a]]
As you can see, the result is different than in the first approach (to be more specific the upper bound of b is different), which makes no sense, since both approaches should yield the same result.
Does anyone know what I am doing wrong or which of the results is correct?
matrix linear-algebra eigenvalues
PTSammyPTSammy
$\begingroup$ The latter answer is incorrect: A /. {a -> 1/2, b -> Sqrt[1 + Sqrt[1/2]]} // Eigenvalues // N has a positive eigenvalue. Maybe you set up the deterimnants incorrectly since it looks like a sign error. $\endgroup$ – bill s Aug 11 '17 at 18:51
$\begingroup$ See @Szabolcs' answer to a question about reordering when using ToRadicals on Root objects. $\endgroup$ – Carl Woll Aug 11 '17 at 19:27
$\begingroup$ Thanks @CarlWoll, I think the ordering of the Root objects is on the right track. But I still haven't found a solution for my specific case $\endgroup$ – PTSammy Aug 13 '17 at 12:56
Both of your approaches yield the same answer. it is the application of ToRadicals that causes the answer to be different. First, compare the two limits before the application of ToRadicals:
upperLimit1 = Sqrt[1 - Sqrt[a]];
upperLimit2 = Root[1 - a - 2 #1^2 + #1^4 &, 3];
Plot[upperLimit1, {a, 0, 1}]
upperLimit1 and upperLimit2 are the same over the region 0 < a < 1. Converting the Root object into radicals is problematic because the Root ordering depends on the parameter a. One suggestion would be to not use ToRadicals and just work with the Root objects. If you really want radicals, a naive application of ToRadicals:
ToRadicals[upperLimit2]
Sqrt[1 + Sqrt[a]]
is only correct for some values of the parameter a. However, in your case, you know something about the parameter a, so you should make use of that by giving ToRadicals an assumption:
ToRadicals[upperLimit2, Assumptions -> 0 < a < 1]
Sqrt[1 - Sqrt[a]]
Note that using:
ToRadicals[0 < a < 1 && upperLimit2]
0 < a < 1 && Sqrt[1 + Sqrt[a]]
does not cause ToRadicals to use 0 < a < 1 as an assumption. The assumption needs to be given explicitly as an option to ToRadicals.
Carl WollCarl Woll
In this answer I will briefly go through the method of using the leading principal minors to derive negative definiteness. Hope this helps you verify your results
A matrix is negative definite when ALL its leading principal minors alternate in sign, with the k-th order leading principal minor having the same sign as $(-1)^k$.
Please note, I am NOT talking about a problem instance with constraints ie I am not talking about bordered matrices. In that case, the rules are a bit different, but this is not the case here.
Also, note, that the order of a minor is derived from the number of columns and rows one has to delete in order to obtain the corresponding submatrix.
Counting is straightforward: For an $nxn$ matrix the k-th order leading principal minor is produced by deleting the last $n-k$ rows and columns of the original matrix. In this fashion, the 1-st order leading principal minor of a $4x4$ matrix is produced after deleting the last $4-1=3$ rows and columns while the 2-nd order leading principal minor of the same matrix needs to have the last $4-2=2$ rows and columns deleted etc.
Finally, note that taking the n-th order leading principal minor of an $nxn$ matrix means deleting $n-n=0$ rows and columns ie the n-th order minor is the determinant of the matrix itself.
Mathematica has (arguably) a lot of ways to produce the leading principal submatrices that are needed to produce the minors (their determinants). One simple and fast way to do it, is:
LeadingPrincipalMinors=Array[Minors[A, #][[1, 1]] &, 4]
Now, checking to verify if the signs of the minors are in agreement with the sign rule we can do the following:
Reduce[
And @@ MapIndexed[(
Reduce[#1 (-1)^First[#2] > 0, b[a], Reals]
) &, LeadingPrincipalMinors], b[a], Reals]
The 'trick' I use in this piece of code is to take advantage of the fact that, when two numbers have the same sign, their product is positive. This is what the following excerpt of code from above, does:
#1 (-1)^First[#2] >= 0
Please, note that I have used b[a] instead of b in order to make explicit the dependence of b on a. If you chose to do so, you have to replace the definition of matrix A with something like
A=A/.b->b[a]
but that is not necessary. I just find it useful to have dependence relationships be defined as explicitly as possible in my code.
Therefore the range of values over which the initial matrix is negative definite depends on a and b[a] being in an appropriate range of values.
Hope that helps :)
-- update --
) &, LeadingPrincipalMinors], b[a], Reals]//ToRadicals
Reduce[And @@ Thread[Eigenvalues[A] < 0], b[a], Reals]
yields the same output, namely:
0 < a < 1 && -Sqrt[1 - Sqrt[a]] < b[a] < Sqrt[1 - Sqrt[a]]
$\begingroup$ Thanks for your answer! There is a small mistake in your code #1 (-1)^First[#2] >= 0 needs to be #1 (-1)^First[#2] > 0, otherwise the leading principal minors can also be 0, which does not imply negative definiteness. Changing the code this way leads to the same result I had earlier in my second approach and does not solve my problem $\endgroup$ – PTSammy Aug 13 '17 at 13:00
$\begingroup$ @PTSammy you are right; I wrote in the body of the text "[w]hen two numbers have the same sign, their product is positive" but miss-typed ">=" instead of the correct ">" i the code segment. Will correct it. $\endgroup$ – user42582 Aug 13 '17 at 15:15
$\begingroup$ @PTSammy As far as the last part of your comment is concerned-just for the sake of argument-technically speaking, my answer, answers the second part of your question "[D]oes anyone know what I am doing wrong or which of the results is correct?" as I already stated it would do, in the beginning of the text. Having said that, it is true that it does not identify what your possible error might be-which is something which I didn't state it would do. $\endgroup$ – user42582 Aug 13 '17 at 15:22
$\begingroup$ You're right, you didn't state that. No worries, I didn't mean to offend you ;) $\endgroup$ – PTSammy Aug 13 '17 at 18:03
$\begingroup$ @PTSammy all's well; hope you figure it out; have a nice time :) $\endgroup$ – user42582 Aug 13 '17 at 18:21
Not the answer you're looking for? Browse other questions tagged matrix linear-algebra eigenvalues or ask your own question.
Root and ToRadical[Root] do not preserve order of roots
Obtaining the square-root of a general positive definite matrix
Manipulate doesn't work for plotting a region where a matrix is positive semi-definite
Eigenvectors of numerical matrix
Small positive eigenvalues found for a negative definite matrix
Powers of an Orthogonal Matrix
Why doesn't the matrix rank decrease in this case?
Proving the positive semidefiniteness of a 6X6 symbolic matrix
Analytic calculation of the Pfaffian using sqrt | CommonCrawl |
mm-Wave channel estimation with accelerated gradient descent algorithms
Hossein Soleimani1,
Danilo De Donno2 &
Stefano Tomasin ORCID: orcid.org/0000-0003-3253-67931,3
EURASIP Journal on Wireless Communications and Networking volume 2018, Article number: 272 (2018) Cite this article
The availability of millimeter wave (mm-Wave) band in conjunction with massive multiple-input-multiple-output (MIMO) technology is expected to boost the data rates of the fifth-generation (5G) cellular systems. However, in order to achieve high spectral efficiencies, an accurate channel estimate is required, which is a challenging task in massive MIMO. By exploiting the small number of paths that characterize the mm-Wave channel, the estimation problem can be solved by compressed-sensing (CS) techniques. In this paper, we propose a novel CS channel estimation method based on the accelerated gradient descent with adaptive restart (AGDAR) algorithm exploiting a ℓ1-norm approximation of the sparsity constraint. Moreover, a modified re-weighted compressed-sensing (RCS) technique is considered that iterates AGDAR using a weighted version of the ℓ1-norm term, where weights are adapted at each iteration. We also discuss the impact of cell sectorization and tracking on the channel estimation algorithm. We compare the proposed solutions with existing channel estimations with an extensive simulation campaign on downlink third-generation partnership project (3GPP) channel models.
Due to its huge spectrum availability, the millimeter wave (mm-Wave) band is currently considered for the fifth generation (5G) of cellular networks [1–3]. The high attenuation incurred at those frequencies imposes the use of multiple antennas at each device, typically resulting in massive multiple-input-multiple-output (MIMO) systems, giving rise to various challenges. We focus here on channel estimation that is needed for proper transmit beamforming. In fact, the least square (LS) estimate using short training sequences and limited transmit power (to reduce overhead in massive MIMO systems) is not accurate enough for capacity achieving beamforming. However, the mm-Wave MIMO channel comprises a small number of dominant clusters of paths and even with many antennas a small set of parameters characterizes the entire channel. This induces a sparsity of the mm-Wave channel matrix when transformed by a Fourier transform into the so-called virtual channel, and compressed-sensing (CS) techniques can be used for channel estimation.
Various solutions have been proposed for channel estimation in mm-Wave communication systems, and the reader may refer to [4] for their survey. Part of the literature has considered transceivers with hybrid beamformers (cascade of beamformers before and after the digital to analog converters): the joint optimization of both training and estimation has been pursued in [5] for these structures using a feedback channel. Orthogonal matching pursuit (OMP) solutions have been considered with both single path [6, 7] and multiple-path cancelation [8, 9]. In [10], an enhanced approach for generating the beamforming codebook has been proposed, using the continuous basis pursuit (CBP) method, while [11] considers a fast iterative shrinkage-thresholding algorithm (FISTA) approach.
For fully digital beamformers, [12] considers the sparse channel estimation as a least absolute shrinkage and selection operator (LASSO) problem. In [13], OMP is used to estimate the channel by iteratively detecting and canceling paths from the virtual channel estimate. In [14], a basis pursuit denoise (BPDN) approach is suggested where a weighted version of ℓ1-norm term is considered in the LASSO problem and weights are iteratively adapted. A sparsity adaptive matching pursuit (SSAMP) approach is instead used in [15], while in [16] the LASSO problem is solved by applying a generalized approximate message passing (GAMP) algorithm exploiting the Bernoulli-Gaussian distribution of paths in the virtual channel.
In this paper, we propose a novel sparse channel estimation method based on the accelerated gradient descent with adaptive restart (AGDAR) algorithm [17]. Focusing on a scenario where the receiver obtains first the LS estimate of the narrowband mm-Wave MIMO channel, we relax the sparse optimization problem using LASSO, wherein the ℓ0-norm is replaced by the ℓ1-norm. We apply then the AGDAR algorithm [17] to solve the sparse channel estimation problem. In order to further enhance the channel estimation procedure, a re-weighted ℓ1-norm problem is considered leading to the re-weighted compressed-sensing (RCS) algorithm [18], which iterates AGDAR with different weights of the ℓ1-norm term. We also discuss the impact of cell sectorization and channel tracking on the channel estimation algorithm. We compare the proposed solutions with OMP solutions [6, 8, 13]. With respect to the rest of the literature we reduce the complexity (with respect to the random search of A-LASSO in [12]), we swap the objective functions and the constraints with respect to [14], effectively minimizing the mean square error (MSE) and providing details on the implementation of the optimization algorithm. Compared to [15], we use different algorithms (AGDAR and RCS instead of SSAMP) that trade-off between sparsity and noise reduction. Lastly, we consider a single user and a static pilot transmission for the initial estimate, while [16] considers the adaptation of the transmit and receive beamformers to allow channel estimation simultaneously for more users. An extensive simulation campaign on third-generation partnership project (3GPP) channel models [3] for a downlink scenario has been conducted to show the merits of the proposed approach in terms of both estimate MSE and computational complexity.
The rest of the paper is organized as follows. We introduce the system model in Section 2, providing the description of both the mm-Wave channel model and the existing OMP solutions. The sparse channel estimation problem is introduced in Section 3, together with a discussion on sectorization and channel tracking. The proposed AGDAR technique is described in Section 4, together with the refined RCS approach. Numerical results are presented in Section 5 to assess the performance of the considered techniques in a 5G scenario, before conclusions are driven in Section 6.
We consider a massive MIMO narrowband communication system with Nt antennas at the transmitter and Nr antennas at the receiver. This models indifferently either the uplink or the downlink of a cellular communication system. Let \(\mathbf {H} \in \mathbb {C}^{N_{\mathrm {r}} \times N_{\mathrm {t}} }\) be the channel matrix with complex entries. Antennas are organized into either uniform linear arrays (ULAs) [19] or uniform planar arrays (UPAs) [20] at both the transmitter and receiver: ULA antennas are uniformly spaced along the z axis while UPA antennas are uniformly tiled over the yz-planeFootnote 1. For the sake of a clearer explanation in the main body of the paper, we only provide derivations for ULA, while we report in Appendix A the results for an UPA with D2×D3=Nt transmit antennas and D0×D1=Nr receive antennas.
We indicate with L the number of paths for the signal from the transmitter to the receiver, so that the channel matrix entries can written as
$$ [ \boldsymbol{H}]_{i_{1},i_{2}} =\sum\limits_{l=1}^{L} \alpha_{l} e^{2\pi \eta_{1}^{l}i_{1}j}e^{2\pi \eta_{2}^{l}i_{2}j}\,, $$
where i1=0,…,Nr−1, i2=0,…,Nt−1, \(|\eta _{d}^{l}|\leq \frac {\delta }{\lambda }\), d=1,2, λ is the carrier wave length, δ is the antenna spacing, and αl is the l-path amplitude including path loss, shadowing, and fading. Note that parameters \(\eta _{d}^{l}\) are related to the angles of departure and arrival of the l-th path. By assuming δ≤λ/2, we have \(\eta _{d}^{l} \in \left [-\frac 12, \frac 12\right ]\). The statistics of each parameter depend on the considered propagation scenario, and various relevant cases can be found for example in the 3GPP mm-Wave channel model [3] including channel models with clustered sub-paths [3], where L becomes the total number of (sub-)paths from all clusters. Typically, in mm-Wave systems, the number of paths (or sub-paths) L is small [21].
Figure 1 shows an example of receiver with Nr=3 receive antennas and a single path arriving at the antennas with an angle 𝜗 from a distance D: in this case \(\eta _{1}^{1} = \frac {\delta }{\lambda } \cos \vartheta \) and \(\alpha _{1} = \frac {1}{D^{2}} e^{-j2\pi \frac {D}{\lambda }}\).
ULA receiver. Single path received by ULA with Nr=3 antennas at distance D from the source. In this case \(\eta _{1}^{1} = \frac {\delta }{\lambda } \cos \vartheta \) and \(\alpha _{1} = \frac {1}{D^{2}} e^{-j2\pi \frac {D}{\lambda }}\)
LS estimate
The considered channel estimation techniques in this paper are all based on the LS channel estimate, briefly summarized here.
The set of Nt training symbolsFootnote 2 transmitted with the Nt antennas are collected into the Nt×Nt matrix S, assumed here to be unitary. The corresponding received Nt×Nr matrix signal is
$$ \boldsymbol{R} = \boldsymbol{H} \boldsymbol{S} + \boldsymbol{N}\,, $$
where \(\boldsymbol {N} \in \mathbb {C}^{N_{\mathrm {r}} \times N_{\mathrm {t}} }\) is the noise matrix with independent and identically distributed (iid) zero-mean complex Gaussian entries, each with power σ2. The LS channel estimate at the receiver is obtained as [22]
$$ \boldsymbol{H}' = \boldsymbol{R}\boldsymbol{S}^{-1} = \boldsymbol{H}+\boldsymbol{N}', $$
where N′ is a matrix with iid zero-mean complex Gaussian entries having power σ2 (thanks to the unitary property of S). Note that this estimation procedure may yield a significant overhead for the transmission of training sequences only when the number of transmit antennas grows large [23], since the number of transmitted symbols (the columns of S) is Nt. We will further address this problem in Section 3.
OMP methods
We will compare our channel estimation algorithm with two OMP techniques: single peak cancelation (SPC) [6] and joint peak cancelation (JPC) [8]. Both methods use the Fourier transform of the channel matrix, in what is usually denoted as virtual channel or angular domain representation [24, Sec. 7.3.3]. With reference to ULA, let \(I_{n}(x)=\frac {\sin (\pi x)}{n \sin \left ({\pi \frac {x}{n}}\right)}\) be the 1D-periodic sinc function and let
$$ \begin{aligned} \left[\boldsymbol{W}\left(\boldsymbol{\Omega}^{l}\right)\right]_{\boldsymbol{f}} \,=\, &\left[ \frac{N_{\mathrm{r}}}{M_{1}} I_{N_{\mathrm{r}}}\left(\frac{N_{\mathrm{r}}\left(f_{1}+\Omega_{1}^{l}\right)}{M_{1}}\right) e^{j\pi \left(f_{1}+\Omega_{1}^{l}\right)\frac{N_{\mathrm{r}}-1}{M_{1}}}\right] \times \\ &\left[ \frac{N_{\mathrm{t}}}{M_{2}} I_{N_{\mathrm{t}}}\left(\frac{N_{\mathrm{t}}(f_{2}+\Omega_{2}^{l})}{M_{2}}\right) e^{j\pi \left(f_{2}+\Omega_{2}^{l}\right)\frac{N_{\mathrm{t}}-1}{M_{2}}}\right], \end{aligned} $$
be the two-dimensional (2D) sampled periodic sinc function, where M1 and M2 are the number of samples per period in the 2D virtual channel domain, f=(f1,f2), fd=0,…,Md−1, d=1,2, are the indices of the samples, and \(\boldsymbol {\Omega }^{l}=\left (\Omega _{1}^{l}, \Omega _{2}^{l}\right)=\left (M_{1}\eta _{1}^{l},M_{2}\eta _{2}^{l}\right)\).
The virtual channel matrix \(\boldsymbol {V} \in \mathbb C^{M_{1} \times M_{2}}\) is the 2D-discrete Fourier transform (DFT) of H with entries [6]
$$\begin{array}{*{20}l} [\!\boldsymbol{V}]_{\boldsymbol{f}} &= \frac{1}{M_{1}M_{2}} \sum\limits_{i_{1}=0}^{{N_{\mathrm{r}}}-1} \sum\limits_{i_{2}=0}^{{N_{\mathrm{t}}}-1} [\boldsymbol{H} ]_{(i_{1},i_{2})} e^{-\frac{2\pi f_{1}i_{1}j}{M_{1}}} e^{-\frac{2 \pi f_{2}i_{2}j}{M_{2}}} \\&= \sum\limits_{l=1}^{L} \alpha_{l} \left[\boldsymbol{W}(\boldsymbol{\Omega}^{l})\right]_{\boldsymbol{f}}\,, \end{array} $$
where f=(f1,f2) are the indices of the channel sample in the virtual domain.
The SPC method [6] reported in Algorithm 1 (for algo = SPC), iteratively estimates the amplitude αl and the discrete positions Ωl of I paths in the virtual channel and cancels their corresponding periodic sinc functions in the virtual channel. After I iterations, the channel estimate \(\boldsymbol {\widehat {H}}\) is obtained by taking the 2D-inverse discrete Fourier transform (IDFT) of the estimated virtual channel \(\boldsymbol {\widehat {V}}\) reconstructed by summing the contributions of all the detected paths.
The number of iterations (i.e., the number of detected paths) is a trade-off between L (the number of paths) and the noise level. On the one hand, it is advisable to estimate all L paths, and on the other hand, noise can make small-power paths not detectable; therefore, it is better not to estimate all of them by using I<L. In Section 5, we determine by simulations the optimal I that minimizes the MSE of the channel estimate. Note that SPC provides an intrinsically approximated solution even in the absence of noise, since the peak positions Ωl are estimated on a fixed discrete grid.
The JPC algorithm of [8] reported in Algorithm 1 (for algo= JPC) is a modification of SPC that at each iteration jointly estimates the amplitudes of all previously detected peaks by the LS approach and cancels the corresponding periodic sinc's from the virtual channel. In particular, x=vec(X) stacks the columns of matrix X into the column vector x; at iteration l, one peak is detected (line 3) and then the amplitudes of all previously detected peaks are jointly estimated (line 8), and the new virtual channel with removed peaks is obtained (lines 9–11). This is achieved by building matrix w that contains in column l the vector version of \(\boldsymbol {W}\left (\widehat {\boldsymbol {\Omega }}^{l}\right)\) (line 6).
This algorithm has the advantage over the SPC that each amplitude estimate is refined at each iteration thus taking advantage also of the peaks detected in further iterations.
Sparse dual channel estimation
In order to obtain an efficient and simple channel estimator, we exploit the specific channel structure described in the previous section. In particular, we use the fact that the channel is composed of a small number of paths with respect to the typically large number of transmit and receive antennas.
In this paper, we directly refer to the representation (1) and interpret it as 2D-IDFT of a sparse matrix having only L non-zero entries. First, the channel H is rearranged into the channel column vector \( {\boldsymbol {h}}=\text {vec}(\boldsymbol {H}) \in \mathbb {C}^{N_{\mathrm {r}}N_{\mathrm {t}} \times 1}\) with entries
$$ [\boldsymbol{h}]_{i_{1}+N_{\mathrm{r}} i_{2}} = [ \boldsymbol{H} ]_{i_{1},i_{2}}, $$
where i1=0,…,Nr−1, i2=0,…,Nt−1, while the 2D-IDFT matrix is \(\boldsymbol {F} \in \mathbb {C}^{N_{\mathrm {t}} N_{\mathrm {r}} \times M_{2} M_{1}} \) with entries
$$ [\boldsymbol{F}]_{(i_{1}+N_{\mathrm{r}}i_{2},f_{1}+M_{1}f_{2})} = \prod_{d \in \{1, 2\}} e^{\frac{2 \pi j f_{d}i_{d}}{M_{d}}}\,, $$
where fd=0,…,Md−1, d=1,2. Lastly, we define the column vector v of length M1M2 with L non-zero entries at position \(\bar {\Omega }_{1}^{l} + M_{1} \bar {\Omega }_{2}^{l}\), for l=1,…,L, i.e.,
$$ [\boldsymbol{v}]_{\bar{\Omega}_{1}^{l}+M_{1}\bar{\Omega}_{2}^{l}} = \alpha_{l}, $$
$$ \bar{\Omega}_{1}^{l} = \langle \eta_{1}^{l} M_{1} \rangle \,, \bar{\Omega}_{2}^{l} = \langle \eta_{2}^{l} M_{2} \rangle \,, $$
and 〈x〉 denotes the integer part of x. From (1), we can approximate the channel vector as
$$ \boldsymbol{h} \approx \boldsymbol{F}\boldsymbol{v}. $$
We will denote with v as the dual channel in the M1×M2 domain. Note that the dual channel is sparse as it contains only L non-zero entries.
The approximation (10) stems from the rounding of (9), i.e., from approximating \(\Omega _{d}^{l}\) with \(\bar {\Omega }_{d}^{l}\). As Md→∞, the approximation becomes more accurate. Moreover, we have used DFTs with Md points along dimension d, as for the dual channel representation, in order to make a simpler comparison among various channel estimation schemes. Lastly, note that v is not the vectorial representation of the virtual channel, since the DFT used to obtain the virtual channel does not invert the IDFT of (10): in fact, the DFT is taken on the reduced set of Nr×Nt samples, thus yielding the periodic sinc's of (4).
Similarly to the vectorial representation of the channel, we define
$$ \boldsymbol{h}' = \text{vec}(\boldsymbol{H'})\approx \boldsymbol{F}\boldsymbol{v}+\boldsymbol{n'}, $$
where indices of H′ to obtain h′ are selected similarly to (6). From (11), we observe that the LS estimate is a noisy version of a linear transformation of the dual channel v.
We propose an algorithm that improves the LS channel estimate by exploiting the sparsity of v. In particular, we define by \(\hat {\boldsymbol {v}}\) the new estimate of the dual channel v and write the sparse channel estimation problem as
$$ \widehat{\boldsymbol{v}}=\underset{\boldsymbol{v}'}{\text{argmin}} \left(\Vert \boldsymbol{F}\boldsymbol{v}'- \boldsymbol{h}' \Vert_{2}^{2}+ \rho\Vert \boldsymbol{v}' \Vert_{0}\right), $$
where ∥v′∥0 is the ℓ0-norm that counts the non-zero elements in v′ and ρ is a parameter that controls the sparsity of the solution. This problem formulation aims at minimizing the MSE between the estimated channel and the LS channel estimate, under a constraint on the sparsity of vector \(\hat {\boldsymbol {v}}\), imposed by the norm-zero term.
Unfortunately, problem (12) is non-convex and NP-hard [25]. Thus, we relax the problem by replacing the ℓ0-norm with the ℓ1-norm obtaining the LASSO problem
$$ \widehat{\boldsymbol{v}}=\underset{\boldsymbol{v}}{\text{argmin}} \left(\Vert \boldsymbol{F}\boldsymbol{v}-\boldsymbol{h}' \Vert^{2}_{2}+ \rho\Vert \boldsymbol{v} \Vert_{1}\right), $$
with \(||\boldsymbol {v}||_{1} = {\sum \nolimits }_{i=0}^{M_{1}M_{2}-1} |v_{i}|\), which is now convex.
Note that estimating the dual channel opens the possibility of reducing the training overhead. We observe that systems with different numbers of antennas (placed at the same position) share the same dual channel. Thus, once we have an estimate of v, we can change F to obtain the channel estimate for a different antenna setting. Indeed, we can use a fewer transmit antennas to transmit the training sequence, then obtain an estimate of the dual channel and finally project the estimate into a larger number of antennas by modifying the size of the IDFT matrix in (10). Typically paths are concentrated in clusters on part of the dual channel, thus by an iterative channel estimation procedure, we can beamform training signals in the part of the dual channel covered by the clusters.
Solution of the sparse channel estimation problem
A vast literature is available for the solution of the sparse channel estimation problem (13), see for example the survey [26]. We propose here to use two recent and efficient methods based on the gradient descent algorithm with improved convergence speed, namely the AGDAR algorithm (also named FISTA with adaptive restart [27]) and the RCS algorithm [18].
Accelerated gradient descent with adaptive restart
The AGDAR algorithm [27] has been developed to solve problems where the objective function is the sum of a differentiable function and a general but simple closed convex function.
Here, we briefly summarize the motivation of the AGDAR algorithm. We first observe that the minimization problem \(\min _{\boldsymbol {x} \in \mathbb R^{N}} f(\boldsymbol {x})\) when f(·) is convex and smooth can be solved by the gradient descent algorithm that iteratively updates the solution, computing at iteration p
$$ \boldsymbol{x}_{p} = \boldsymbol{x}_{p-1} - t \nabla f(\boldsymbol{x}_{p-1})\,, $$
where t is the step size and ∇f(x) is the gradient of f(·) computed in x. An alternative formulation of (14) is provided by the proximal form [28]
$$ \boldsymbol{x}_{p} = \text{argmin}_{\boldsymbol{x}} \nabla f(\boldsymbol{x}_{p-1})^{T}(\boldsymbol{x}-\boldsymbol{x}_{p-1}) + \frac{1}{2t} ||\boldsymbol{x}- \boldsymbol{x}_{p-1}||^{2}_{2}\,. $$
Now, in order to minimize f(x)+g(x) with f(·) still convex and smooth but g(·) convex, nondifferentiable, and lower semicontinuous, the proximal form must be modified as follows [28]
$$\begin{array}{*{20}l} {}\boldsymbol{x}_{p} =& \text{argmin}_{\boldsymbol{x}} \nabla f(\boldsymbol{x}_{p-1})^{T}(\boldsymbol{x}-\boldsymbol{x}_{p-1}) + \frac{1}{2t} ||\boldsymbol{x}- \boldsymbol{x}_{p-1}||^{2}_{2} \\ {}&+\! g(\boldsymbol{x}) \,=\, \text{argmin}_{\boldsymbol{x}} g(\boldsymbol{x})\,+\,\! \frac{1}{2t} ||\boldsymbol{x}\,-\,\! (\boldsymbol{x}_{p-1}\,-\, t \nabla f(\boldsymbol{x}_{p-1}))||^{2}_{2} \,. \end{array} $$
In general, this optimization problem may be hard to solve; however, when g(x)=ρ||x||1, problem (16) is efficiently solved by splitting it into N separate one-dimensional problems for each entry of \(\boldsymbol {x} \in \mathbb R^{N}\), i.e.,
$$ [\boldsymbol{x}_{p}]_{n} = \text{argmin}_{x}\; \rho |x| + \frac{1}{2t} |x - z_{n}|^{2} \,, $$
with zn=[xp−1−t∇f(xp−1)]n. Problem (17) can be solved in closed form [29] as \([\boldsymbol {x}_{p}]_{n} = \mathcal T_{\rho t}(z_{n})\), with the shrinkage operator defined as
$$ \mathcal T_{a} (x) = \text{sgn}(x) \text{max}(|x| -a,0)\,, $$
obtaining the iterative shrinkage-thresholding algorithm (ISTA) algorithm. This solution can be made faster by applying the Nesterov acceleration principle [30]: instead of using the gradient descent (14), xp is updated as a linear combination of the gradient descent terms (14) at the current and previous iterations, i.e.,
$$ \boldsymbol{y}_{p} = \boldsymbol{x}_{p-1} - t \nabla f(\boldsymbol{x}_{p-1})\,, $$
(19a)
$$ \boldsymbol{x}_{p} = (1- \gamma_{p-1}) \boldsymbol{y}_{p} + \gamma_{p-1} \boldsymbol{y}_{p-1}\,. $$
(19b)
Combining this approach with ISTA, we obtain the FISTA algorithm where \([\boldsymbol {y}_{p}]_{n} = \mathcal T_{\rho t}(z_{n})\) and xp is updated using (19b) It turns out that this approach fastens the convergence of the algorithm for example by choosing as linear combination coefficients [30]
$$ \gamma_{p} = \left\{\begin{array}{ll} 0 &\ p= 0 \\ \frac{1 - \theta_{p-1}}{\theta_{p}} &\ p >0\,, \end{array}\right. $$
$$ \theta_{p} =\left\{\begin{array}{ll} 1 &\ p = 0 \\ \left(1+\sqrt{1+4 \theta_{p-1}^{2}}\right)/2 &\ p > 0\,. \end{array}\right. $$
The explanation of the Nesterov iteration is not very intuitive, and the interested reader can find more details in [30].
The parameter choice (20) is not in general optimal while its optimization is a difficult task. An alternative approach is the adaptive restart technique [27], in which γp is set according to (20) (thus in a suboptimal way) but the FISTA algorithm is restarted whenever the objective function is locally increasing (thus the iterative solution is moving in the wrong direction), i.e., when
$$ \nabla f(\boldsymbol{x}_{p-1})^{T} (\boldsymbol{y}_{p} - \boldsymbol{y}_{p-1}) > 0\,, $$
From (19a) we obtain that the restarting condition (22) can be written as
$$ (\boldsymbol{x}_{p-1}-\boldsymbol{y}_{p})(\boldsymbol{y}_{p}-\boldsymbol{y}_{p-1}) >0\,. $$
The restart consists in resetting θp=1 and using as initial point the last point produced by the algorithm.
Re-weighted compressed sensing
The RCS method is proposed [18] to improve the sparsity of the gradient descent solution of (13). The algorithm weights the entries of xp in the ℓ1-norm in (13) in order to better approximate the ℓ0-norm term in (12). Therefore, instead of solving (12), the RCS method aims at solving problem
$$ \widehat{{\boldsymbol{v}}}=\underset{{\boldsymbol{v}'}}{\text{argmin}} \left(\Vert \boldsymbol{F}{\boldsymbol{v}'}-{\boldsymbol{h}'} \Vert_{2}+ \rho\Vert \boldsymbol{D}{\boldsymbol{v}'} \Vert_{1}\right)\,, $$
where the diagonal matrix D contains the weights. This problem can be seen as a ℓ1-norm relaxation of a weighted version of the ℓ0-norm problem (12), i.e.,
$$ \widehat{\boldsymbol{v}}=\underset{\boldsymbol{v}'}{\text{argmin}} \left(\Vert \boldsymbol{F}\boldsymbol{v}'- \boldsymbol{h}' \Vert_{2}^{2}+ \rho\Vert \boldsymbol{D} \boldsymbol{v}' \Vert_{0}\right). $$
As the ℓ0-norm counts the non-zero entries, regardless of their amplitude, for non-zero weights, the two problems (12) and (25) have the same solutions.
About the choice of the weights, they are meant to provide a good approximation of the ℓ0-norm using the (weighted) ℓ1-norm. Therefore, imposing that at solution
$$ \Vert \boldsymbol{D} \hat{\boldsymbol{v}} \Vert_{1} = \Vert \hat{\boldsymbol{v}} \Vert_{0}\,, $$
we obtain the optimum weights (diagonal entries of matrix D)
$$ [\boldsymbol{D}]_{k,k} = \left\{\begin{array}{ll} \frac{1}{|\hat{v}_{k}|}\,, &\ \hat{v}_{k} \neq 0\,,\\ * &\ \hat{v}_{k} = 0\,, \end{array}\right. $$
where ∗ denotes any non-zero value. However, this choice requires the knowledge of the problem solution \(\hat {\boldsymbol {v}}\), which is not available while solving the problem.
In [18], an iterative approach has been proposed, where the weights are adapted to converge to (27) without knowing the optimal solution. In particular, RCS runs q2 times the AGDAR algorithm, using at each iteration a different set of weights chosen according to (28). It has been shown by extensive simulations over a variety of examples that the following weight adaptation strategy is performing well: starting from D0=diag{[1,…,1]}, for which (24) corresponds to (13), and then at iteration i+1 update the weights as
$$ [\boldsymbol{D}_{i+1}]_{j,j}=\frac{1}{\vert{[\boldsymbol{x}_{p}]_{j}\vert}+\zeta}, \quad j=1, \ldots, M_{1} M_{2}\,, $$
where xp is the solution of (24) for weights Di and ζ is a small number. Note that at convergence (for ζ≈0) we obtain (27).
The implementation of RCS is obtained by running q2 times the AGDAR algorithm and computing the shrinkage function with a weighted parameter, i.e., \(\mathcal T_{w_{n} \rho t}(z_{n})\). The resulting procedure is reported in Algorithm 3.
It has been shown [18] that the RCS algorithm is a majorization-minimization algorithm that iteratively minimizes a simple surrogate function majoring the objective function, and indeed provides in general a better approximation to the original ℓ0-norm problem.
Sectorization and channel tracking
When the antennas at either or both the transmitter and the receiver are transmitting/receiving in a focused direction (in what is known as cell sectorization), the departure and arrival angles are within sub-intervals of [0,2π); therefore, also \(\eta _{d}^{l}\) will take value in sub-intervals of [−1/2,1/2) and the rounding of \(\eta _{d}^{l} M_{d}\) will be in a sub-interval of \(\left (-\frac {M_{d}}{2}, \frac {M_{d}}{2}\right)\). Therefore, vector v can be reduced by eliminating the entries corresponding to values of \(\eta _{d}^{l}\) that are never taken by the channel realization. Correspondingly, the columns of F are removed and the AGDAR algorithm is run over a reduced space, thus increasing its accuracy.
About channel tracking, once the channel has been estimated, it may slowly change due to the variations of the propagation environment. In this case, we can reduce the complexity of the channel estimation and make it more effective by simply tracking its changes rather than starting from scratch its estimation. We propose to focus the search of the paths in the dual channel within intervals around the initial estimates of arrival and departure angles. Therefore, for both AGDAR and RCS approaches, we have a reduction of the dual channel vector v. Indeed, this is similar to sectorization; however, for channel tracking we must consider multiple angle intervals, one for each initially estimated path.
Sectorization or tracking are also possible for SPC and JPC, wherein the search of the peak positions Ωl will be done on a sub-grid of the M1×M2 grid, according to the intervals of \(\eta _{1}^{l}\) and \(\eta _{2}^{l}\). Also in this case, the benefits for the channel estimation process will be a lower probability of periodic sinc misplacement, as the search space is smaller.
The computational complexity of the considered algorithms is evaluated in terms of complex multiplications (CMUX), complex additions (CADD), and comparisons (COM). Let \(M_{tot}=\prod _{d} M_{d}\), where the product is along all dimensions, depending on the use of either UPA or ULA. Hence, using the fast Fourier transform algorithm, a (I)DFT of Mtot samples requires Mtot log2Mtot CMUX, Mtot log2Mtot CADD and no comparison. A summary of the computational complexity of the various algorithms is reported in Table 1, where q3 denotes the effective number iterations over the variable p of the AGDAR and RCS algorithms. In the following section, we will present numerical results also for a complexity comparison among the various techniques.
Table 1 Computational complexity of the channel estimation methods
Numerical results
In this section, we compare the proposed channel estimation techniques by evaluating the MSE in decibels (dB)
$$ \text{MSE} =10\log_{10} \mathbb{E}\left[ || \widehat{\boldsymbol{H}} - \boldsymbol{H} ||_{2}^{2}\right]\,, $$
where \(\mathbb E[\cdot ]\) denotes the expectation operator and \(\hat {\boldsymbol {H}}\) is the estimated channel matrix.
We consider the urban macro cell (UMa), urban micro cell (UMi), rural macro cell (RMa), and Indoor Hotspot (InH) 3GPP channel models [3], with both line-of-sight (LOS) and non-line-of-sight (NLOS). In these scenarios, the number of clusters (typically from 4 to 20) depends on the channel model and the number of sub-paths is 20 per cluster, thus totaling L in the range of tens to hundreds. Note that although the number of sub-paths is large, only three or four sub-paths have a notable power. Therefore de facto, we find the sparse channel model described in this paper and in many literature papers and measurement campaign results. Channels are obtained for a downlink, where the base station (BS) and user equipment (UE) are on the same plane, with parameters defined in Table 2. The average channel gain is unitary, so we assume that transmit power has been adapted to compensate for the path loss; therefore, the average SNR is the reciprocal of the noise power. This also provides that the MSE for the LS estimate is simply the reciprocal of the SNR, which is then not reported in the figures.
Table 2 Simulation parameters
For the proposed AGDAR and RCS algorithms, we use parameters as in Table 2. In the following, we will always consider the same antenna geometry (either ULA or UPA) at both BS and UE, with a different number of antennas at the two ends.
Parameters setting
We first evaluate the impact of the parameters on the channel estimate MSE. The performance of both AGDAR and RCS is determined by the parameter ρ that weights the ℓ1-norm term in the objective function and should be chosen according to the channel sparsity and the operating signal to noise ratio (SNR). For each value of SNR, we have assessed the optimum value of ρ that minimizes the average MSE of the channel estimate. The results are reported in Fig. 2 for both (at both devices) with Nr=16=4×4 (D0=D1=4) and Nt=4=2×2 (D2=D3=2) and M0=M1=M2=M3=8, and ULAs (at both devices) with Nr=16, Nt=4 and M1=M2=32. In this case the channel model is UMi LOS for which the number of clusters is random, between 3 and 12, while the number of sub-paths per cluster is 20, for a maximum total of L=240 paths. We can see a smooth behavior of ρ with respect to the average SNR, that can be described with simple functions for its adaptation to operating conditions. Moreover, as the average SNR increases, the optimal ρ decreases, since the LS estimate is less noisy and a limited sparsification of the channel is required. We have optimized the value of this parameter also for other conditions (e.g., different number of antennas) and forthcoming results are obtained with the optimized ρ.
Choice of ρ. Optimized value of ρ vs SNR, for both UPA, with Nr=16=4×4 (D0=D1=4) and Nt=4=2×2 (D2=D3=2) and M0=M1=M2=M3=8, and ULAs with Nr=16, Nt=4 and M1=M2=32. UMi LOS model
A second relevant parameter for both AGDAR and RCS is the maximum number of iterations q1. Figure 3 shows the average MSE as a function of the number of maximum allowed iterations q1 for ULAs with Nr=16, Nt=4, and various DFT sizes. We note that for all DFT sizes the MSE is flooring as q1 increases: in particular, with log10q1=2.5, all algorithms converge to the minimum MSE. We also observe that the required number of iterations grows with (M1,M2), and RCS achieves a lower asymptotic MSE and converges faster.
Effects of iterations q1. MSE vs the maximum number of iterations q1 for ULA with Nr=32,Nt=4 using AGDAR (solid lines) and RCS (dotted lines). UMi LOS model
For a comparison with SPC and JPC, we have also optimized parameter I, i.e., the number of detected paths. Note that the existing literature typically assumes the knowledge of L and sets I=L. We instead observe that I does not necessarily correspond to the number of paths, since small paths can be neglected as may be easily confused with noise artifacts. This is particularly true in the 3GPP channel model, where many clusters and sub-paths are present, most of which however have a very limited power. Figure 4 shows the value of I that minimizes the average MSE versus the average SNR for various values of M=M1=M2, using ULAs with Nr=16, Nt=4 and SPC. The channel model is UMi LOS. We observe that we need a large value of I when the SNR is high, as the considered channel model has many (sub-)paths, and at high SNRs, they can be distinguished from the noise. Also, note that the optimal value of I is decreasing as M increases: indeed, for higher M the approximation between positions Ωl and \(\hat {\boldsymbol {\Omega }}_{l}\) becomes more accurate, thus fewer sinc functions (closer to the actual number of paths) is enough (and better) for channel estimation. The results reported in the rest of this section are obtained with the optimized value of I.
Choice of I. Optimal I for both SPC and JPC methods and various values of M=M1=M2 using ULA with Nr=16, Nt=4. UMi LOS model
Lastly, we consider the optimization of the DFT size, i.e., the number of points used to approximate \(\eta _{d}^{l}\) in all considered channel estimation methods. In Appendix B, we provide the analytical derivation of the MSE as a function of M, for ULA and a Rayleigh fading channel with a small number of taps. Figure 5 shows the MSE as a function of M=M1=M2 for ULA with Nt=8, Nr=2 and the channel model described in Appendix B. As expected the MSE decreases for a higher value of M, as the quantization error is reduced, flooring for high values of M. We then have considered a ULA system with Nr=32 and Nt=4 and a DFT size multiple μ of the number of antennas, i.e., M1=μNr and M2=μNt. For a UMi LOS model, Fig. 6 shows the average MSE for an SNR of 0, 10, and 20 dB, and different channel estimation techniques. We observe that in all cases by increasing μ we obtain a better channel estimate, thanks to a better quantization precision of either the virtual channel domain (for SPC and JPC) or the values of \(\eta _{d}^{l}\) (for both AGDAR and RCS). Moreover, both AGDAR and RCS methods achieve a lower MSE than SPC and JPC techniques at both low and high SNRs, thanks to their better exploitation of compact channel representation in the dual domain. The RCS has almost negligible performance improvement with respect to AGDAR, as both have a gain from 3 to 5 dB with respect to interference cancelation techniques. Note that the gain is more remarkable at a lower SNR, showing that the compressed-sensing techniques are able to better reject the noise. Lastly, note that JPC has an almost negligible improvement over SPC; thus, we can conclude that the detection of the peaks is already accurate when performed sequentially rather than in parallel. Overall, we conclude that μ=2 already provides close-to-optimal results for all methods. Similar observations can be drawn from Fig. 7, where we report the MSE for a UPA configuration with 2×2 antennas at the UE and 6×6 antennas at BS in a UMa LOS channel model.
Analysis of choice of M. MSE vs M=M1=M2 for the scenario of Appendix B, with Nt=16 and Nr=4
Choice of M for Nr=32 and Nt=4. MSE vs μ for ULA with Nr=32 and Nt=4 at an SNR of 0 (dotted lines), 10 (solid lines) and 20 dB (dashed lines) and M1=μNr and M2=μNt. UMi LOS model
Choice of M for Nr=32 and Nt=4. MSE vs μ for UPA with Nr=36 and Nt=4 at an SNR of 0 (dotted lines), 10 (solid lines) and 20 dB (dashed lines) and M1=μNr and M2=μNt. UMa LOS model
In order to show the performance of the proposed solution in a scenario with a large number of antennas, Fig. 8 shows the MSE as a function of μ for ULA with Nr=128 and Nt=16 and the UMi LOS model. Also in this case, we can appreciate the advantage of all techniques with respect to the LS method, as we recall that for LS the MSE is the reciprocal of the SNR, thus 0 dB in correspondence of the dotted lines, −10 dB for the solid lines and −20 dB for the dashed lines. Indeed, a higher number of antennas with respect to Fig. 6 increases the gain of the other channel estimation techniques with respect to LS. About the comparison among the various methods, we can derive similar conclusions as those of Fig. 6, confirming also that other results are representative of a massive MIMO scenario.
Choice of M for Nr=32 and Nt=4. MSE vs μ for ULA with Nr=128 and Nt=16 at an SNR of 0 (dotted lines), 10 (solid lines), and 20 dB (dashed lines) and M1=μNr and M2=μNt. UMi LOS model
As we already discussed, sectorization provides a faster and more accurate search of the channel paths. Here, we consider a system where channel angles are uniformly distributed in intervals of 6, 60, and 180 degrees. We have L=14 paths, with independent Gaussian-distributed amplitudes αl: by this simple channel model, we better capture the effects of sectorization and channel tracking.
Figure 9 shows the resulting MSE as a function of M=M1=M2 for the various systems, when the average SNR is 10 dB. We observe that sectorization indeed reduces the MSE of all channel estimates, and sectors of 6° provide a MSE of 10 to 16 dB smaller than that of 6° sectors. Comparing the various techniques we observe that with large (180°) and small (6°) sectors all techniques take advantage of the sectorization in a similar way, while for intermediate values (60°) the compressed-sensing methods have a higher gains than interference cancelation methods.
Effects of sectorization. MSE vs M=M1=M2 for ULAs and various channel estimation techniques, when the average SNR is 10 dB when sectorization is used, with 6 (solid lines), 60 (dashed lines), and 180° (dotted lines). Nr=Nt=16
We also consider channel tracking where, after an initial channel estimation performed according to the various considered techniques and with an angle span of 360°, when channels are time-invariant. Estimators are run using an angle span of 6° around the angles of each path. Figure 10 shows the MSE of the channel estimates at an average SNR of 10 dB, and for various values of M1 and M2. We observe that, thanks to the search over a smaller angle span both SPC and JPC achieve similar performance to the proposed approaches, a phenomenon that we already observed with sectorization. Also, the difference between RCS and AGDAR is further reduced, again because of the easier task of channel estimation in this case. We still note instead a high sensitivity to the DFT-size, which corresponds to an accuracy in the estimate of the angles of arrival and departure. Lastly, both sectorization and channel tracking reduce the complexity of the proposed solutions, as path search operations can be performed on a reduced space.
Effects of tracking. MSE for ULAs of various channel estimation techniques, when the average SNR is 10 dB with channel tracking at 6°, and various values of M. Nr=16 and Nt=4
3GPP channel scenarios comparison
Until now, we considered only the UMi LOS channel model: in this section instead, we consider also the other 3GPP channel models. Figure 11 shows the average MSE for various algorithms and UPAs with Nt=8×8 at the BS and Nr=2×2 at the UE, and M0=M1=10 and M2=M3=4. We compare various channel estimation techniques for an average SNR of 10 dB. At this intermediate SNR value, we observe that the proposed AGDAR and RCS significantly outperform both SPC and JPC for all the considered channel models, by 6 to 7 dB. The indoor-office model (InH Mixed) has a few significant taps with reduced dispersion, a favorable condition for SPC and JPC, which exhibit a reduced gap with respect to the proposed (still better performing) techniques. On the other hand, dispersive channels with many low-power taps (UMi Street Canyon NLOS) make the channel estimation more problematic for SPC and JPC, while can be handled very efficiently by the compressed-sensing techniques, thanks to their ability to better distinguish between noise and channel components. This provides a gain of 7 dB between SPC and AGDAR. We also note that both AGDAR and RCS have comparable performance across all channel models. Similar results are obtained at low and high SNR (results are not reported here for the sake of conciseness).
Effects of channel models. MSE under different channel models with SNR of 10 dB for UPA with Nt=8×8 at the BS and Nr=2×2 at the UE, and M0=M1=10 and M2=M3=4
Complexity comparison
In order to assess the complexity of the various channel estimation methods, we first report in Fig. 12 the effective number of iterations q3 of AGDAR as a function of the maximum allowed number of iterations q1, for ULAs with Nr=32, Nt=4, and average SNR of 10 dB on a UMi LOS channel model. As expected, when the number of allowed iterations increases, also the number of effective iterations increases, until reaching a floor. Moreover, a higher value of M1 and M3 requires a higher number of iterations. We also observe that RCS requires fewer iterations (while achieving a better performance in terms of MSE) as the reweighting fastens the convergence.
Choice of q3. Number of iterations q3 (in log scale) vs the number iterations q1 (in log scale) for a ULA system with Nr=32 and Nt=4 for AGDAR (solid lines) and RCS (dotted lines). UMi LOS model
In Fig. 13, we compare the complexity of the various scenarios, by considering the number of complex multiplications, as derived in Section 4.4, as a function of the number of canceled paths I. Parameters are those of Fig. 3, so that we can compare the achieved MSE of the various schemes. We observe that the number of multiplications grows exponentially for the SPC and JPC techniques. We also report the number of complex multiplications for the AGDAR and RCS methods that do not depend on I. We note that the AGDAR has a remarkably lower complexity than other methods. However, we also notice that for M1=64 and M2=8, RCS has a significantly higher complexity with respect to the other methods. When comparing the MSE performance (Fig. 3), we see that AGDAR achieves a much lower MSE than SPC and JPC methods for a lower complexity (in terms of CMUX and CADD).
Computational complexity. Average number of complex multiplications as function of I for various systems with (M1=32, M2=4) (solid lines) and (M1=64, M2=8) (dotted lines), ULA system with (Nr=32, Nt=4) and SNR of 10 dB
In this paper, we have proposed channel estimation techniques for mm-Wave massive MIMO systems, based on a CS approach, where we have exploited the sparse nature of the channel, considering in particular the small number of channel paths at those frequencies. Efficient innovative solutions based on the adaptive restart of the Nesterov accelerated gradient algorithm have been explored. Numerical results have shown the superiority of the proposed approaches with respect to existing procedures, with similar or lower computational complexity. We have also considered the effects of sectorization and proposed a channel tracking technique that exploits slow channel variations.
Appendix A: CS channel estimation for UPA
For a UPA system, we denote as D2 and D3 the number of transmit antennas along the y- and z-axes, respectively; therefore, Nt=D2×D3. Similarly D0 and D1 are the numbers of antennas at the receiver along the two axes; therefore, Nr=D0×D1. The channel matrix has entries
$$ \left[ \boldsymbol{H}^{\text{UPA}} \right]_{i_{1}+D_{1}i_{0},i_{3}+D_{3}i_{2}} \,=\, \sum\limits_{l=1}^{L} \alpha_{l}e^{2\pi \eta_{0}^{l}i_{0}j} e^{2\pi \eta_{1}^{l}i_{1}j} e^{2\pi \eta_{2}^{l}i_{2}j}e^{2\pi \eta_{3}^{l}i_{3}j}. $$
The channel HUPA is transformed into the channel column vector \( {\boldsymbol {h}^{\text {UPA}}}=\text {vec}\left (\boldsymbol {H}^{\text {UPA}}\right) \in \mathbb {C}^{N_{r}N_{\mathrm {t}} \times 1}\) with entries
$$ \left[\boldsymbol{h}^{\text{UPA}}\right]_{i_{0}+D_{0}i_{1}+D_{0}D_{1}i_{2}+D_{2}D_{1}D_{0}i_{3}} = \left[ \boldsymbol{H}^{\text{UPA}} \right]_{i_{1}+D_{1}i_{0},i_{3}+D_{3}i_{2}}. $$
We also define the 4D-DFT matrix as \(\boldsymbol {F}^{\text {4D}} \in \mathbb {C}^{N_{r}N_{\mathrm {t}} \times (M_{0} M_{1} M_{2}M_{3})}\) with entries
$$\begin{array}{*{20}l} &\left[\boldsymbol{F}^{4D}\right]_{(i_{0}+D_{0}i_{1}+D_{1}D_{0}i_{2}+D_{2}D_{1}D_{0}i_{3},f_{0}+M_{0}f_{1}+M_{1}M_{0}f_{2}+M_{2}M_{1}M_{0}f_{3})}\\&= \prod_{d=0}^{3} e^{-\frac{2 \pi i_{d}f_{d}j}{M_{d}}}. \end{array} $$
Lastly, we define the column vector vUPA of length M0M1M2M3 with L non-zero entries, namely
$$ \left[\boldsymbol{v}^{\text{UPA}}\right]_{\bar{\Omega}_{0}^{l}+M_{0}\bar{\Omega}_{1}^{l}+M_{1}M_{0}\bar{\Omega}_{2}^{l}+M_{2}M_{1}M_{0}\bar{\Omega}_{3}^{l}} = \alpha_{l} $$
$$ \bar{\Omega}_{0}^{l} \,=\, \left\langle \eta_{0}^{l} M_{0} \right\rangle\,, \bar{\Omega}_{1}^{l} \,=\, \left\langle \eta_{1}^{l} M_{1} \right\rangle\,, \bar{\Omega}_{2}^{l} \,=\, \left\langle \eta_{3}^{l} M_{3} \right\rangle, \bar{\Omega}_{3}^{l} \,=\, \left\langle \eta_{3}^{l} M_{3} \right\rangle. $$
From (30), we can approximate the channel as
$$ \boldsymbol{h}^{\text{UPA}} \approx \boldsymbol{F}^{\text{4D}}\boldsymbol{v}^{\text{UPA}}. $$
Similarly, we can define \(\boldsymbol {h}^{'\text {UPA}}\phantom {\dot {i}\!}\) and \(\boldsymbol {v}^{'\text {UPA}}\phantom {\dot {i}\!}\) for the LS estimate of the channel and its dual representation. The AGDAR and RCS algorithms for UPA can be obtained as described in Section 4, with F, h′, and v′ replaced by F4D, \(\phantom {\dot {i}\!}\boldsymbol {h}^{'\text {UPA}}\), and \(\boldsymbol {v}^{'UPA}\phantom {\dot {i}\!}\), respectively.
Appendix B: On the choice of M d
The choice of the number of DFT points per dimension Md is important to determine the performance of the channel estimation algorithm. From (34), we have that the AGDAR solution approximates \(\eta _{d}^{l}\) with a quantized value taken over Md possible points. Therefore, we can write the quantization error on the l-th path as
$$ \epsilon^{l}_{d} = \eta^{l}_{d} - \frac{\left\langle \eta_{d}^{l} M_{d} \right\rangle}{M_{d}} \,. $$
Focusing now on a scenario wherein ULAs are used at both transmitter and receiver, assuming that all other estimates (i.e., the amplitude angle estimates of each path) are correct and the only imperfection is the quantization error, from (36), the estimated channel can be written as (compared with (1))
$$ [ \hat{\boldsymbol{H}}]_{i_{1},i_{2}} =\sum\limits_{l=1}^{L} \alpha_{l} e^{2\pi \left(\eta_{1}^{l} - \epsilon^{l}_{1}\right) i_{1}j}e^{2\pi \left(\eta_{2}^{l} - \epsilon^{l}_{2}\right) i_{2}j} $$
and using the definition of H of (1), we have that the MSE of the channel estimate is
$$ \begin{aligned} \gamma^{\text{(q)}} & = \frac{1}{N_{\mathrm{r}}N_{\mathrm{t}}}\sum\limits_{i_{1}=0}^{N_{\mathrm{r}}-1} \sum\limits_{i_{2}=0}^{N_{\mathrm{t}}-1} {\mathbb E}\left[|[\boldsymbol{H}]_{i_{1},i_{2}}- \hat{\boldsymbol{H}}]_{i_{1},i_{2}} |^{2}\right] \\ & = \frac{1}{N_{\mathrm{r}}N_{\mathrm{t}}}\sum\limits_{i_{1}=0}^{N_{\mathrm{r}}-1} \sum\limits_{i_{2}=0}^{N_{\mathrm{t}}-1} {\mathbb E}\left[ \left|\sum\limits_{l=1}^{L} \alpha_{l} e^{2\pi \eta_{1}^{l}i_{1}j}e^{2\pi \eta_{2}^{l}i_{2}j}\left(1 - e^{-2\pi \epsilon^{l}_{1} i_{1}j}e^{-2\pi \epsilon^{l}_{2}i_{2}j}\right) \right|^{2}\right]. \end{aligned} $$
Now, assuming that the amplitudes and angles are independent random variables, we have
$$ \begin{aligned} \gamma^{\text{(q)}} = & \frac{1}{N_{r}N_{\mathrm{t}}} \sum\limits_{i_{1}=0}^{N_{\mathrm{r}}-1} \sum\limits_{i_{2}=0}^{N_{\mathrm{t}}-1}\sum\limits_{l=1}^{L} \sigma_{\alpha}^{2}(l) {\mathbb E}\left[ \left| 1 - e^{-2\pi \epsilon^{l}_{1} i_{1}j}e^{-2\pi \epsilon^{l}_{2}i_{2}j} \right|^{2} \right]\,, \end{aligned} $$
where \(\phantom {\dot {i}\!}\sigma _{\alpha }^{2}(l) = {\mathbb E}[|\alpha _{l}|^{2}]\). Expanding the expectation we have
$$ \begin{aligned} \gamma^{\text{(q)}} = & \frac{1}{N_{\mathrm{r}}N_{\mathrm{t}}} \sum\limits_{i_{1}=0}^{N_{\mathrm{r}}-1} \sum\limits_{i_{2}=0}^{N_{\mathrm{t}}-1}\sum\limits_{l=1}^{L}\sigma_{\alpha}^{2}(l) {\mathbb E}\left\{ \left[1 - \cos\left(2\pi \left(\epsilon^{l}_{1} i_{1}+ \epsilon^{l}_{2}i_{2}\right)\right)\right]^{2}\right.\\&\left. + \sin^{2}\left(2\pi \left(\epsilon^{l}_{1} i_{1}+ \epsilon^{l}_{2}i_{2}\right)\right) \right\} \\ = & \frac{1}{N_{\mathrm{r}}N_{\mathrm{t}}} \sum\limits_{i_{1}=0}^{N_{\mathrm{r}}-1} \sum\limits_{i_{2}=0}^{N_{\mathrm{t}}-1}\sum\limits_{l=1}^{L}\sigma_{\alpha}^{2}(l) {\mathbb E}\left\{ 2\left[1 - \cos\left(2\pi \left(\epsilon^{l}_{1} i_{1}+ \epsilon^{l}_{2}i_{2}\right)\right)\right]\right\} \\ = & \frac{1}{N_{\mathrm{r}}N_{\mathrm{t}}} \sum\limits_{i_{1}=0}^{N_{\mathrm{r}}-1} \sum\limits_{i_{2}=0}^{N_{\mathrm{t}}-1} \sum\limits_{l=1}^{L} \sigma_{\alpha}^{2}(l) 2 \times \\ & {\mathbb E}\left\{ 1 - \cos\left[2\pi \left(\eta^{l}_{1} - \frac{\langle \eta_{1}^{l} M_{1} \rangle}{M_{1}}\right) i_{1}+ \left(\eta^{l}_{2} - \frac{\left\langle \eta_{2}^{l} M_{2} \right\rangle}{M_{2}} \right) i_{2} \right]\right\}\,. \end{aligned} $$
Let us assume that arrival and departure angles (\(\vartheta ^{r}_{l}\) and \(\vartheta ^{t}_{l}\)) are uniformly distributed in the interval [0,2π) and \(\eta _{1}^{l}={\frac {\delta }{\lambda }\cos \vartheta ^{r}_{l}}, \eta _{2}^{l}=-{\frac {\delta }{\lambda }\cos \vartheta ^{t}_{l}}\). Then, the probability density function of \(\eta _{1}^{l}\) and \(\eta _{2}^{l}\) for \(\delta =\frac {\lambda }{2}\) becomes
$$ {p}_{{\eta}_{1}^{l}}(a) = \left\{\begin{aligned} &\frac{\lambda}{\delta \pi \sqrt{1-\lambda^{2} {a}^{2}/{d}^{2}}},\ a \in \left[-1/2,\right. \left.1/2\right) \\ &0 \ \qquad\qquad\qquad\quad\text{otherwise,} \end{aligned}\right. $$
and (40) becomes
$$ \begin{aligned} \gamma^{\text{(q)}} = & \frac{1}{N_{\mathrm{r}}N_{\mathrm{t}}} \sum\limits_{i_{1}=0}^{N_{\mathrm{r}}-1} \sum\limits_{i_{2}=0}^{N_{\mathrm{t}}-1}\sum\limits_{l=1}^{L} \sigma_{\alpha}^{2}(l) \sum\limits_{m_{1}=0}^{M_{1}-1} \sum\limits_{m_{2}=0}^{M_{2}-1} \\ & \int_{m_{1}/M_{1}-1/2}^{(m_{1}+1)/M_{1}-1/2} \int_{m_{2}/M_{2}-1/2}^{(m_{2}+1)/M_{2}-1/2} p_{\eta_{1}^{l}}(a) p_{\eta_{2}^{l}}(b) \times \\ &\left\{ 1 \,-\, \cos\left[2\pi \left(a \,-\, \frac{m_{1}}{M_{1}}\right) i_{1}\,+\, \left(b \,-\, \frac{m_{2}}{M_{2}} \right) i_{2} \right]\right\} da db\,. \end{aligned} $$
This MSE provides a guideline for the choice of Md, as we must have at least γ(q)>σ2, so that quantization does not introduce more errors (in terms of its power) than noise already present.
Note that other configurations (e.g., UPA on one side and ULA on the other side) can be obtained with similar derivations.
In general, the number of symbols can be larges than Nt but we consider here this simpler case for the sake of conciseness.
3GPP:
Third-generation partnership project
AGDAR:
AWGN:
Additive white Gaussian noise
BPDN:
Basis pursuit denoise
CBP:
Continuous basis pursuit
Compressed sensing
DFT:
Discrete Fourier transform
FISTA:
Fast iterative shrinkage-thresholding algorithm
IDFT:
Inverse discrete Fourier transform
InH:
Indoor Hotspot
ISTA:
Iterative shrinkage-thresholding algorithm
JPC:
Joint peak cancelation
LASSO:
Least absolute shrinkage and selection operator
Line-of-sight
Least square
Multiple-input-multiple-output
ML:
MSE:
NLOS:
Non-line-of-sight OMP: Orthogonal matching pursuit
RCS:
RF:
RMa:
Rural macro cell
SNR:
SPC:
Single peak cancelation
SSAMP:
Sparsity adaptive matching pursuit
User equipment
ULA:
Uniform linear array
UMa:
Urban macro cell
UMi:
Urban micro cell
UPA:
Uniform planar array
T. S. Rappaport, J. N. Murdock, F. Gutierrez, State of the art in 60-ghz integrated circuits and systems for wireless communications. Proc. IEEE. 99(8), 1390–1436 (2011).
S. Rangan, T. S. Rappaport, E. Erkip, Millimeter-wave cellular wireless networks: Potentials and challenges. Proc. IEEE. 102(3), 366–385 (2014).
Tecnical report, 5G; study on channel model for frequencies from 0.5 to 100 GHz (3GPP TR 38.901 version 14.3.0 release 14) (2018).
R. W. Heath, N. González-Prelcic, S. Rangan, W. Roh, A. M. Sayeed, An overview of signal processing techniques for millimeter wave MIMO systems. IEEE J. Sel. Top. Sig. Proc. 10(3), 436–453 (2016). https://doi.org/10.1109/JSTSP.2016.2523924.
A. Alkhateeb, O. E. Ayach, G. Leus, R. W. Heath, Channel estimation and hybrid precoding for millimeter wave cellular systems. IEEE J. Sel. Top. Sig. Proc. 8(5), 831–846 (2014). https://doi.org/10.1109/JSTSP.2014.2334278.
S. Montagner, N. Benvenuto, S. Tomasin, in Proc 2015 IEEE Int. Conf. on Communication Workshop (ICCW). Taming the complexity of mm-wave massive MIMO systems: Efficient channel estimation and beamforming, (2015), pp. 1251–1256. https://doi.org/10.1109/ICCW.2015.7247349.
D. De Donno, J. P. Beltrán, D. Giustiniano, J. Widmer, in 2016 IEEE International Conference on Communications Workshops (ICC), Kuala Lumpur. Hybrid analog-digital beam training for mmWave systems with low-resolution RF phase shifters, (2016), pp. 700–705. https://doi.org/10.1109/ICCW.2016.7503869.
J. Lee, G. T. Gil, Y. H. Lee, Channel estimation via orthogonal matching pursuit for hybrid MIMO systems in millimeter wave communications. IEEE Trans. Commun. 64(6), 2370–2386 (2016). https://doi.org/10.1109/TCOMM.2016.2557791.
J. Palacios, D. De Donno, D. Giustiniano, J. Widmer, in 2016 IEEE 27th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), Valencia. Speeding up mmWave beam training through low-complexity hybrid transceivers, (2016), pp. 1–7. https://doi.org/10.1109/PIMRC.2016.7794709.
S. Sun, T. S. Rappaport, in 2017 IEEE International Conference on Communications Workshops (ICC Workshops), Paris. Millimeter Wave MIMO channel estimation based on adaptive compressed sensing, (2017), pp. 47–53. https://doi.org/10.1109/ICCW.2017.7962632.
X. Li, J. Fang, H. Li, P. Wang, Millimeter wave channel estimation via exploiting joint sparse and low-rank structures. IEEE Trans. Wirel. Commun. 17(2), 1123–1133 (2018). https://doi.org/10.1109/TWC.2017.2776108.
G. Destino, M. Juntti, S. Nagaraj, in Proc 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP). Leveraging sparsity into massive MIMO channel estimation with the adaptive-LASSO, (2015), pp. 166–170. https://doi.org/10.1109/GlobalSIP.2015.7418178.
Z. Marzi, D. Ramasamy, U. Madhow, Compressive channel estimation and tracking for large arrays in mm-wave picocells. IEEE J. Sel. Top. Sig. Proc.10(3), 514–527 (2016). https://doi.org/10.1109/JSTSP.2016.2520899.
S. Malla, G. Abreu, in Proc 2016 Int. Symposium on Wireless Comm. Systems (ISWCS). Channel estimation in millimeter wave MIMO systems: Sparsity enhancement via reweighting, (2016), pp. 230–234. https://doi.org/10.1109/ISWCS.2016.7600906.
Z. Gao, L. Dai, Z. Wang, in Proc 2016 IEEE Int. Conf. on Commun. (ICC). Channel estimation for mmwave massive MIMO based access and backhaul in ultra-dense network, (2016), pp. 1–6. https://doi.org/10.1109/ICC.2016.7511578.
M. Kokshoorn, H. Chen, Y. Li, B. Vucetic, Beam-on-graph: Simultaneous channel estimation for mmwave MIMO systems with multiple users. IEEE Trans. Commun. PP(99), 1–1 (2018). https://doi.org/10.1109/TCOMM.2018.2791540.
Y. Nesterov, Gradient methods for minimizing composite objective function. Math. Program. Ser. B. 140:, 125–161 (2007).
E. J. Candès, M. B. Wakin, S. P. Boyd, Enhancing sparsity by reweighted ℓ 1 minimization. J. Fourier Anal. Appl.14(5), 877–905 (2008). https://doi.org/10.1007/s00041-008-9045-x.
A. M. Sayeed, Deconstructing multiantenna fading channels. IEEE Trans. Sig. Process. 50(10), 2563–2579 (2002). https://doi.org/10.1109/TSP.2002.803324.
J. Mo, P. Schniter, N. G. Prelcic, R. W. Heath, in Proc 2014 48th Asilomar Conference on Signals, Systems and Computers. Channel estimation in millimeter wave MIMO systems with one-bit quantization, (2014), pp. 957–961. https://doi.org/10.1109/ACSSC.2014.7094595.
C. A. Balanis, Antenna Theory: Analysis and Design (Wiley-Interscience, Hoboken, New Jersey, 2005).
Y. S. Cho, J. Kim, W. Y. Yang, C. G. Kang, MIMO-OFDM Wireless Communications with MATLAB (Wiley, Hoboken, New Jersey, 2010).
E. Björnson, E. G. Larsson, T. L. Marzetta, Massive mimo: ten myths and one critical question. IEEE Commun. Mag. 54(2), 114–123 (2016). https://doi.org/10.1109/MCOM.2016.7402270.
D. Tse, P. Viswanath, Fundamentals of Wireless Communication (Cambridge University Press, New York, 2005).
S. Boyd, L. Vandenberghe, Convex Optimization (Cambridge university press, Cambridge, 2004).
M. A. T. Figueiredo, R. D. Nowak, S. J. Wright, Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Sig. Proc.1(4), 586–597 (2007). https://doi.org/10.1109/JSTSP.2007.910281.
B. O'Donoghue, E. Candès, Adaptive restart for accelerated gradient schemes. Found. Comput. Math. 15(3), 715–732 (2015). https://doi.org/10.1007/s10208-013-9150-3.
N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim.1(3), 127–239 (2014). https://doi.org/10.1561/2400000003.
A. Chambolle, R. A. D. Vore, N. -Y. Lee, B. J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998). https://doi.org/10.1109/83.661182.
Y Nesterov, A method of solving a convex programming problem with convergence rate o(1/k 2). Sov. Math. Dokl. 27:, 372–376 (1983).
No acknowledgements.
This work has been supported by Huawei Technology, Italy.
No data is available.
Department of Information Engineering, University of Padova, via Gradenigo 6/A, Padua, Italy
Hossein Soleimani
& Stefano Tomasin
Huawei Technologies Italia, Milan, Italy
Danilo De Donno
Consorzio Nazionale Interuniversitario per le Telecomunicazioni, Padua, Italy
Stefano Tomasin
Search for Hossein Soleimani in:
Search for Danilo De Donno in:
Search for Stefano Tomasin in:
The ain contributions of this paper are as follows: The proposal of two new algorithms for the channel estimation in mm-wave systems and the performance evaluation of the proposed algorithms in a 5g scenario. All authors read and approved the final manuscript.;
Correspondence to Stefano Tomasin.
Soleimani, H., De Donno, D. & Tomasin, S. mm-Wave channel estimation with accelerated gradient descent algorithms. J Wireless Com Network 2018, 272 (2018) doi:10.1186/s13638-018-1282-3
mm-Wave | CommonCrawl |
How to transport veterinary drugs in insulated boxes to avoid thermal damage by heating or freezing
Johannes Horak1,2,
Astrid Haberleitner3 &
Günther Schauberger ORCID: orcid.org/0000-0003-2418-36921
The transport of veterinary drugs must comply with the general standards for drug storage. Although many vehicles are equipped with active heating and/or cooling devices assuring recommended storage conditions, simple insulated transport boxes are also often used. In this study, measurements for typical transport boxes were performed under laboratory conditions by the use of a climate chamber for a temperature of −20 °C and 45 °C to investigate the impact of box size, insulation material, liquid vs. dry filling products, filling degree and other parameters on the thermal performance of insulated boxes. Model calculations and instructions are presented to predict the retention time of recommended drug storage temperatures.
The measurements and the model calculations showed that the loading of the transport boxes with additional water bottles to increase the heat capacity is appropriate to prolong the retention time of the recommended temperature range of the drugs. Insulated transport boxes are not suitable to store drugs over a period of more than approximately 12 h. For practical use a recipe is presented to measure the thermal properties of a transport box and the related retention time for which the recommended storage temperatures can be assured.
The following principles for drug transportation in vehicles are recommended: (1) Before transfer into boxes, drugs should always be thermally preconditioned (2) Increase the filling degree of the boxes with thermally preconditioned water bottles or re-usable thermal packs will increase the heat capacity. Do not deep-freeze the bottles or packs below 0 °C to avoid drug freezing due to contact. (3) Open the lid of the boxes only to uncase drugs that are immediately needed. (4) The bigger the box and the higher the filling degree, the longer the retention time of the transport box. (5) Wherever possible, place the drug box at a cool site inside the vehicle. (6) The monitoring of the inside temperature of the transport boxes is recommended. By the proper use of such transport boxes the recommended temperatures can be maintained over one working day.
The transport of veterinary drugs in vehicles must comply with the general standards for drug storage in veterinary dispensaries. Although many veterinary vehicles are equipped with active heating and/or cooling devices (active systems [1]) assuring recommended drug storage conditions, simple insulated transport boxes (passive systems [1]) are also often used in veterinary vehicles [2] as well as in emergency medical service (EMS) vehicles [3]. Such boxes are especially needed for the time when the engine is in non-operating state and the air conditioning of the vehicles is not working (parking time, during consultancy). However, such "simple" transport boxes must assure the maintenance of the appropriate storage temperatures of the drugs kept in them. The thermal properties of the transport boxes, which are exposed to the thermal conditions inside the vehicles, are major factors controlling the retention time for which the recommended storage temperatures can be assured.
The quality of medicines can be negatively affected when subjected to inadequate storage temperatures; thermal degradation and the loss of potency of drugs have been reported following the exceedance of certain temperature thresholds [4,5,6]. Küppers et al. [7] recommended that temperature-sensitive drugs should be replaced after any temperature stress beyond the limits given by the manufacturers (e.g., 25 °C) and that the drugs should be replaced at least once per year. The inactivation process is described by the Arrhenius equation: the higher the temperature, the higher the degradation of active substances [5, 8,9,10]. The recommendations distinguish in general between cool (controlled room temperature) and cold storage conditions defined by the ranges 2 °C to 25 °C and 2 °C to 8 °C, respectively [11, 12].
Beside bags [13], although boxes are used to transport and store temperature-sensitive drugs in veterinary vehicles [2, 3, 14], it often remains unclear whether they are appropriate for this purpose. However, to the knowledge of the authors, no systematic investigations on the suitability or practicability of passive storage boxes for drug transport in vehicles have been published so far. Both exceedance of the upper temperature limit and a shortfall of the temperature to below the lower limit (e.g., freezing) must be considered. The discrepancy between the thermal needs and requirements to store drugs and the realities was shown by Haberleitner et al. [2] and Ondrak et al. [14] for veterinary vehicles.
The dynamic behaviour of the inside temperature of the box depends on two main features. (1) The heat flow rate through the walls of the box depends of the temperature difference between inside and outside temperature, the insulation of the box (wall thickness and thermal conductivity), and the area of the box walls. (2) The heat content depends on the mass and the thermal capacity of the filling. The lower the heat flow rate (1) and the higher the heat capacity of the box (2), the slower the change of the inside temperature.
In this study, the thermal performance of typical transport boxes were specifically investigated under laboratory conditions to reveal the impact of box size, insulation material, liquid vs. dry filling products, filling degree and other parameters. For practical use, a recipe is presented to measure the thermal properties of a transport box and the related retention time for which the recommended storage temperatures can be assured. With this simple model it can be assessed whether a certain transport box is appropriate to carry veterinary drugs without exceeding a certain threshold temperature. The objective of this paper is to contribute to the quality assurance of veterinary drug storage especially in vehicles.
Six insulated boxes considered typical for drug storage use in veterinary vehicles were investigated under laboratory conditions.
The transport boxes differed in the type of insulation material, volume, surface area, wall thickness, and colour (Table 1). With the exception of one box (A), which employed a different insulation technique (vacuum plates), the insulation material was expanded polystyrene with a heat conductivity of λ = 0.028 W K−1 m−1, whereas for a vacuum insulated plate, a value of λ = 0.005 W K−1 m−1 was assumed [15]. A vacuum insulated plate consists of an outer layer of expanded polypropylene foam (EPP) with openings for a vacuum insulated panel which is a form of thermal insulation consisting of a gas-tight enclosure surrounding a rigid core, from which the air has been evacuated [15].
Table 1 Properties of the studied transport boxes
Figure 1 shows two typical transport boxes used in the experiments.
Typical drug transportation boxes. The insulation of box a (left) consists of vacuum plates, and box b (right) is insulated only with polystyrene (EPS). The length of the scale is 33 cm
To determine the thermal properties of the boxes, the time courses of the inner temperature of water-filled 100 mL bottles, which simulate injection vials, at defined positions in the box were measured in a heating (45 °C) or cooling chamber (−20 °C) to provide a constant ambient temperature. The contact area of the box to the wall of the climate chamber was minimised to prevent heat exchange due to conduction, by the use of point bearings.
The temperature was measured at positions P1 to P6 as depicted in Fig. 2. Measurements were performed using a calibrated Fluke Hydra 2620A with 7 channels in 15 s intervals, with channel 7 recording the ambient temperature.
Measurement points inside a transport box. Points P 1 to P 6 denote the 100 mL water-filled bottles (simulating injection vials) in which the temperature sensors were placed
Determination of the thermal properties of the boxes
The time course of the temperature difference T B = Θ B - Θ A between the inside temperature Θ B (water bottle) and the ambient temperature Θ A of the heating chamber (Θ A = 45 °C) or cooling chamber (Θ A = −20 °C) is given by the Newton's law of cooling T B = T B , 0 exp(−γ t) with the temperature difference T B,0 for t = 0 [16]. The thermal constant γ is given by γ = A U m /c and specifies the thermal properties of the transport box. This factor was used to characterise the box and depends on its geometry, the overall coefficient of heat transmission U m (U-value) and the entire heat capacity of its content c (transported drugs). The weighted mean of the overall coefficient of heat transmission U m was calculated with \( {U}_m=\sum_{i=1}^n{A}_i\;{U}_i/\sum_{i=1}^n{A}_i \), where U i is the coefficient of heat transmission of each box wall, and A i is the corresponding wall surface area. The coefficient of heat transmission of a box wall U i is given by \( {U}_i=1/\left(\raisebox{1ex}{$1$}\!\left/ \!\raisebox{-1ex}{${\alpha}_i$}\right.+\sum_{j=0}^n\frac{d_j}{\lambda_j}+\raisebox{1ex}{$1$}\!\left/ \!\raisebox{-1ex}{${\alpha}_o$}\right.\right) \), where α i (α o ) are the coefficients of heat transfer between the air inside (outside) the box and the adjacent surface. The thickness of a material layer is given by d j, and its thermal conductivity by λ j . Since there is almost no forced convection inside the box and in the heating or cooling chamber, the value of α i and α o can be assumed to be α i = α o = 3 W m−2 K−1 [17]. The content's heat capacity is c = ∑ m i c i depending on the mass of the drugs m i and their heat capacity c i .
Following the determination of the time course of the temperature difference T B during a cooling or heating process, the thermal constant γ was determined by a regression analysis. The temperature was measured at the positions depicted in Fig. 2, and the thermal constant γ i was determined for each of these positions i. The maximum value of the thermal constant was selected for further analysis to reflect a worst-case scenario.
The value of γ was also calculated using the mechanical and thermal properties given in Table 2. For the calculation of the entire heat capacity of the content (stored drugs) of the boxes, two different types of drugs were distinguished: liquid drugs, which show a high specific heat capacity close to that of water, and dry drugs (e.g., powder, tablets). For dry drugs, the specific heat capacity was estimated to be c d = 1100 kJ kg−1 K−1; for liquid drugs (independently if the drugs are hydrophilic or lipophilic), it was estimated to be c l = 4000 kJ kg−1 K−1. To increase the heat capacity of the entire box, resulting in a lower thermal constant γ, additional water bottles with a volume of 1000 mL were added.
Table 2 Mechanical and thermal properties of the materials of the insulated boxes and the drugs inside [15,16,17]
The total heat capacity of a box's content was calculated as c = m d c d + m l c l + m w c w where m d and m l denote the mass of dry and liquid drugs (simulated by 100 mL bottles) respectively, and m w the mass of additional 1000 mL water bottles. The corresponding specific heat capacities are shown in Table 2.
The filling degree F was defined as F = c/c maxwith the maximum heat capacity of a boxc max = V i ρ w c w . This assumes that the box's volume V i is filled entirely with water (water density ρ w = 1000 kg m−3, and specific heat capacity of water c w = 4.18 kJ kg−1 K−1). The heat capacity of the air inside a box was neglected. For the measurements inside the climate chamber, the 100 mL bottles as well as the additional 1000 mL water bottles were preconditioned.
Model calculations
Two boxes were modelled (Table 3): box X with a volume of 27 dm3 (inner dimension 3 × 3 × 3 dm3), and box Y with a volume of 54 dm3 (inside dimension 6 × 3 × 3 dm3). The wall thickness is 5 cm, which results in an outside surface area of 0.96 m2 and 1.44 m2, respectively. The coefficient of heat transmission was calculated for expanded polystyrene (EPS) with U m = 0.408 W m −2 K−1.
Table 3 Parameters for the model calculation for the two transport boxes X and Y
For the first calculation, the two boxes, X and Y, were filled only with three 100 mL bottles (as used for injectables), and in the second calculation, the remaining volume was filled with 1000 mL water bottles, resulting in a total of 24 to 49 bottles for boxes X and Y respectively. The mass of a full 100 mL bottle totals approximately 190 g. This yielded a heat capacity of 2.28 kJ K−1 for the two boxes filled only with drugs and 102.6 kJ K−1 for box X and 207.1 kJ K−1 for box Y when filled with additional 1000 mL water bottles (Table 3).
The measurements in the heating (Θ A = 45 °C) and cooling chamber (Θ A = −20 °C) were used to determine the thermal constant γ meas which was then compared to corresponding calculated values γ calc (Table 4). The calculations were carried out with the mechanical and thermal properties (Table 2) of the transport boxes. A comparison of measured and calculated γ values by a regression analyses showed a high agreement with a coefficient of determination of r 2 = 0.843, which is significant at the 0.001 level. The filling degree varied between 4.5% and 17.9%. A mean relative deviation of 22% between γ meas and γ calc was found which we attribute to geometric factors (e.g., thermal bridges due to corners, edges and the cover plate). These are not accounted for the chosen model of the coefficient of heat transmission which assumes a plane wall.
Table 4 Comparison of the measured γ meas and the calculated γ calc thermal constant for the six transport boxes and various relative heat capacities, measured in the heating (30 °C) and/or cooling (−10 °C) chamber
Model calculations were performed to demonstrate the impact of box size and their degree of filling on thermal performance for the two boxes X and Y (Table 3).
These calculations were performed for two different environmental conditions: Summer with an ambient temperature of Θ A = 30 °C and winter with Θ A = −10 °C. The retention time, defined as the time period during which the storage requirements are fulfilled, was then calculated for the two boxes under the assumption that the box content was thermally preconditioned. For summer calculations it was assumed to have an initial temperature of 2 °C, equal to the lower threshold of cold and cool storage conditions. For winter the corresponding upper thresholds where chosen yielding initial temperatures of 8 °C and 25 °C respectively. The resulting time courses of the drug's temperatures are plotted in Fig. 3, for a (A) summer scenario and (B) winter scenario.
Model calculation of the time course of the inside temperature Θ B of the two virtual boxes X and Y depending on the size and the filling degree (heat capacity) of the boxes. The chosen parameters represent a summer (a) and a winter scenario (b) for drugs that must be stored either in a cool (2 °C ≤ Θ B ≤ 25 °C) or cold environment (2 °C ≤ Θ B ≤ 8 °C). For the summer scenario, the environmental temperature was Θ A = 30 °C and the initial temperature Θ B = 2 °C at t = 0 h. For the winter scenario, the environmental temperature was Θ A = −10 °C, and the initial temperature (t = 0 h), depending on the storage conditions (cool or cold), was Θ B = 25 °C and Θ B = 8 °C, respectively
Retention time calculation and various filling degrees for the two boxes are summarised in Table 5. Values below 12 h were marked in bold to show possible drug storage problems during a working day. These calculations show the high impact of the filling degree on retention time, especially for cold storage conditions (2 °C and 8 °C).
Table 5 Retention time (h) of the inside temperature in the recommended temperature range (cool storage conditions 2 °C – 25 °C and cold storage conditions 2 °C – 8 °C) in the insulated boxes X and Y for thermal summer and winter conditions with an ambient temperature of Θ A = 30 °C and Θ A = 30 °C, respectively
Calculations for boxes X and Y also demonstrate the influence of a box's size. The ratio of surface area (responsible for the heat exchange between box and environment) to volume decreases with box size. For the same degree of filling F, the retention time is greater for the larger box, Y, as compared to the smaller box, X (Table 5).
Due to the fact that Table 5 represents model calculations we added high filling degrees to show the importance of this parameter for the retention time, even if these cannot be reached in reality. This demonstrates that the filling degree F should be as big as possible to maximise the retention time of a transport box.
The proper transport of temperature-sensitive drugs in vehicles is a widespread and notorious problem [18,19,20,21]. For veterinary vehicles a one year field study was published recently by Haberleitner et al. [2] showing that the drugs are frequently exposed to temperatures not compliant with recommended storage conditions. In many vehicles, drugs are stored in boxes without an active cooling or heating supply. Field measurements showed transport temperatures for emergency medical technician (EMS) vehicles far outside of the recommended temperature range, as summarised by Brown et al. [20]. For summer conditions, maximum inner temperatures up to 60 °C were determined, and for wintertime, minimum temperatures significantly below the freezing point [18, 21]. It became evident that either active heating or cooling systems should be used [18, 22] or other efforts made to maintain the recommended temperatures when passive storage systems are used. However, to the knowledge of the authors, no systematic investigations on the suitability or practicability of passive storage boxes for drug transport in vehicles have been published so far.
Storage temperatures outside the recommended range can affect the quality of drugs [6]. Exceedance of the upper limit of the recommended temperature range may cause a loss of drug quality due to degradation processes. The inactivation process is described by the Arrhenius equation, the higher the temperature the higher the degradation of active substances using the mean kinetic temperature [5, 8,9,10]. The mandatory shelf life of a temperature-sensitive drug allows for a degradation of active substances of less than 5% during correct storage [7, 9, 23]. For a typical drug, an increase in the mean kinetic temperature of 5 °C will decrease the drug's shelf life by a factor of two [21]. For the US the storage standards of the US Pharmacopeial Convention are summarised by Brown and Campagna [24].
The lower limit of 2 °C for the storage temperature was selected in this study to avoid freezing the drugs, even though the freezing point is an individual constant of each drug and may differ over a wide range [7]. In particular, injection preparations and vaccines are sensitive to freezing and lose quality and efficacy after thawing. Because "frozen" and "not frozen" distinguish between "(potentially) usable" and "not usable" [7, 12], we selected this limit of 2 °C for pragmatic reasons.
To investigate the practicability of insulated boxes for storing drugs in vehicles, we measured the retention time for winter conditions and summer conditions with an experimental ambient temperature of −10 °C and 30 °C, respectively. The measuring protocol of the World Health Organisation [25] for vaccines suggested the ambient temperatures of −5 °C and 43 °C, respectively. The filling degree of the transport boxes, which describes the heat capacity load, was defined in a similar way as proposed by the German [26] and French [27] standards. The measuring points inside the boxes were selected according to the World Health Organisation [25], and the most adverse measuring point was selected for the calculation [28].
The results obtained from the experiments were compared with the values of the thermal constant γ obtained from the model calculation (Table 4). The high coefficient of determination r 2 showed that the selected exponential model corresponds well with the measured time course data for the inner temperature of the box and demonstrates that the thermal constant of practically any transport box can analogously be determined. Thus, the time course and the retention time of any box can be predicted by its geometry, the thermal properties, and the heat capacity of the box as determined by the drugs and the additional water bottles.
Model calculations were performed for two boxes that differ significantly in size and volume to show the influence of these parameters. The time course of the inner temperature of the two boxes differs only slightly (by the constant filling degree of the volume) because the dominant factor is the entire heat capacity of a box. By adding 1000 mL water bottles, the filling degree F can be increased up to 90%, and the retention time can be increased by a factor of approximately 45 and 80 for box X and box Y, respectively. This shows the dominant influence of the filling degree on the retention time [25,26,27]. Insulated transport boxes cannot be considered appropriate to store drugs over a period of more than 12 h (Table 5). This means that after a "working day", the content of the transport boxes must be stored under appropriate conditions (e.g., transfer into a refrigerator) and thermally preconditioned according to the recommended temperatures before it is transferred back into the box for the next tour. A Haberleitner, G Schauberger, J Horak and I Schmerold [2] were able to show that a proper management of the storage of drugs by using solely styrofoam transport boxes can guarantee the requirements to the thermal storage conditions.
The ambient temperature of the transport boxes is the inside temperature of the vehicles being itself predominantly subjected to the outdoor temperature and the radiation balance if no additional heating and cooling (air-conditioning) is in operation. As long as the incoming solar radiation and the outgoing long-wave radiation is nearly balanced, the indoor temperature of the vehicles is close to the ambient temperature (e.g., during an overcast sky, under a car port, or inside a garage).
If the radiation balance is dominated by the incoming solar radiation, the temperature inside the car stabilises within a range between 20 and 35 K above the outside temperature [29,30,31,32,33,34,35,36]. Marty et al. [37] found a temperature difference even close to 60 K. As a rough estimation, they assessed the inside temperature due to solar radiation could reach 30 °C for winter time, 60 °C for spring and autumn and up to 90 °C during summer time. The harsh environment inside parked vehicles can cause heat stroke as a life-threatening syndrome observed in human and animals [38, 39] which is documented in the US, by 37 lethal heat stokes by children per year (1998–2015) [33, 40].
Grundstein et al. [31] developed simple models to calculate the inside steady state temperature of the cabin as a function of the outside air temperature, the irradiance of the solar radiation, and the cloud cover. The dynamic behaviour of the temperature increase after parking a vehicle in the sun was also measured by several authors [29, 30, 32,33,34, 36]. A dynamic model for the cabin temperature, which is driven by outside air temperature, solar radiation and wind velocity was presented recently [41]. In most cases, a value close to the maximum temperature is achieved 20 min after stopping the ventilation. The most effective measure to reduce the inside temperature is to increase the ventilation by partly opened windows [32, 34,35,36]. The protection of the windows by a cover to reduce the incoming solar radiation was investigated by Jascha and Keck [36] (paper fabrics and tin foil) and Devonshire and Sayer [42] using infrared reflecting foils. They found a reduction up to 11 K [36] and a higher score of thermal comfort [42], compared to the unprotected windows. Veterinary vehicles should, therefore, always be parked during daytime (solar radiation) in the shade. The direct exposure of the transport box to solar radiation through car windows should be strictly avoided during summer conditions. The additional heat flow rate could reach up to 1000 W/m2. This additional heat load would reduce the retention time of a transport box drastically. Therefore the box should be stored also inside the cabin in the shade.
The use of passive boxes for drug transportation should only be a temporary substitute for air conditioning of the cabin to avoid heating as well as freezing. Estimation and knowledge of the thermal properties of the boxes used is a crucial factor for appropriate drug transport in vehicles and an important contribution to quality assurance in veterinary practice.
Observing the physical rules governing the thermal performance of passive boxes, the following principles for drug transportation in vehicles can be recommended:
Before transfer into boxes, the drugs should always be thermally preconditioned (i) at the upper limit of the recommended temperature range for ambient temperatures below the recommended temperature ("winter") or (ii) at the lower limit for ambient conditions above the recommended temperature ("summer"). The most critical storage condition is the range between 2 °C and 8 °C.
Increase the filling degree of the box as much as possible by thermally preconditioned water bottles or re-usable thermal packs to increase the heat capacity of the transport box. Do not deep-freeze the bottles or packs below 0 °C to avoid drug freezing due to contact. This recommendation applies also to active devices (e.g., refrigerators).
Open the lid of the boxes only to uncase needed drugs. Avoid air exchange especially due to wind effects and due to the thermal rise of the warmer air from inside the boxes during cold outside conditions.
The bigger the box and the higher the filling degree, the longer the retention time of the transport box.
Wherever possible, place the drug box at a cool site. The vehicle should always be parked in the shade or, if possible, inside a well ventilated car port. The transport box should not be exposed to direct solar radiation.
The monitoring of the inside temperature of the transport boxes is recommended using remote temperature sensors and/or temperature data loggers to avoid and/or detect violation of recommended storage temperatures.
WHO. Appendix 9, model guidance for the storage and transport of time- and temperature–sensitive pharmaceutical products. In: Forty-fifth report of the WHO expert committee on specifi cations for pharmaceutical preparations. Geneva: World Health Organization; 2011.
Haberleitner A, Schauberger G, Horak J, Schmerold I. Thermal drug storage conditions in veterinary vehicles - a one-year field study in Austria. Wien Tierärztl Monatsschr. 2014;101(5–6):110–9.
Mejia J: Vehicular Based Drug Box Temperature Control Study. Norfolk, VA: Old Dominion University, Unpublished master's thesis. Old Dominion University, Norfolk, VA. Retrieved from http://www.digital.lib.odu.edu; 2006.
Grant TA, Carroll RG, Church WH, Henry A, Prasad NH, Abdel-Rahman AA, et al. Environmental temperature variations cause degradations in epinephrine concentration and biological activity. Am J Emerg Med. 1994;12(3):319–22.
De Winter S, Vanbrabant P, Vi NTT, Deng X, Spriet I, Van Schepdael A, et al. Impact of temperature exposure on stability of drugs in a real-world out-of-hospital setting. Ann Emerg Med. 2013;62(4):380–7. e381
Gammon DL, Su S, Huckfeldt R, Jordan J, Patterson R, Finley PJ, et al. Alteration in prehospital drug concentration after thermal exposure. Am J Emerg Med. 2008;26(5):566–73.
Küpper TEAH, Schraut B, Rieke B, Hemmerling AV, Schöffl V, Steffgen J. Drugs and drug administration in extreme environments. J Travel Med. 2006;13(1):35–47.
Haynes JD. Worldwide virtual temperatures for product stability testing. J Pharm Sci. 1971;60(6):927–9.
ICH Q1A(R2). Stability testing of new drug substances and products Q1A(R2) step 4. In: Geneva: international conference on harmonisation of Technical requirements for registration of pharmaceuticals for human use; 2003.
Nakamura T, Yamaji T, Takayama K. Effects of packaging and heat transfer kinetics on drug-product stability during storage under uncontrolled temperature conditions. J Pharm Sci. 2013;102(5):1495–503.
Summerhays GES. Monitoring of temperature in cars with regard to the pharmaceutical precautions of medicine storage. Equine Vet Educ. 2000;12(6):307–11.
Taylor J. Recommendations on the control and monitoring of storage and transportation temperatures of medicinal products. Pharm J. 2001;267(7158):128–31.
Rudland SV, Jacobs AG. Visiting bags: a labile thermal environment. Br Med J. 1994;308(6934):954–6.
Ondrak J, Jones M, Fajt V. Temperatures of storage areas in large animal veterinary practice vehicles in the summer and comparison with drug manufacturers' storage recommendations. BMC Vet Res. 2015;11(1):248.
Fricke J, Heinemann U, Ebert HP. Vacuum insulation panels-from research to market. Vacuum. 2008;82(7):680–90.
Gröber H, Erk S, Grigull U. Fundamentals of heat transfer. Berlin, Heidelberg: Springer-Verlag; 1961.
ASHRAE. Heat transfer (chapter 4). In: ASHRAE handbook—fundamentals volume chapter 4 SI edn. Atlanta, USA: American Society of Heating, Refrigerating and Air-Conditioning Engineers Inc.; 2009.
Brown LH, Bailey LC, Medwick T, Okeke CC, Krumperman K, Tran CD. Medication storage temperatures on U.S. ambulances: a prospective multicenter observational study. Pharm Forum. 2003;29(2):540–4.
Brown LH, Wojcik SM, Bailey LC, Tran CD. Can stock rotation effectively mitigate EMS medication exposure to excessive heat and cold? Am J Emerg Med. 2006;24(1):14–8.
Brown LH, Krumperman K, Fullagar CJ. Out-of-hospital medication storage temperatures: a review of the literature and directions for the future. Prehosp Emerg Care. 2004;8(2):200–6.
Allegra JR, Brennan J, Lanier V, Lavery R, MacKenzie B. Storage temperatures of out-of-hospital medications. Acad Emerg Med. 1999;6(11):1098–103.
Helm M, Castner T, Lampl L. Environmental temperature stress on drugs in prehospital emergency medical service. Acta Anaesthesiol Scand. 2003;47(4):425–9.
ICH Q1E. Evaluation for stability data Q1E. In: Geneva international conference on harmonisation of Technical requirements for registration of pharmaceuticals for human use; 2003.
Brown LH, Campagna JD. Medication storage in the EMS environment: understanding the science and meeting the standards. Emerg Med Serv. 2005;34(3):71. 73–77, 90
World Health Organisation. Guidelines on the international packaging and shipping of vaccines in., vol. WHO / IVB / 05.23. Geneva: WHO; 2005.
DIN 55545–1. Packagings - Packagings with insulating properties - part 1: initial evaluation testing. Berlin: Beuth Verlag; 2006.
AFNOR NF S99-700. Emballages isothermes et emballages réfrigérants pour produits de santé. Méthode de qualification des performances thermiques. Paris: Association Française de Normalisation; 2007.
WHO. WHO good distribution practices for pharmaceutical products. WHO Technical report series, no. 957, 2010, annex 5. Geneva: WHO; 2010.
Gibbs LI, Lawrence DW, Kohn MA. Heat exposure in an enclosed automobile. J Louisiana State Med Soc. 1995;147(12):545–6.
Gregory NG, Constantine E. Hyperthermia in dogs left in cars. Vet Rec. 1996;139(14):349–50.
Grundstein A, Meentemeyer V, Dowd J. Maximum vehicle cabin temperatures under different meteorological conditions. Int J Biometeorol. 2009;53(3):255–61.
King K, Negus K, Vance JC. Heat stress in motor vehicles: a problem in infancy. Pediatrics. 1981;68(4):579–82.
McLaren C, Null J, Quinn J. Heat stress from enclosed vehicles: moderate ambient temperatures cause significant temperature rise in enclosed vehicles. Pediatrics. 2005;116(1):e109–12.
Roberts KB, Roberts EC. The automobile and heat stress. Pediatrics. 1976;58(1):101–4.
Surpure JS. Heat-related illness and the automobile. Ann Emerg Med. 1982;11(5):263–5.
Jascha I, Keck G. Klima im Personenkraftwagen-ein Beitrag zum Tierschutz. Wien Tierärztl Monatschr. 1984;71(8/9):227–37.
Marty W, Sigrist T, Wyler D. Temperature variations in automobiles in various weather conditions: an experimental contribution to the determination of time of death. Am J Forensic Med Pathol. 2001;22(3):215–9.
Grundstein AJ, Duzinski SV, Dolinak D, Null J, Iyer SS. Evaluating infant core temperature response in a hot car using a heat balance model. Forensic Sci Med Pathol. 2015;11(1):13–9.
Grundstein A, Duzinski S, Null J, Impact of dangerous microclimate conditions within an enclosed vehicle on pediatric thermoregulation. Theor Appl Climatology. 2014. doi:10.1007/s00704-015-1636-2.
Hyperthermia Deaths of Children in Vehicles. [www.ggweather.com/heat], retrieved: May 19, 2017.
Horak J, Schmerold I, Wimmer K, Schauberger G. Cabin air temperature of parked vehicles in summer conditions: life-threatening environment for children and pets calculated by a dynamic model. Theor Appl Climatology. 2016. doi:10.1007/s00704-016-1861-3.
Devonshire JM, Sayer JR. Radiant heat and thermal comfort in vehicles. Hum Factors. 2005;47(4):827–39.
We want to thank Prof. Ivo Schmerold for providing the funding. We are grateful to Kurt Wimmer, who supported the measurements.
This study was supported by funds of the Austrian Federal Ministry of Health, Family and Youth (BMGFJ-70420/0303-I/A/15/2007) and the Austrian Federal Chamber of Veterinarians.
The dataset supporting the conclusions of this article will be available in the Zenodo repository of the CERN (DOI: 10.5281/zenodo.570449).
JH acquired, analysed, and interpreted the data, drafted and revised the manuscript. AH collated and analysed the data and revised the manuscript. GS conceived the design of the study, drafted and revised the article. All authors read and approved the final manuscript.
JH is a PhD student at the Institute of Atmospheric and Cryospheric Sciences, University of Innsbruck. AH is a PhD student at the Institute of Pharmacology and Toxicology, Department for Biomedical Sciences, University of Veterinary Medicine Vienna. She is working at the veterinary services of the Federal Province of Lower Austria. GS is professor and head of the WG Environmental Health at Division for Physiology and Biophysics, Department for Biomedical Sciences at the University of Veterinary Medicine Vienna.
Neither patients nor animal data were included in the study.
WG Environmental Health, Division for Physiology and Biophysics, Department for Biomedical Sciences, University of Veterinary Medicine Vienna, Veterinärplatz 1, A 1210, Vienna, Austria
Johannes Horak
& Günther Schauberger
Institute of Atmospheric and Cryospheric Sciences, University of Innsbruck, Innrain 52f, A-6020, Innsbruck, Austria
Institute of Pharmacology and Toxicology, Department for Biomedical Sciences, University of Veterinary Medicine Vienna, Veterinärplatz 1, A 1210, Vienna, Austria
Astrid Haberleitner
Search for Johannes Horak in:
Search for Astrid Haberleitner in:
Search for Günther Schauberger in:
Correspondence to Günther Schauberger.
Recipe to measure thermal properties of a box and to calculate the retention time
For practical use, a recipe is described to calculate the retention time, for a transport box to avoid an exceedance of a temperature threshold. The recipe includes two steps: (1) measurement of the thermal properties of the box, and (2) calculation of the retention time.
In a first step the thermal constant γ of a selected transport box is measured. The box is filled in the same way as it is proposed for practical use. For the application of the transport box the retention time is calculated for a certain ambient temperature. This retention time gives the reliability that the storage temperature of the transported drugs will not exceed the recommended temperature range.
Step 1: Measurement of the thermal properties of a box
For the measurements and the calculation the following equipment is required (1) the transport box, (2) water bottles, to simulate the expected filling degree of the box, (3) thermometer (unit: degree Celsius), and (4) calculator. Bottles of arbitrary size are to be filled with water. The mass of water should equal the mass of the pharmaceuticals that are usually transported inside the box at hand. The filled bottles should be cooled in a refrigerator to about 4 °C. It is crucial that none of the water is frozen.
After the cooldown to the preconditioned temperature, the initial box (water) temperature Θ B,i of at least one of the bottles is measured. Afterwards the box that is to be tested is loaded with the cooled water bottles.
The box should now be placed in a room that fulfils the following requirements (1) low temperature fluctuation, (2) no direct solar radiation, and (3) no draught. The box should stay about 1 m away from corners or walls and the ambient room temperature Θ A has to be measured.
After about 8 h (depending on the filling degree (Table 5)), the final box (water) temperature Θ B,f has to be measured. If the difference between Θ B,f and Θ B,i is at least 5 °C the experiment is finished, otherwise the duration of 8 h has to be expanded and a new value of the final box temperature Θ B,f has to be measured. The time t is the duration that the box was left in the room in hours. All four parameters Θ B,i , Θ B,f , Θ A , and t, which have to be measured, are listed in the highlighted Box of Step 1. The measurement procedure in short: (1) Measure the initial temperature inside the box Θ B,i , (2) leave the box for several hours in a room with constant ambient temperature Θ A , (3) measure the final temperature inside the box Θ B,f , and (4) determine the duration for the measurements inside the room t.
These temperatures and the duration of the experiment are inserted into the following equation:
$$ {\gamma}_C=\left| \ln \left(\frac{\Theta_{B, f}-{\Theta}_A}{\Theta_{B, i}-{\Theta}_A}\right)\right|\cdot \frac{1}{t} $$
This yields the constant γ C which characterises the thermal properties of the investigated box. To be on the safe side, a correction factor κ = 1.45 has to be applied which yields
$$ \gamma =\kappa\;{\gamma}_C $$
Measurement of the four parameters
Initial temperature Θ B,i = 5°C
Final temperature Θ B,f = 10°C
Mean ambient temperature Θ A = 25°C
Experiment duration t = 9 h
Measurement of the thermal constant γ
\( {\gamma}_C=\left| \ln \left(\frac{10\hbox{-} 25}{5\hbox{-} 25}\right)\right|\cdot \frac{1}{9}=0.0320\kern0.5em {\mathrm{h}}^{\hbox{-} 1} \)
Finally the safety factor κ = 1.45 is applied and yields γ = 0.0320 1.45 = 0.0463 h‐1
Step 2: Calculation of the retention time
In step 2, the retention time t th (in hours) is calculated. Based on the drugs which are transported in the box, the threshold temperature Θ th has to be selected. The initial temperature is the temperature inside the transport box, before the box is moved to the vehicle Θ B,i .The assumed cabin temperature during the day is used as ambient temperature Θ A . The thermal constant γ of the box was calculated in step 1. These values are to be inserted into the following equation
$$ {t}_{t h}=\left| \ln \left(\frac{\Theta_{t h}-{\Theta}_A}{\Theta_{B, i}-{\Theta}_A}\right)\right|\cdot \frac{1}{\gamma} $$
The retention time t th can be used as estimation if the box can be used to transport the drugs inside the vehicle without risk, that the threshold temperature Θ th will be exceeded.
Calculation of the retention time t th
Threshold temperature (which should not exceeded) Θ th = 20 °C
Assumed ambient temperature inside the vehicles Θ A = 32 °C
Thermal constant γ (calculated in Step1) γ = 0.0463 h-1
\( {t}_{t h}=\left| \ln \left(\frac{20\kern0.5em \hbox{-} \kern0.5em 32}{8\kern0.5em \hbox{-} \kern0.5em 32}\right)\right|\cdot \frac{1}{0.0463}=15\kern0.5em \mathrm{h} \)
This means that in a vehicle compartment with an ambient temperature of Θ A = 32°C the temperature of the drugs inside the box would rise above the threshold temperature Θ th = 20°C after approximately 15 hours.
Attention: the filling degree of the transport box must be similar to the situation during the measurement of the thermal constant γ (step 1)
Horak, J., Haberleitner, A. & Schauberger, G. How to transport veterinary drugs in insulated boxes to avoid thermal damage by heating or freezing. BMC Vet Res 13, 140 (2017) doi:10.1186/s12917-017-1058-8
Thermal conditions
Guidelines, policy and education | CommonCrawl |
SN Computer Science
January 2020 , 1:57 | Cite as
Vision Tracking: A Survey of the State-of-the-Art
Anjan Dutta
Atreyee Mondal
Nilanjan Dey
Soumya Sen
Luminiţa Moraru
Aboul Ella Hassanien
Survey Article
Vision tracking is a well-studied framework in vision computing. Developing a robust visual tracking system is challenging because of the sudden change in object motion, cluttered background, partial occlusion and camera motion. In this study, the state-of-the art visual tracking methods are reviewed and different categories are discussed. The overall visual tracking process is divided into four stages—object initialization, appearance modeling, motion estimation, and object localization. Each of these stages is briefly elaborated and related researches are discussed. A rapid growth of visual tracking algorithms is observed in last few decades. A comprehensive review is reported on different performance metrics to evaluate the efficiency of visual tracking algorithms which might help researchers to identify new avenues in this area. Various application areas of the visual tracking are also discussed at the end of the study.
Visual tracking Visual computing Motion estimation Object motion Object localization
Visual tracking is one of the significant problems in computer vision having wide range of application domains. A remarkable advancement of the visual tracking algorithm is observed because of the rapid increase in processing power and availability of high resolution cameras over the last few decades in the field of automated surveillance [1], motion-based recognition [2], video indexing [3], vehicle navigation [4], and human–computer interaction [5, 6]. Visual tracking can be defined as, estimating the trajectory of the moving object around a scene in the image plane [7].
Various computer vision tasks to detect, track and classify the target from image sequences are grouped in visual surveillance to analyze the object behavior [7]. A better surveillance system is developed by integrating the motion detection and visual tracking system in [8]. A content-based video indexing technique is evolved from object motion in [9]. The proposed indexing method is applied to analyze the video surveillance data. Visual tracking is effectively applied in vehicle navigation. A method for object tracking and detection is developed in [10] for maritime surface vehicle navigation using stereo vision system to locate objects as well as calculating the distance from the target object in the harsher maritime environment. A methodology of human computer interaction to compute eye movement by detecting the eye corner and the pupil center using visual digital signal processor camera is invented in [11]. The mentioned novel approach helps the users to move their head freely without wiring any external gadgets.
In visual tracking system, the 3D world is projected on a 2D image that results in loss of information [12]. The problem becomes more challenging due to the presence of noise in images, unorganized background, random complex target motion, object occlusions, non-rigid object, variation in the number of objects, change in illumination, etc. [13]. These issues need to be handled effectively to prevent the degradation of tracking performance and even failure. Different visual representations and statistical models are used in literature to deal with these challenges. These models use state-of-the-art algorithms and different methodologies for visual tracking. Different metrics are used to effectively measure the performance of the tracker. Motivated by this, different state-of-the-art visual tracking models widely used in literature are discussed in this paper. In each and every year, a substantial number of algorithms for visual tracking are proposed in literature. To efficiently evaluate their performance, different performance metrics for robust evaluation of trackers are elaborated here after vividly describing the tracking models. Several popular application domains of visual tracking are identified and briefly described here. One can have overall overview of visual tracking methods and best practices as well as a vivid idea about the different application domains related to visual tracking from this study.
A visual tracking system consists of four modules, i.e., object initialization, appearance modeling, motion estimation and object localization. Each of these components and associated tracking methods are briefly described in Sect. 2. Some popular performance measures for visual tracking, for, e.g., center location error, bounding box overlap, tracking length, failure rate, area under the lost-track-ratio curve, etc. are discussed in Sect. 3. Progress in visual tracking methodologies introduced a revolution in health care, space science, education, robotics, sports, marketing, etc. Section 4 highlights some pioneering works related to different application domains of visual tracking. Conclusion section is presented in Sect. 5.
Visual Tracking Methods
In visual tracking system, a trajectory of the target over the time is generated [14] based on the location of the target, positioned in consecutive video frames. The detected objects from the consecutive frames maintained a correspondence [15] using visual tracking mechanism.
The fundamental components of a visual tracking system are object initialization, appearance modeling, motion estimation and object localization [16]. Figure 1 reports the detailed taxonomy of vision tracking.
Visual tracking taxonomy
Object Initialization
Manual or automatic object initialization is the initial step of visual tracking methods. Manual annotation using bounding boxes or ellipses is used to locate the object [17]. Manual annotation is a time-consuming human-biased process, which claims an automated system for easily, efficiently and accurately locating and initializing the target object. In recent decades, automated initialization has wide domains for real-time problem solving (for, e.g, face detection [18], human tracking, robotics, etc. [19, 20, 21, 22].). A dynamic framework for automated initialization and updating the face feature tracking process is proposed in [23]. Moreover, a new method to handle self-occlusion is presented in this study. This approach matched each candidate with a set of predefined standard eye templates, by locating the eyes of the candidates. Once the subject's eyes are located accurately, lip control points are located using the standard templates. An automated, integrated model comprising of robust face and hand detection for initializing a 3D body tracker to recover from failure is proposed in [24]. Useful data for initialization and validation are provided to the intended tracker by this system.
Object initialization is the prerequisite for the appearance modeling. A detailed description of appearance modeling is reported in the following section.
Appearance Modeling
The majority of the object properties (appearance, velocity, location, etc.) are described by the appearance or observation model [25]. Various special features are used to differentiate the target and background or different objects in a tracking system [26]. Features like color, gradient, texture, shape, super-pixel, depth, motion, optic flow, etc. or fused features are most commonly used for robust tracking to describe the object appearance model.
Appearance modeling is done by visual representation and statistical modeling. In visual representation, different variants of visual features are used to develop effective object descriptors [27]. Whereas, statistical learning techniques are used in statistical modeling to develop mathematical models that are efficient for object identification [28]. A vivid description of these two techniques is given in the below section.
Global Visual Representation
Global visual representation represents the global statistical properties of object appearance. The same can also be represented by various other representation techniques, namely—(a) raw pixel values (b) optical flow method (c) histogram-based representation (d) covariance-based representation (e) wavelet filtering-based representation and (f) active contour representation.
(a) Raw pixel values
Values based on raw pixels are the most frequently used features in vision computing [29, 30] for the algorithmic simplicity and efficiency [31]. Raw color or intensity information of the raw pixels is utilized to epitomize the object region [32]. Two basic categories for raw pixel representation are—vector based [33, 34] and matrix based [35, 36, 37].
In vector-based representation, an image region is transformed into a higher-dimensional vector. Vector-based representation performed well in color feature-based visual tracking. Color features are robust to object deformation, insensitive to shape variation [38], but suffer from small sample size problem and uneven illumination changes [39].
To overcome the above-mentioned limitations of vector-based representation, matrix-based representation is proposed in [40, 41]. In matrix-based representation, the fundamental data units for object representation are built using the 2D matrices or higher-order tensors because of their low-dimensional property.
Various other visual features (e.g., shape, texture, etc.) are embedded in the raw pixel information for robust and improved visual object tracking. A color histogram-based similarity metric is proposed in [42], where the region color and the special layout (edge of the colors) are fused. A fused texture-based technique is proposed to enrich the color features in [43].
(b) Optical flow representation
The relative motion of the environment with respect to an observer is known as optical flow [44]. The environment is continuously viewed to find the relative movement of the visual features, e.g., points, objects, shapes, etc. Inside an image region, optical flow is represented by dense field displacement vectors of each pixel. The data related to the spatial–temporal motion of an object are captured using the optical flow. From the differential point of view, optical flow could be represented as the change of image pixels with respect to the time and is expressed by the following equation [45].
$$ I_{i} \left( {x_{i} + \Delta x,y_{i} + \Delta y,t + \Delta t} \right) = I_{i} \left( {x_{i} ,y_{i} ,t} \right), $$
where \( I_{i} \left( {x_{i} ,y_{i} ,t} \right) \) is the intensity of the pixel at a point \( \left( {x_{i} ,y_{i} } \right) \) at a given time \( t \). The same is moved by \( \Delta x,\Delta y,\Delta t \) in the subsequent image frame.
The Eq. 1 is further expanded by applying the Taylor Series Expansion [23] and the following equation is obtained.
$$ I_{i} \left( {x_{i} + \Delta x,y_{i} + \Delta y,t + \Delta t} \right) = I_{i} \left( {x_{i} ,y_{i} ,t} \right) + \frac{{\partial I_{i} }}{\partial x}\Delta x + \frac{{\partial I_{i} }}{\partial y} \Delta y + \frac{{\partial I_{i} }}{\partial t} \Delta t. $$
From these Eqs. (1 and 2), Eq. 3 is obtained as follows:
$$ \frac{{\partial I_{i} }}{\partial x}\Delta x + \frac{{\partial I_{i} }}{\partial y} \Delta y + \frac{{\partial I_{i} }}{\partial t} \Delta t = 0. $$
Dividing both RHS and LHS by \( \Delta t \), the following equation is obtained
$$ \frac{{\partial I_{i} }}{\partial x}\left( {\frac{\Delta x}{\Delta t}} \right) + \frac{{\partial I_{i} }}{\partial y}\left( {\frac{\Delta y}{\Delta t}} \right) + \frac{{\partial I_{i} }}{\partial t}\left( {\frac{\Delta t}{\Delta t}} \right) = 0. $$
A differential point of view is used here to establish the estimation of the optical flow. The variations of the pixels with respect to time is the basis of the explanation. The solution of the problem can be reduced to the following equation:
$$ \frac{{\partial I_{i} }}{\partial x}v_{x} + \frac{{\partial I_{i} }}{\partial y}v_{y} + \frac{{\partial I_{i} }}{\partial t} = 0, $$
$$ i_{x} v_{x} + i_{y} v_{y} + i_{t} = 0, $$
where \( v_{x} \) and \( v_{y} \) are the x and y components of the velocity or optical flow of
$$ I_{i} \left( {x_{i} ,y_{i} ,t} \right) \quad {\text{and}} \quad i_{x} = \frac{{\partial I_{i} }}{\partial x} ,i_{y} = \frac{{\partial I_{i} }}{\partial y} , i_{t} = \frac{{\partial I_{i} }}{\partial t} . $$
Equation 7 is derived from Eq. 6 as follows:
$$ i_{x} v_{x} + i_{y} v_{y} = - i_{t} , $$
$$ \Delta i \cdot \vec{v} = - i_{t} . $$
This problem is converged into finding the solution of \( \vec{v} \). Optical flow cannot be directly estimated since there are two unknowns in the equation. This problem is known as the aperture problem. Several algorithms for estimating the optical flow have been proposed in literature. In [46], the authors reported four categories of optical flow estimation techniques, namely—differential method, region-based matching, energy-based matching and phase-based techniques.
As mentioned in the previous section, the derivatives of image intensity with respect to both space and time are used in different method. In [45], a method has proposed using the global smoothness concept of discovering the optical flow pattern which results in the Eq. (8). In [33], an image registration technique is proposed, where a good match is found using the spatial intensity gradient of the images. This is an iterative approach to find the optimum disparity vector which is a measure for finding the difference between pixel values in a particular location in two images. In [47], an algorithm is presented to compute the optical flow which avoids the aperture problem. Here, second-order derivatives of the brightness of images are computed to generate the equations for representing optical flow.
A global method of computing the optical flow was proposed in [45, 48]. Here, an additional constraint, i.e., the smoothness of the flow is introduced as a second constraint to the basic equation (Eq. 8) for calculating the optical flow. Thereafter, the resulting equation was solved using an iterative differential approach. In [49]. an integrated classical differential approach is proposed with correlation-based motion detectors. A novel method of computing optical flow using a coupled set of nonlinear diffusion equations is presented here.
In region-based matching, an affinity measure based on region features is used and applied to region tokens [50]. Thereafter, the spatial displacements among centroids of the corresponding regions are used to identify the optical flow. Region-based methods act as an alternative to the differential techniques in those fields where due to the presence of a few number of frames and background noise, differential or numerical methods are not effective [51]. This method reports the velocity, similarity, etc. between the image regions. In [52], Laplacian pyramid is used for region matching; whereas in [53], a sum of squared distance is computed for the same.
In energy-based matching methods, minimization of a global energy function is performed to determine optical flow [54]. The main component of the energy function is a data term which encourages an agreement between a spatial term and frames to enforce the consistency of the flow field. The output energy of the velocity tuned filters is the basis of the methods based on energy [55].
In phase-based techniques, the optical flow is calculated in the frequency domain by applying the local phase correlation to the frames [56]. Unlike energy-based methods, velocity is represented by the outputs of the filter exhibiting the phase behavior. In [57, 58], spatio-temporal filters are used in phase-based techniques.
(c) Histogram representation
In histogram representation, the distribution characteristics of the embedded visual features of object regions are efficiently captured. Intensity histograms are frequently used to represent target objects for visual tracking and object recognition. Mean-shift is a histogram-based methodology for visual tracking which is widely used because it is simple, fast and exhibits superior performance in real time [59]. It adopts a weighted kernel-based color histogram to compute the features of object template and regions [60]. A target candidate is iteratively moved to locate the target object from the present location \( p_{\text{old}}^{ \wedge } \) to the new position \( p_{\text{new}}^{ \wedge } \) based on the following relation:
$$ p_{\text{new}}^{ \wedge } = \frac{{\mathop \sum \nolimits_{i} k(p_{i} - p_{\text{old}}^{ \wedge } )w_{\text{s}} \left( {p_{i} } \right)p_{i} }}{{\mathop \sum \nolimits_{i} k(p_{i} - p_{\text{old}}^{ \wedge } )w_{\text{s}} \left( {p_{i} } \right)}}, $$
where the influence zone is defined by a radically symmetric kernel \( k\left( . \right) \) and the sample weight is represented by \( w_{\text{s}} \left( p \right) \). Usually, histogram back projection is used to determine \( w_{\text{s}} \left( p \right) \).
$$ w_{\text{s}} \left( p \right) = \sqrt {\frac{{d_{\text{m}} \left( {I_{\text{c}} \left( p \right)} \right)}}{{d_{\text{c}} \left( {I_{\text{c}} \left( p \right)} \right)}}} , $$
where \( I_{\text{c}} \left( p \right) \) represents the pixel color and the density estimates of the pixel colors of the target model and target candidate histograms are denoted by \( d_{\text{m }} \;{\text{and}}\; d_{\text{c}} . \)
Intensity histograms are widely used in the tracking algorithms [61, 62]. In object detection and tracking, efficient algorithms like integral image [63] and integral histogram [64] are effectively applied for rectangular shapes. Intensity histograms are failed to compute efficiently from region bounded by uneven shape [65]. The problem due to shape variation with respect to histogram-based tracking method is minimized using a circular or elliptical kernel [66]. The kernel is used to define a target region and a weighted histogram is computed from this. In other words, kernel brings simplicity in tracking the irregular object by enforcing a regularity constraint on it. In the above-mentioned approaches, the spatial information in histograms is not considered; however, spatial data are highly important to track a target object where significant shape variation is observed [67]. The above-mentioned issue is addressed in [68] by introducing the concept of spatiogram or spatial histogram. A spatiogram is the generalized form of a histogram where spatial means and covariance of the histogram bins are defined. Robustness in visual tracking is increased since spatial information assists in capturing richer description about the target.
In histogram models, the selected target histogram at the starting frame is compared with the candidate histograms in the subsequent frames [69] for finding the closest similar pair. The similarity among the histograms is measured by applying the Bhattacharyya Coefficient [70, 71, 72]. The similarity is represented by the following formula
$$ \varphi_{\text{b}} \left( {S_{\text{t}} } \right) = \mathop \sum \limits_{x,y = 1}^{n} \sqrt {\frac{{H_{x} }}{{\mathop \sum \nolimits_{x = 1}^{n} H_{x} }} \times \frac{{H_{y} }}{{\mathop \sum \nolimits_{y = 1}^{n} H_{y} }}} , $$
where the target selected in the initial frame is represented by a bin \( H_{x} \) from the histogram, whereas \( H_{y} \) represents the bin corresponding to the candidate histogram. The target histogram bin index is given by \( x \) and the candidate model histogram bin index is given by \( y \). \( S_{\text{t}} \) represents the generated target state and the Bhattacharyya coefficient is represented by \( \varphi_{\text{b}} \).
(d) Covariance representation
Visual tracking is challenging because there might be a change in appearance of the target due to the illumination changes and variations in view and pose. The above-mentioned appearance models are affected by these variations [73]. Moreover, in histogram approach, there is an exponential growth of the joint representation of various features as the number of features increases [74]. Covariance matrix representation is developed in [74] to record the correlation information of the target appearance. Covariance is used here as a Region Descriptor using the following formula:
$$ I_{\text{f}} \left( {p,q} \right) = \varphi \left( {I,p,q} \right), $$
where a three-dimensional color image or a one-dimensional intensity is represented by \( I \). \( I_{f} \) is the extracted feature image from \( I \). The gradients, color, intensity, etc. mappings are represented by \( \varphi \).
A \( m \times m \) covariance matrix is built from the feature points which denotes the predefined rectangular region R(\( R \subseteq I_{f} \)) by the following equation:
$$ c_{R} = \frac{1}{n - 1}\mathop \sum \limits_{x = 1}^{n} (g_{x} - \mu )\left( {g_{x} - \mu } \right)^{T} , $$
where \( \left\{ {g_{x} } \right\}_{x} = 1 \ldots n \) are the m-dimensional feature points inside the region R and \( \mu \) is the mean of the points.
Using covariance matrices as a region descriptor has several advantages [75]. Multiple features are combined naturally using covariance matrices without normalizing the features. The information inherent within the histogram and the information obtained from the appearance model are both represented by it. The region could be effectively matched with respect to different views and poses by extracting a single covariance matrix from it.
(e) Wavelet filtering-based representation
In wavelet transform, the features can be simultaneously located in both time and the frequency domains. The object regions are filtered out in various directions by this feature [76]. Using Gabor wavelet networks (GWN) [77, 78], a new method is proposed for visual face tracking in [79]. A wavelet representation is formed initially from the face template spanning through a low-dimensional subspace in the image space. Thereafter, the orthogonal projection of the video sequence frames corresponding to the tracked space is done into the image subspace. Thus, a subspace corresponding to the image space is efficiently defined by selectively choosing the Gabor wavelets. 2D Gabor wavelet transform is used in [80] to track an object in a video sequence. The predetermined globally placed selected feature points are used to model the target object by local features. The energy obtained from GWT coefficients of the feature points is considered for stochastically selecting the feature points. The higher the energy values of the points, the higher is the probability of being selected. Local features are defined by the amplitude of the GWT coefficients of the selected feature point.
(f) Active contour representation
Active contour representation has been widely used in literature for tracking non-rigid objects [81, 82, 83, 84, 85]. The object boundary is identified by forming the object contour from a 2D image, having a probability of noisy background [86]. In [87], a signed distance map \( \varphi \), which is also known as level set representation, is represented as follows:
$$ \varphi \left( {x_{i} ,x_{j} } \right) = \left\{ {\begin{array}{*{20}l} {0\left( {x_{i} ,x_{j} } \right)} \hfill & { \in C} \hfill \\ {d\left( {x_{i} ,x_{j} ,C} \right) } \hfill & { \in Z_{\text{o}} } \hfill \\ { - d\left( {x_{i} ,x_{j} ,C} \right)} \hfill & { \in Z_{\text{in}} } \hfill \\ \end{array} } \right. , $$
where the inner and outer regions of the contour are represented by \( Z_{\text{in}} \) and \( Z_{\text{o}} \), respectively. The shortest Euclidian distance from the contour and the point \( \left( {x_{i} ,x_{j} } \right) \) is calculated by the function \( d\left( {x_{i} ,x_{j} ,C} \right) \).
The level set representation is widely used to form a stable numerical solution and its capability to handle the topological changes. In the same study, the evaluation of active contour methods is classified into two categories—edge based and region based. Each of these methods is briefly described in the following section.
In edge-based methods, local information about the contours (e.g., gray-level gradient) is mainly considered. In [88], a snake-based model is proposed which is one of the most widely used edge-based models. Snake model is very effective for a number of visual tracking problems for edge and line detection, subjective contour, motion tracking, stereo matching, etc. A geodesic model is proposed in [89], where more intrinsic geometric image measures are presented compared to classical snake model. The relation between the computation of the minimum distance curve or geodesics and active contours is the basis of this proposed model. In [81], an improved geodesic model is proposed. In this study, active contours are described by level sets and gradient descent method is used for contour optimization.
Edge-based algorithms [90, 91, 92] are simple and effective to determine the contours having salient gradient, but they have their drawbacks. They are susceptible to boundary leakage problems where the object has weak boundaries and sensitive to inherent image noise.
Region-based methods use statistical quantities (e.g., mean, variance and histograms based on pixel values) to segment an image into objects and background regions [93, 94, 95, 96]. Target objects with weak boundaries or without boundaries can be successfully divided despite of the existence of image noise [97]. Region-based model is widely used in Active contour models. In [98], an active contour model is proposed where no well-defined region boundary is present. Techniques like curve-evolution [99], Mumford–Shah function [100] for segmentation and level set [101] are used here. A region competition algorithm is proposed in [102], which is used as a statistical approach to image segmentation. A variation principle-based minimized version of a generalized Bayes/MDL (minimum description length) is used to derive the competition algorithm. A variation calculus problem for the evolution of the object contour was proposed in [103]. The problem was solved using level sets-based hybrid model combining region-based and boundary-based segmentation of the target object. Particle filter [104] is extended to a region-based image representation for video object segmentation in [105]. The particle filter is reformulated considering image partition for particle filter measurement and that results into enrichment of the existing information.
Visual Representation Based on Local Feature
Visual representation using local features encodes the object appearance information using saliency detection and interest points [106]. A brief discussion on the local feature-based visual representation used in several tracking methods is given below.
In local template-based technique, an object template is continuously fitted to a sequence of video frames in template-based object tracking. Establishing a correspondence between the source image and the reference template is the objective of the template-based method [107]. Template-based visual tracking is considered as a kind of nonlinear optimization problem [108, 109]. In the presence of significant inter-frame object motion, tracking method based on nonlinear optimization has its disadvantage of being trapped in local minima. An alternative approach is proposed in [110] where geometric particle filtering is used in template-based visual tracking. A tracking method for human identification and segmentation was proposed in [111]. A hierarchical approach of part-template matching is introduced here, considering the utility of both local part-based and global template human detectors.
In segmentation-based technique, the cues are incorporated in segmentation-based visual representation of object tracking [112]. In a video sequence, segmenting the target region from the background is a challenging task. In computer graphics domain, it is known as video cutout and matting [113, 114, 115, 116]. Two closely related problems of visual tracking are mentioned in [117]—(i) localizing the position of a target where the video has low or moderate resolution (ii) segmentation of the image of the target object where the video has moderate to high resolution. In the same study, a nonparametric k-nearest-neighbor (kNN) statistical model is used to model the dynamic changing appearance of the image regions. Both localization and segmentation problem are solved as a sequential binary classification problem here. One of the most successful representations for image segmentation and object tracking is superpixels [118, 119, 120]. A discriminative appearance model based on superpixel to distinguish the target and the background, having mid-level cues, is proposed in [121]. A confidence map for target background is computed here to formulate the tracking task.
In scale-invariant feature transform (SIFT)-based technique [122, 123, 124], the image information is transformed into scale-invariant features which may be applied for matching different scenes or views related to the target object [125]. A set of image features are generated through SIFT by following four stages of computations namely—extrema detection, keypoint localization, orientation assignment and keypoint descriptor.
In extrema detection stage, all image locations are searched. A Gaussian difference function is used to detect the probable interesting points that remain unperturbed to the orientation and scale.
In keypoint localization stage, the location and scale are determined by fitting a detailed model at each candidate location.
In orientation assignment stage, local image gradient directions are used to assign one or more orientations to each of the key point locations. Image data are transformed based on the assigned orientation, scale and position of the each feature. All future operations are performed on the transformed image data and invariance is provided to these transformations.
In keypoint descriptor stage, the selected scale is used to measure the local image gradients in the region surrounding each keypoint. A significant amount of local distortion in shape and illumination changes is allowed in the transformed representation.
SIFT-based techniques have its wide use in literature because of its invariance to the scene background change during the tracking. A real-time, low-power system based on SIFT algorithm was proposed in [126]. A database of the features of the known objects is maintained and the individual features are matched with it. A modified version of the approximation of nearest neighbor search algorithm based on the K-d tree and BBF algorithm is used here. SIFT- and PowerPC-based infrared (IR) imaging system is used in [127] to automatically recognize the target object in unknown environments. First, the positional interest points and scale are localized for a moving object. Thereafter, the description of the interest points is built. SIFT and Kalman filter are used in [128] to handle occlusion. In an image sequence, the objects are identified using the SIFT algorithm with the help of the extracted invariant features. The presence of occlusion degrades the accuracy of SIFT. Kalman filter [129] is used here to minimize the effect of occlusion because the estimation of the location of the object in the subsequent frame is done based on the location information about the object in the previous frame.
Saliency detection-based method is applicable to individual images if there is a presence of a well-centered single salient object [130]. Two stages of saliency detection are mentioned in the literature [131]. The first stage involves the detection of the most prominent object and the accurate region of the object is segmented in the second stage. These two stages are rarely separated in practice rather they are often overlapped [132, 133]. In [134], a novel method of real-time extraction of saliency features from the video frames is proposed. Conditional random fields (CRF) [135] are combined with the saliency features and thereafter, a particle filter is applied to track the detected object. In [136], the mean-shift tracker in combination with saliency detection is used for object tracking in dynamic scenes. To minimize the interference of the complex background, first a spatial–temporal saliency feature extraction method is proposed. Furthermore, the tracking performance is enhanced by fusing the top-down visual mechanism in the saliency evaluation method. A novel method of detecting the salient object in images is proposed in [137], where the variability is computed statistically by two scatter matrices to measure the variability between the central and the surrounding objects. The pixel centric most salient regions are defined as a salience support region. The saliency of pixel is estimated through its saliency support region to detect variable-sized multiple salient objects in a scene.
Statistical Modeling
The visual tracking methods are continuously subjected to inevitable appearance changes. In statistical modeling, the object detection is performed dynamically [138].Variations in shape, texture and the correlations between them are represented by the statistical model [139]. A statistical model is categorized into three classes [140] namely—generative model, discriminative model and hybrid model.
In visual tracking, the appearance templates are adaptively generated and updated by the generative model [141, 142]. The appearance model of the target is adaptively updated by the online learning strategy embedded in the tracking framework [143].
A framework based on an online EM algorithm to model the change in appearance during tracking is proposed in [144]. In the presence of image outliers, this model provides robustness when used in a motion-based tracking algorithm. In [145, 146] Adaptive Appearance model is incorporated in a particle filter to realize robust visual tracking. An online learning algorithm is proposed in [147] to generate an image-based representation of the video sequences for visual tracking. A probabilistic appearance manifold [148] is constructed here from a generic prior and a video sequence of the object. An adaptive subspace representation of the target object is proposed in [149], where low-dimensional subspace is incrementally learned and updated. A compact representation of the target is provided here instead of representing the same as a set of independent pixels. Appearance changes due to internal or external factors are reflected since the subspace model is continuously updated by the incremental method. In [35], an incremental tensor subspace learning-based algorithm is proposed for visual tracking. The appearance changes of the target are represented by the algorithm through online learning of a low-dimensional eigenspace representation. In [150], Retinex algorithm [151, 152] is combined with the original image and the resultant is defined as weighted tensor subspace (WTS). WTS is adapted to the target appearance changes by an incremental learning algorithm. In [153], a robust tracking algorithm is proposed to combine sparse appearance models and adaptive template update strategy, which is less sensitive to occlusion. A weighted structural local sparse appearance model is adopted in [154], which combines patch-based gray value and histogram-oriented gradient features for the patch dictionary.
Tracking is defined as a classification problem in discriminative methods [155]. The target is discriminated from the background and updated online. Appearance and environmental changes are handled by a binary classifier which is trained to filter out the target from the background [156, 157]. As this method applies a discriminatively trained detector for tracking purposes, this is also called tracking by detection mechanism [158, 159, 160, 161, 162]. Discriminative methods pertain machine learning approaches to distinguish between the object and non-object [163]. To achieve constructive prophetic performances, online variants are proposed to progressively learn discriminative classification features for distinguishing object and non-object. The main problem is a discriminative feature (for, e.g., color, texture, shape, etc.) may be identical along with the varying background [164]. In [165], a discriminative correlation filter-based (DCF) approach is proposed which is used to evaluate the object in the next frame. Hand-crafted appearance features such as HOG [166], color name feature [167] or a combination of both [168] are usually utilized by DCF-based trackers. To remove ambiguity, a deep motion feature is used which differentiates the target based on discriminative motion pattern and leads to successful tracking after occlusion, addressed in [169]. A discriminative scale space tracking approach (DSST), which learns separate discriminative correlation filters for explicit translation and scale evaluation, is proposed in [170]. A support vector machine (SVM) tracking framework and dictionary learning based on discriminative appearance model are reported in [171]. To track arbitrary object in videos, a real-time, online tracking algorithm is proposed based on discriminative model [172].
The generative and discriminative models have complementary strengths and weaknesses, though they have different characteristics. A combination of generative and discriminative model to get the best practices of both domains is proposed in [172]. A new hybrid model is proposed here to classify weakly labeled training data. A multi-conditional learning framework [173] is proposed in [174] for simultaneously clustering, classifying and dimensionality reduction. Favorable properties of both the models are observed in the multi-conditional learning model. In the same study, it is demonstrated that a generalized superior performance is achieved using the hybrid model of the foreground or background pixel classification problem [175].
From the appearance model, stable properties of appearance are identified and motion estimation is done by weighing on them [144]. Next section elaborates briefly about the motion estimation methodologies mentioned in the literatures.
Motion estimation
In motion estimation, motion vectors [176, 177, 178, 179, 180] are determined to represent the transformation through adjacent 2D image frames in a video sequence [181]. Motion vectors are computed in two ways [182]—pixel-based methods or direct method, and feature-based methods or indirect method. In direct methods [183], motion parameters are estimated directly by measuring the contribution of each pixel that results in optimal usage of the available information and image alignment. In indirect methods, features like corner detection are used and the corresponding features between the frames are matched with a statistical function applied over a local or global area [184]. Image areas are identified where a good correspondence is achievable and computation is concentrated in these areas. The initial estimation of the camera geometry is, thus, obtained. The correspondence of the image regions having less information is guided by this geometry.
In visual tracking, motion can be modeled using a particle filter [140] which is considered as a dynamic state estimation problem. Let the parameters for describing the affine motion of an object is represented by \( m_{t} \) and the subsequent observation vectors denoted by \( o_{t} \). The following two rules are recursively applied to estimate the posterior probability
$$ p(m_{t} |o_{{1:t - 1}} ) = {\text{ }}\int {p\left( {m_{t} |m_{{t - 1}} } \right)p\left( {m_{{t - 1}} |o_{{1:t - 1}} } \right)dm_{{t - 1}} ,} $$
$$ p\left( {m_{t} |o_{1:t} } \right) = \frac{{p\left( {o_{t} |m_{t} } \right)p(m_{t} |o_{1:t - 1} )}}{{P(o_{t} |o_{t:t - 1} )}}, $$
where \( m_{1:t} = \left\{ {m_{1} ,m_{2} , \ldots ,m_{t} } \right\} \) represents state vectors at time \( t \) and\( o_{1:t} = \left\{ {o_{1} ,o_{2} , \ldots ,o_{t} } \right\} \) represents the corresponding observatory states.
The motion model describes the transition of states between the subsequent frames and is denoted by \( p\left( {m_{t} |m_{t - 1} } \right) \). The observation model is denoted by \( p\left( {o_{t} |m_{t} } \right) \) which calculates the probability of an observed image frame to be in a particular object class.
Object Localization
The target location is estimated in subsequent frames by the motion estimation process. The target localization or positioning operation is performed by maximum posterior prediction or greedy search, based on motion estimation [185].
A brief description about visual tracking and the associated models is given in the above section. Visual tracking is one of the rapidly growing fields in computer vision. Numerous algorithms are proposed in literature every year. Several measures to evaluate the visual tracking algorithms are briefly described in the below section.
Visual Tracking Performance
The performance measures represent the difference or correspondence between the predicted and actual ground truth annotations. Several performance measures, widely used in visual tracking [186, 187] are—center location error, bounding box overlaps, tracking length, failure rate, area under the lost-track-ratio curve, etc. A brief description of each of these measures is given below.
Center Location Error
The center location error is one of the widely used measures for evaluating the performance of object tracking. The difference between the center of the manually marked ground truth position (\( r_{t}^{G} \)) and the tracked target's center (\( r_{t}^{T} \)) is computed by computing the Euclidean distance between them [188]. The same is formulated as follows.
In a sequence of length \( n \), the state description of the object \( \left( \varphi \right) \) is given by:
$$ \varphi = \left\{ {\left( {r_{t} ,c_{t} } \right)} \right\}\begin{array}{*{20}c} n \\ {t = 1} \\ \end{array} , $$
where the center of the object is denoted by \( r_{t} \in {\mathcal{R}}^{2} \) and \( r_{t} \) represents the object region at time \( t \).
The central error (\( E_{c} ) \) is formulated as follows:
$$ E_{c} \left( {\varphi^{G} ,\varphi^{T} } \right) = \sqrt {\frac{1}{n}} \mathop \sum \limits_{t = 1}^{n} \left| {r_{t}^{G} - r_{t}^{T} } \right|^{2} . $$
Randomness of the output location is frequent when the track of a target object is lost by the tracking algorithm. In such a scenario, it is difficult to measure the accurate tracking performance [188]. The error due to randomness is minimized in [163] where a threshold distance is maintained from the ground truth object and the percentage of frames within this threshold is calculated to estimate the tracking accuracy.
Bounding Box Overlap
In central location error, the pixel difference is measured, but the scale and size of the target object are not reflected [163]. A popular evaluation metric that minimizes the limitation of the central location error is the overlapping score [189, 190]. The overlap of the ground truth region and the predicted target's region is considered as overlap score \( \left( {S_{r} } \right) \) and the same is formulated as below [191].
$$ S_{r} = \frac{{{\text{Area}}\left( {r_{t}^{G} \cap r_{t}^{T} } \right)}}{{{\text{Area}}\left( {r_{t}^{G} \cup r_{t}^{T} } \right)}}, $$
where \( \cup \) and \( \cap \) represent the union and intersection of two boundary region boxes and the region area is represented by the function \( {\text{Area}}() \).
Both position and size of the bounding boxes of ground truth object and predicted target are considered here and as a result, the significant errors due to tracking failures are minimized.
Tracking Length
Tracking length is a measure which is used in literature [192, 193]; it denotes the number of frames successfully tracked from the initialization of the tracker until its first failure. The tracker's failure cases are explicitly addressed here but it is not effective in the presence of a difficult tracking condition at the initialization of the video sequence.
The problem of tracking length is addressed in the failure rate measure [194, 195]. This is a supervised system where the tracker is reinitialized by a human operator once it suffers failure. The system records the number of manual interventions and the same is used as a comparative performance score. The entire video sequence is considered in the performance evaluation and hence, the dependency of the beginning part, unlike the tracking length measure, is diminished.
Area Under the Lost-Track-Ratio Curve
In [196], a hybrid measure is proposed where several measures are combined into a single measure. Based on the overlap measure (\( S_{r} ) \) which is described in the earlier section, the lost-track ratio \( \gamma \) is computed. In a particular frame, the track is considered to be lost when overlap between the ground truth and the estimated target is smaller than a certain threshold value (\( \beta \)), i.e., \( S_{r} \le \beta , \) where \( \beta \in \left( {0,1} \right) \)
Lost-track ratio is represented by the following formula:
$$ \gamma = \frac{{F_{t} }}{F}, $$
where \( F_{t} \) is the number of frames having a lost track and \( F \) is the total number frames belonging to the estimated target trajectory.
The area under the lost-track (\( {\text{AULT}} \)) is formulated as below:
$$ {\text{AULT}} = \Delta \beta \mathop \sum \limits_{\beta = 0}^{1} \gamma \left( \beta \right). $$
In this method a compact measure is presented where a tracker has to take into account two separate tracking aspects.
Visual tracking has its wide application in the literature. Some of the application areas of visual tracking are briefly described in the below section.
Applications of Visual Tracking
Different methods of visual tracking are used in a wide range of application domains. This section is mainly focused around seven application domains of visual tracking—Medical Science, Space Science, Augmented Reality Applications, Posture estimation, Robotics, Education, Sports, Cinematography, Business and Marketing, and Deep Learning Features.
To improve the robot-assisted laparoscopic surgery system, a human machine interface is presented for instrument localization and automated endoscope manipulation [197, 198]. An "Eye Mouse" based on a low-cost tracking system is implemented in [199], which is used to manipulate computer access for people with drastic disabilities. The study of discrimination between bipolar and schizophrenic disorders by using visual motion processing impairment is found in [200]. Three different applications for analyzing the classification rate and accuracy of the tracking system, namely the control of the mobile robot in the maze, the text writing program "EyeWriter" and the computer game, were observed in [201]. A non-invasive, robust visual tracking method for pupils identification in video sequences captured by low-cost equipment is addressed in [202]. A detailed discussion of eye tracking application in medical science is described in [203].
A visual tracking approach based on color is proposed in [204, 205] for astronauts, which presents a numeric analysis of accuracy on a spectrum of astronaut profiles. A sensitivity-based differential Earth mover's distance (DEMD) algorithm of simplex approach is illustrated and empirically substantiated in the visual tracking context [206]. In [207], an object detection and tracking based on background subtraction, optical flow and CAMShift algorithm is presented to track unusual events successfully in video taken by UAV. A visual tracking algorithm based on deep learning and probabilistic model to form Personal Satellite for tracking the astronauts of the space stations in RGB-D videos, reported in [208].
Augmented Reality (AR) Applications
Augmented reality system on color-based and feature-based visual tracking is implemented on a series of applications such as Sixth Sense [209], Markerless vision-based tracking [210], Asiatic skin segmentation [211], Parallel Tracking and Mapping (PTAM) [212], construction site visualization [213], Face augmentation system [214, 215], etc., reported in [216]. A fully mobile hybrid AR system which combines a vision-based trackers with an inertial tracker to develop energy efficient applications for urban environments is proposed in [217]. An image-based localization of mobile devices using an offline data acquisition is reported in [218]. A robust visual tracking AR system for urban environments by utilizing appearance-based line detection and textured 3D models is addressed in [219].
Posture Estimation
This application domain deals with the images involving humans, which covers facial tracking, hand gesture identification, and the whole-body movement tracing. A model-based non-invasive visual hand tracking system, named 'DigitEyes' for high DOF articulated mechanisms, is described in [220]. The three main approaches for analyzing human gesture and whole-body tracking, namely 2D perspective without explicit shape models, 2D perspective with explicit shape models and 3D outlook, were discussed in [221]. A kinematic real-time model for hand tracking and pose evaluation is proposed to lead a robotic arm in gripping gestures [222]. A 3D LKT algorithm based on model for evaluating 3D head postures from discrete 2D visual frames is proposed in [223].
A real-time system for ego-motion estimation on autonomous ground vehicles with stereo cameras using feature detection algorithm is illustrated in [224]. A visual navigation system is proposed in [225] which can be applied to all kinds of robots. In this paper, the authors categorized and illustrated the visual navigation techniques majorly into map-based navigation [226] and mapless navigation [227]. The motionCUT framework is presented in [228] to detect motion in visual scenes generated by moving cameras and the said technique is applied on the humanoid robot iCub for experimental validation. A vision-based tracking methodology using a stereoscopic vision system for mobile robots is introduced in [229].
Visual tracking technology is widely applicable in the field of educational research. To increase the robustness of the visual prompting for a remedial reading system that helps the end users with identification and pronunciation of terms, a reading assistant is presented in [230]. To implement the said system, a GWGazer system is proposed which combines two different methods, namely interaction technique evaluation [231, 232, 233] and observational research [234, 235, 236]. An ESA (empathic software agent) interface using real-time visual tracking to ease empathetic pertinent behavior is applicable in the virtual education environment within a learning community, reported in [237]. An effective approach towards students' visual attention tracking using an eye tracking methodology to solve multiple choice type problems is addressed in [238]. An information encapsulating process of teacher's consciousness towards the student's requirement using visual tracking is presented in [239], which is beneficial for classroom management system. To facilitate computer educational research using eye tracking methods, a gaze estimation methodology is proposed, which keeps record of a person's visual behavior, reported in [240]. A realistic solution of mathematics teaching based on visual tracking is addressed in [241]. A detailed study of visual tracking in computer programming is described in [242].
Visual tracking holds a strong application field towards Sports. There are several approaches under this domain using different models of visual tracking. The precise tracking of the golfer during a conventional golf swing using dynamic modeling is presented in [243]. A re-sampling and re-weighting particle filter method is proposed to track overlapping athletes in a beach volleyball or football sequence using a single camera, reported in [244]. Improvement in performance of the underwater hockey athletes has been addressed in [245] by inspecting their vision behavior during breath holding exercise and eye tracking. A detailed discussion in this domain is presented in [246, 247].
Apart from these, visual tracking can be broadly used in the field of cinematography [248, 249, 250, 251], cranes systems [252, 253], business and marketing [254, 255, 256, 257, 258, 259, 260] and deep learning applications [260, 261, 262, 263, 264, 265].
The traditional visual tracking methods perform competently in well-controlled environments. The image representations used by the trackers may not be sufficient for accurate robust tracking in complex environments. Moreover, the visual tracking problem becomes more challenging due to the presence of occlusion, un-organized background, abrupt fast random motion, dramatic changes in illumination, and significant changes in pose and viewpoints.
Support vector machine (SVM) classifier was fused with optical flow-based tracker in [266] for visual tracking. The classifier helps to detect the location of the object in the next frame even though a certain part of the object is missing. In this method, the next frame is not only matched with the previous frame, but also against all possible patterns learned by the classifier. More precise bounding boxes are identified in [267] using a joint classification–regression random forest model. Here, authors demonstrated that the aspect ratio of the variable bounding boxes was accurately predicted by this model. In [268], a neural network-based tracking system was proposed to describe a collection of tracking structures that enhance the effectiveness and adaptability of a visual tracker. Multinetwork architectures are used here that increase the accuracy and stability of visual tracking.
An extensive bibliographic study has been carried out based on the previously published works listed in Scopus database for the period of last 5 years (2014–2018). Amongst 2453 listed works, 48.9% articles were published in journals and 44.5% in conferences. It is observed that major contributions in this area are from computer science engineers (42%). Medical science and related domains (6%) also have notable contribution in this arena. The leading contributors are from countries like China (57%), USA (12%), UK (5%), etc. Figure 2 clearly depicts the increasing interest in vision tracking in the last few years.
Trends of visual tracking research
The above study clearly shows that, in recent years with the advent of deep learning, the challenging problem to track a moving object with a complex background has made significant progress [269]. Unlike previous trackers, more emphasis is put on unsupervised feature learning. A noteworthy performance improvement in visual tracking is observed with the introduction of deep neural networks (DNN) [269, 270] and convolutional neural networks (CNN) [271, 272, 273, 274, 275]. DNN, especially CNN, demonstrate a strong efficiency in learning feature representations from huge annotated visual data unlike handcrafted features. High-level rich semantic information is carried out by the object classes which assist in categorizing the objects. These features are also tolerant to data corruption. A significant improvement in accuracy is observed in object and saliency detection besides image classification due to the combination of CNNs with the traditional trackers.
An overall study on visual tracking and its performance measures is presented in this study. Object initialization is the first stage of visual tracking. Initialization could be manual or automatic. The object properties like appearance, velocity, location, etc. are represented by observation model or appearance model. Special features like color, gradient, texture, shape, super-pixel, depth, motion, optical flow, etc. are used for robust visual tracking, that describe the appearance model. Appearance modeling consists of visual representation and statistical modeling. In visual representation, various visual features are used to form robust object descriptors; whereas in statistical modeling, a mathematical model for identifying the target object is developed. In the last few decades, a huge number of visual tracking algorithms are proposed in the literature. A comprehensive review of different measures to evaluate the tracking algorithms is presented in this study. Visual tracking is applied in a wide range of applications including medical science, space science, robotics, education, sports, etc. Some of the application areas of visual tracking and related studies in the literature are presented here.
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Sun Y, Meng MQH. Multiple moving objects tracking for automated visual surveillance. In: 2015 IEEE international conference on information and automation. 2015; IEEE. pp. 1617–1621.Google Scholar
Wei W, Yunxiao A. Vision-based human motion recognition: a survey. In: 2009 Second international conference on intelligent networks and intelligent systems. IEEE; 2009. pp. 386–389.Google Scholar
Zha ZJ, Wang M, Zheng YT, Yang Y, Hong R, Chua TS. Interactive video indexing with statistical active learning. IEEE Trans Multimed. 2012;14(1):17–27.CrossRefGoogle Scholar
Ying S, Yang Y. Study on vehicle navigation system with real-time traffic information. In: 2008 International conference on computer science and software engineering. vol. 4. IEEE; 2008. pp. 1079–1082.Google Scholar
Huang K, Petkovsek S, Poudel B, Ning T. A human-computer interface design using automatic gaze tracking. In: 2012 IEEE 11th international conference on signal processing. vol. 3. IEEE; 2012. pp. 1633–1636.Google Scholar
Alenljung B, Lindblom J, Andreasson R, Ziemke T. User experience in social human-robot interaction. In: Rapid automation: concepts, methodologies, tools, and applications. IGI Global; 2019. pp. 1468–1490.Google Scholar
Chincholkar AA, Bhoyar MSA, Dagwar MSN. Moving object tracking and detection in videos using MATLAB: a review. Int J Adv Res Comput Electron. 2014;1(5):2348–5523.Google Scholar
Abdelkader MF, Chellappa R, Zheng Q, Chan AL. Integrated motion detection and tracking for visual surveillance. In: Fourth IEEE International Conference on Computer Vision Systems (ICVS'06). IEEE; 2006. p. 28.Google Scholar
Courtney JD. Automatic video indexing via object motion analysis. Pattern Recogn. 1997;30(4):607–25.CrossRefGoogle Scholar
Chae KH, Moon YS, Ko NY. Visual tracking of objects for unmanned surface vehicle navigation. In: 2016 16th International Conference on Control, Automation and Systems (ICCAS). IEEE; 2016. pp. 335–337.Google Scholar
Phung MD, Tran QV, Hara K, Inagaki H, Abe M. Easy-setup eye movement recording system for human-computer interaction. In: 2008 IEEE international conference on research, innovation and vision for the future in computing and communication technologies. 2008; IEEE. pp. 292–297.Google Scholar
Kavya R. Feature extraction technique for robust and fast visual tracking: a typical review. Int J Emerg Eng Res Technol. 2015;3(1):98–104.Google Scholar
Kang B, Liang D, Yang Z. Robust visual tracking via global context regularized locality-constrained linear coding. Optik. 2019;183:232–40.CrossRefGoogle Scholar
Yilmaz A, Javed O, Shah M. Object tracking: a survey. Acm Comput Surv (CSUR). 2006;38(4):13.CrossRefGoogle Scholar
Jalal, A. S., & Singh, V. (2012). The state-of-the-art in visual object tracking. Informatica, 36(3).Google Scholar
Li X, Hu W, Shen C, Zhang Z, Dick A, Hengel AVD. A survey of appearance models in visual object tracking. ACM Trans Intell Syst Technol (TIST). 2013;4(4):58.Google Scholar
Anuradha K, Anand V, Raajan NR. Identification of human actor in various scenarios by applying background modeling. Multimed Tools Appl. 2019. https://doi.org/10.1007/s11042-019-7443-5.CrossRefGoogle Scholar
Sghaier S, Farhat W, Souani C. Novel technique for 3D face recognition using anthropometric methodology. Int J Ambient Comput Intell (IJACI). 2018;9(1):60–77.CrossRefGoogle Scholar
Zhang Y, Xu X, Liu X. Robust and high performance face detector. arXiv preprint arXiv:1901.02350. 2019.
Surekha B, Nazare KJ, Raju SV, Dey N. Attendance recording system using partial face recognition algorithm. In: Intelligent techniques in signal processing for multimedia security. Springer, Cham; 2017. pp. 293–319.Google Scholar
Chaki J, Dey N, Shi F, Sherratt RS. Pattern mining approaches used in sensor-based biometric recognition: a review. IEEE Sens J. 2019;19(10):3569–80.CrossRefGoogle Scholar
Dey N, Mukherjee A. Embedded systems and robotics with open source tools. USA: CRC Press; 2018.CrossRefGoogle Scholar
Shell HSM, Arora V, Dutta A, Behera L. Face feature tracking with automatic initialization and failure recovery. In: 2010 IEEE conference on cybernetics and intelligent systems. IEEE; 2010. pp. 96–101.Google Scholar
Schmidt J. Automatic initialization for body tracking using appearance to learn a model for tracking human upper body motions. 2008.Google Scholar
Fan L, Wang Z, Cail B, Tao C, Zhang Z, Wang Y et al. A survey on multiple object tracking algorithm. In: 2016 IEEE international conference on information and automation (ICIA). IEEE; 2016. pp. 1855–1862.Google Scholar
Liu S, Feng Y. Real-time fast moving object tracking in severely degraded videos captured by unmanned aerial vehicle. Int J Adv Rob Syst. 2018;15(1):1729881418759108.MathSciNetGoogle Scholar
Lu J, Li H. The Importance of Feature Representation for Visual Tracking Systems with Discriminative Methods. In: 2015 7th International conference on intelligent human-machine systems and cybernetics. vol. 2. IEEE; 2015. pp. 190–193.Google Scholar
Saleemi I, Hartung L, Shah M. Scene understanding by statistical modeling of motion patterns. In: 2010 IEEE computer society conference on computer vision and pattern recognition. IEEE; 2010. pp. 2069–2076.Google Scholar
Zhang K, Liu Q, Yang J, Yang MH. Visual tracking via Boolean map representations. Pattern Recogn. 2018;81:147–60.CrossRefGoogle Scholar
Ernst D, Marée R, Wehenkel L. Reinforcement learning with raw image pixels as input state. In: Advances in machine vision, image processing, and pattern analysis. Springer, Berlin; 2006. pp. 446–454.CrossRefGoogle Scholar
Sahu DK, Jawahar CV. Unsupervised feature learning for optical character recognition. In: 2015 13th International conference on document analysis and recognition (ICDAR). IEEE; 2015. pp. 1041–1045.Google Scholar
Silveira G, Malis E. Real-time visual tracking under arbitrary illumination changes. In: 2007 IEEE conference on computer vision and pattern recognition. IEEE; 2007. pp. 1–6.Google Scholar
Lucas BD, Kanade T. An iterative image registration technique with an application to stereo vision. 1981.Google Scholar
Ho J, Lee KC, Yang MH, Kriegman D. Visual tracking using learned linear subspaces. In: CVPR (1). 2004. pp. 782–789.Google Scholar
Li X, Hu W, Zhang Z, Zhang X, Luo G. Robust visual tracking based on incremental tensor subspace learning. In: 2007 IEEE 11th international conference on computer vision. IEEE; 2007. pp. 1–8.Google Scholar
Wen J, Li X, Gao X, Tao D. Incremental learning of weighted tensor subspace for visual tracking. In: 2009 IEEE international conference on systems, man and cybernetics. IEEE; 2009. pp. 3688–3693.Google Scholar
Hu W, Li X, Zhang X, Shi X, Maybank S, Zhang Z. Incremental tensor subspace learning and its applications to foreground segmentation and tracking. Int J Comput Vis. 2011;91(3):303–27.zbMATHCrossRefGoogle Scholar
Yang S, Xie Y, Li P, Wen H, Luo H, He Z. Visual object tracking robust to illumination variation based on hyperline clustering. Information. 2019;10(1):26.CrossRefGoogle Scholar
Dey N. Uneven illumination correction of digital images: a survey of the state-of-the-art. Optik. 2019;183:483–95.CrossRefGoogle Scholar
Wang T, Gu IY, Shi P. Object tracking using incremental 2D-PCA learning and ML estimation. In: 2007 IEEE international conference on acoustics, speech and signal processing-ICASSP'07. vol. 1. IEEE; 2007. pp. I–933.Google Scholar
Li X, Hu W, Zhang Z, Zhang X, Zhu M, Cheng J. Visual tracking via incremental log-euclideanriemannian subspace learning. In: 2008 IEEE conference on computer vision and pattern recognition. IEEE; 2008. pp. 1–8.Google Scholar
Wang H, Suter D, Schindler K, Shen C. Adaptive object tracking based on an effective appearance filter. IEEE Trans Pattern Anal Mach Intell. 2007;29(9):1661–7.CrossRefGoogle Scholar
Allili MS, Ziou D. Object of interest segmentation and tracking by using feature selection and active contours. In: 2007 IEEE conference on computer vision and pattern recognition. IEEE; 2007. pp. 1–8.Google Scholar
Akpinar S, Alpaslan FN. Video action recognition using an optical flow based representation. In: Proceedings of the international conference on image processing, computer vision, and pattern recognition (IPCV) (p. 1). The Steering Committee of the World Congress in Computer Science, Computer Engineering and Applied Computing (WorldComp). 2014.Google Scholar
Horn BK, Schunck BG. Determining optical flow. Artif Intell. 1981;17(1–3):185–203.CrossRefGoogle Scholar
Barron JL, Fleet DJ, Beauchemin SS. Performance of optical flow techniques. Int J Comput Vis. 1994;12(1):43–77.CrossRefGoogle Scholar
Uras S, Girosi F, Verri A, Torre V. A computational approach to motion perception. Biol Cybern. 1988;60(2):79–87.CrossRefGoogle Scholar
Camus T. Real-time quantized optical flow. Real-Time Imaging. 1997;3(2):71–86.CrossRefGoogle Scholar
Proesmans M, Van Gool L, Pauwels E, Oosterlinck A. Determination of optical flow and its discontinuities using non-linear diffusion. In: European Conference on Computer Vision. Springer, Berlin; 1994. pp. 294–304.CrossRefGoogle Scholar
Fuh CS, Maragos P. Region-based optical flow estimation. In: Proceedings CVPR'89: IEEE computer society conference on computer vision and pattern recognition. IEEE; 1989. pp. 130–135.Google Scholar
O'Donovan P. Optical flow: techniques and applications. Int J Comput Vis. 2005;1–26.Google Scholar
Anandan P. A computational framework and an algorithm for the measurement of visual motion. Int J Comput Vis. 1989;2(3):283–310.CrossRefGoogle Scholar
Singh A. An estimation-theoretic framework for image-flow computation. In: Proceedings third international conference on computer vision. IEEE; 1990. pp. 168–177.Google Scholar
Li Y, Huttenlocher DP. Learning for optical flow using stochastic optimization. In: European conference on computer vision. Springer, Berlin; 2008. pp. 379–391.Google Scholar
Barniv Y. Velocity filtering applied to optical flow calculations. 1990.Google Scholar
Argyriou V. Asymmetric bilateral phase correlation for optical flow estimation in the frequency domain. arXiv preprint arXiv:1811.00327. 2018.
Buxton BF, Buxton H. Computation of optic flow from the motion of edge features in image sequences. Image Vis Comput. 1984;2(2):59–75.CrossRefGoogle Scholar
Fleet DJ, Jepson AD. Computation of component image velocity from local phase information. Int J Comput Vis. 1990;5(1):77–104.CrossRefGoogle Scholar
Lee JY, Yu W. Visual tracking by partition-based histogram backprojection and maximum support criteria. In: 2011 IEEE International Conference on Robotics and Biomimetics. IEEE; 2011. pp. 2860–2865.Google Scholar
Zhi-Qiang H, Xiang L, Wang-Sheng Y, Wu L, An-Qi H. Mean-shift tracking algorithm with improved background-weighted histogram. In: 2014 Fifth international conference on intelligent systems design and engineering applications. IEEE; 2014. pp. 597–602.Google Scholar
Birchfield S. Elliptical head tracking using intensity gradients and color histograms. In: Proceedings. 1998 IEEE Computer Society conference on computer vision and pattern recognition (Cat. No. 98CB36231). IEEE; 1998. pp. 232–237.Google Scholar
Comaniciu D, Ramesh V, Meer P. Real-time tracking of non-rigid objects using mean shift. In: Proceedings IEEE conference on computer vision and pattern recognition. CVPR 2000 (Cat. No. PR00662). vol. 2. IEEE; 2000. pp. 142–149.Google Scholar
Viola P, Jones M. Rapid object detection using a boosted cascade of simple features. CVPR. 2001;1(1):511–8.Google Scholar
Porikli F. Integral histogram: a fast way to extract histograms in cartesian spaces. In: 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR'05). Vol. 1. IEEE; 2005. pp. 829–836.Google Scholar
Parameswaran V, Ramesh V, Zoghlami I. Tunable kernels for tracking. In: 2006 IEEE computer society conference on computer vision and pattern recognition (CVPR'06). Vol. 2. IEEE; 2006. pp. 2179–2186.Google Scholar
Fan Z, Yang M, Wu Y, Hua G, Yu T. Efficient optimal kernel placement for reliable visual tracking. In: 2006 IEEE computer society conference on computer vision and pattern recognition (CVPR'06). Vol. 1. IEEE; 2006. pp. 658–665.Google Scholar
Nejhum SS, Ho J, Yang MH. Visual tracking with histograms and articulating blocks. In: 2008 IEEE conference on computer vision and pattern recognition. IEEE; 2008. pp. 1–8.Google Scholar
Birchfield ST, Rangarajan S. Spatiograms versus histograms for region-based tracking. In: 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR'05). Vol. 2. IEEE; 2005. pp. 1158–1163.Google Scholar
Zhao A. Robust histogram-based object tracking in image sequences. In: 9th Biennial conference of the Australian pattern recognition society on digital image computing techniques and applications (DICTA 2007), IEEE; 2007. pp. 45–52.Google Scholar
Djouadi A, Snorrason O, Garber FD. The quality of training sample estimates of the bhattacharyya coefficient. IEEE Trans Pattern Anal Mach Intell. 1990;12(1):92–7.CrossRefGoogle Scholar
Kailath T. The divergence and Bhattacharyya distance measures in signal selection. IEEE Trans Commun Technol. 1967;15(1):52–60.CrossRefGoogle Scholar
Aherne FJ, Thacker NA, Rockett PI. The Bhattacharyya metric as an absolute similarity measure for frequency coded data. Kybernetika. 1998;34(4):363–8.MathSciNetzbMATHGoogle Scholar
Wu Y, Wang J, Lu H. Real-time visual tracking via incremental covariance model update on Log-Euclidean Riemannian manifold. In: 2009 Chinese conference on pattern recognition. IEEE; pp. 1–5.Google Scholar
Tuzel O, Porikli F, Meer P. Region covariance: a fast descriptor for detection and classification. In: European conference on computer vision. Springer, Berlin; 2006. pp. 589–600.CrossRefGoogle Scholar
Porikli F, Tuzel O, Meer P. Covariance tracking using model update based on lie algebra. In: 2006 IEEE computer society conference on computer vision and pattern recognition (CVPR'06). Vol. 1. IEEE; 2006. pp. 728–735.Google Scholar
Duflot LA, Reisenhofer R, Tamadazte B, Andreff N, Krupa A. Wavelet and shearlet-based image representations for visual servoing. Int J Robot Res. 2018; 0278364918769739.Google Scholar
Krueger V, Sommer G. Efficient head pose estimation with Gabor wavelet networks. In: BMVC. pp. 1–10.Google Scholar
Krüger V, Sommer G. Gabor wavelet networks for object representation. In: Multi-image analysis. Springer, Berlin; 2001. pp. 115–128.CrossRefGoogle Scholar
Feris RS, Krueger V, Cesar RM Jr. A wavelet subspace method for real-time face tracking. Real-Time Imaging. 2004;10(6):339–50.CrossRefGoogle Scholar
He C, Zheng YF, Ahalt SC. Object tracking using the Gabor wavelet transform and the golden section algorithm. IEEE Trans Multimed. 2002;4(4):528–38.CrossRefGoogle Scholar
Paragios N, Deriche R. Geodesic active contours and level sets for the detection and tracking of moving objects. IEEE Trans Pattern Anal Mach Intell. 2000;22(3):266–80.CrossRefGoogle Scholar
Cremers D. Dynamical statistical shape priors for level set-based tracking. IEEE Trans Pattern Anal Mach Intell. 2006;28(8):1262–73.CrossRefGoogle Scholar
Vaswani N, Rathi Y, Yezzi A, Tannenbaum A. Pf-mt with an interpolation effective basis for tracking local contour deformations. IEEE Trans. Image Process. 2008;19(4):841–57.zbMATHCrossRefGoogle Scholar
Sun X, Yao H, Zhang S. A novel supervised level set method for non-rigid object tracking. In: CVPR 2011. IEEE; 2011. pp. 3393–3400.Google Scholar
Musavi SHA, Chowdhry BS, Bhatti J. Object tracking based on active contour modeling. In: 2014 4th International conference on wireless communications, vehicular technology, information theory and aerospace and electronic systems (VITAE). IEEE; 2014. pp. 1–5.Google Scholar
Hu W, Zhou X, Li W, Luo W, Zhang X, Maybank S. Active contour-based visual tracking by integrating colors, shapes, and motions. IEEE Trans Image Process. 2013;22(5):1778–92.MathSciNetzbMATHCrossRefGoogle Scholar
Kass M, Witkin A, Terzopoulos D. Snakes: active contour models. Int J Comput Vis. 1988;1(4):321–31.zbMATHCrossRefGoogle Scholar
Caselles V, Kimmel R, Sapiro G. Geodesic active contours. Int J Comput Vis. 1997;22(1):61–79.zbMATHCrossRefGoogle Scholar
Hore S, Chakraborty S, Chatterjee S, Dey N, Ashour AS, Van Chung L, Le DN. An integrated interactive technique for image segmentation using stack based seeded region growing and thresholding. Int J Electr Comput Eng. 2016;6(6):2088–8708.Google Scholar
Ashour AS, Samanta S, Dey N, Kausar N, Abdessalemkaraa WB, Hassanien AE. Computed tomography image enhancement using cuckoo search: a log transform based approach. J Signal Inf Process. 2015;6(03):244.Google Scholar
Araki T, Ikeda N, Dey N, Acharjee S, Molinari F, Saba L, et al. Shape-based approach for coronary calcium lesion volume measurement on intravascular ultrasound imaging and its association with carotid intima-media thickness. J Ultrasound Med. 2015;34(3):469–82.CrossRefGoogle Scholar
Tuan TM, Fujita H, Dey N, Ashour AS, Ngoc VTN, Chu DT. Dental diagnosis from X-ray images: an expert system based on fuzzy computing. Biomed Signal Process Control. 2018;39:64–73.CrossRefGoogle Scholar
Samantaa S, Dey N, Das P, Acharjee S, Chaudhuri SS. Multilevel threshold based gray scale image segmentation using cuckoo search. arXiv preprint arXiv:1307.0277. 2013.
Rajinikanth V, Dey N, Satapathy SC, Ashour AS. An approach to examine magnetic resonance angiography based on Tsallis entropy and deformable snake model. Futur Gener Comput Syst. 2018;85:160–72.CrossRefGoogle Scholar
Kumar R, Talukdar FA, Dey N, Ashour AS, Santhi V, Balas VE, Shi F. Histogram thresholding in image segmentation: a joint level set method and lattice boltzmann method based approach. In: Information technology and intelligent transportation systems. Springer, Cham; 2017. pp. 529–539.Google Scholar
Srikham M. Active contours segmentation with edge based and local region based. In: Proceedings of the 21st international conference on pattern recognition (ICPR2012). IEEE; 2012. pp. 1989–1992.Google Scholar
Chan TF, Vese LA. Active contours without edges. IEEE Trans Image Process. 2001;10(2):266–77.zbMATHCrossRefGoogle Scholar
Feng H, Castanon DA, Karl WC. A curve evolution approach for image segmentation using adaptive flows. In: Proceedings eighth IEEE international conference on computer vision. ICCV 2001. Vol. 2. IEEE; 2001. pp. 494–499.Google Scholar
Tsai A, Yezzi A, Willsky AS. Curve evolution implementation of the Mumford-Shah functional for image segmentation, denoising, interpolation, and magnification. 2001.zbMATHCrossRefGoogle Scholar
Osher S, Sethian JA. Fronts propagating with curvature-dependent speed: algorithms based on Hamilton–Jacobi formulations. J Comput Phys. 1988;79(1):12–49.MathSciNetzbMATHCrossRefGoogle Scholar
Zhu SC, Yuille A. Region competition: unifying snakes, region growing, and Bayes/MDL for multiband image segmentation. IEEE Trans Pattern Anal Mach Intell. 1996;9:884–900.Google Scholar
Yilmaz A, Li X, Shah M. Object contour tracking using level sets. In: Asian conference on computer vision. 2004.Google Scholar
Wang F. Particle filters for visual tracking. In: International conference on computer science and information engineering. Springer, Berlin; 2011. pp. 107–112.Google Scholar
Varas D, Marques F. Region-based particle filter for video object segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 3470–3477.Google Scholar
Li H, Wang Y. Object of interest tracking based on visual saliency and feature points matching. 2015.Google Scholar
Chantara W, Mun JH, Shin DW, Ho YS. Object tracking using adaptive template matching. IEIE Trans Smart Process Comput. 2015;4(1):1–9.CrossRefGoogle Scholar
Baker S, Matthews I. Lucas-kanade 20 years on: a unifying framework. Int J Comput Vis. 2004;56(3):221–55.CrossRefGoogle Scholar
Benhimane S, Malis E. Homography-based 2d visual tracking and servoing. Int J Robot Res. 2007;26(7):661–76.CrossRefGoogle Scholar
Kwon J, Lee HS, Park FC, Lee KM. A geometric particle filter for template-based visual tracking. IEEE Trans Pattern Anal Mach Intell. 2014;36(4):625–43.CrossRefGoogle Scholar
Lin Z, Davis LS, Doermann D, DeMenthon D. Hierarchical part-template matching for human detection and segmentation. In: 2007 IEEE 11th international conference on computer vision. IEEE; 2007. pp. 1–8.Google Scholar
Ren X, Malik J. Tracking as repeated figure/ground segmentation. In: CVPR. Vol. 1. 2007. p. 7.Google Scholar
Chuang YY, Agarwala A, Curless B, Salesin DH, Szeliski R. Video matting of complex scenes. In: ACM transactions on graphics (ToG). Vol. 21, No. 3. ACM; 2002. pp. 243–248.Google Scholar
Wang J, Bhat P, Colburn RA, Agrawala M, Cohen MF. Interactive video cutout. In: ACM transactions on graphics (ToG). Vol. 24, No. 3. ACM; pp. 585–594.Google Scholar
Li Y, Sun J, Tang CK, Shum HY. Lazy snapping. ACM Trans Graph (ToG). 2004;23(3):303–8.CrossRefGoogle Scholar
Rother C, Kolmogorov V, Blake A. Interactive foreground extraction using iterated graph cuts. ACM Trans Graph. 2004;23:3.CrossRefGoogle Scholar
Lu L, Hager GD. A nonparametric treatment for location/segmentation based visual tracking. In: 2007 IEEE conference on computer vision and pattern recognition. IEEE; pp. 1–8.Google Scholar
Levinshtein A, Stere A, Kutulakos KN, Fleet DJ, Dickinson SJ, Siddiqi K. Turbopixels: fast superpixels using geometric flows. IEEE Trans Pattern Anal Mach Intell. 2009;31(12):2290–7.CrossRefGoogle Scholar
Achanta R, Shaji A, Smith K, Lucchi A, Fua P, Süsstrunk S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans Pattern Anal Mach Intell. 2012;34(11):2274–82.CrossRefGoogle Scholar
Hu J, Fan XP, Liu S, Huang L. Robust target tracking algorithm based on superpixel visual attention mechanism: robust target tracking algorithm. Int J Ambient Comput Intell (IJACI). 2019;10(2):1–17.CrossRefGoogle Scholar
Wang S, Lu H, Yang F, Yang MH. Superpixel tracking. In: 2011 International conference on computer vision. IEEE; 2011. pp. 1323–1330.Google Scholar
Dey N, Ashour AS, Hassanien AE. Feature detectors and descriptors generations with numerous images and video applications: a recap. In: Feature detectors and motion detection in video processing. IGI Global; 2017. pp. 36–65.Google Scholar
Hore S, Bhattacharya T, Dey N, Hassanien AE, Banerjee A, Chaudhuri SB. A real time dactylology based feature extractrion for selective image encryption and artificial neural network. In: Image feature detectors and descriptors. Springer, Cham; 2016. pp. 203–226.CrossRefGoogle Scholar
Tharwat A, Gaber T, Awad YM, Dey N, Hassanien AE. Plants identification using feature fusion technique and bagging classifier. In: The 1st international conference on advanced intelligent system and informatics (AISI2015), November 28–30, 2015, Beni Suef, Egypt. Springer, Cham; 2016. pp. 461–471.Google Scholar
Lowe DG. Distinctive image features from scale-invariant keypoints. Int J Comput Vis. 2004;60(2):91–110.CrossRefGoogle Scholar
Wang Z, Xiao H, He W, Wen F, Yuan K. Real-time SIFT-based object recognition system. In: 2013 IEEE international conference on mechatronics and automation. IEEE; 2013; pp. 1361–1366.Google Scholar
Park C, Jung S. SIFT-based object recognition for tracking in infrared imaging system. In: 2009 34th International conference on infrared, millimeter, and terahertz waves; IEEE; 2009. pp. 1–2.Google Scholar
Mirunalini P, Jaisakthi SM, Sujana R. Tracking of object in occluded and non-occluded environment using SIFT and Kalman filter. In: TENCON 2017-2017 IEEE Region 10 Conference. IEEE; 2017. pp. 1290–1295.Google Scholar
Li Q, Li R, Ji K, Dai W. Kalman filter and its application. In: 2015 8th International Conference on Intelligent Networks and Intelligent Systems (ICINIS). IEEE; 2015. pp. 74–77.Google Scholar
Cane T, Ferryman J. Saliency-based detection for maritime object tracking. In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops. 2016. pp. 18–25.Google Scholar
Borji A, Cheng MM, Hou Q, Jiang H, Li J. Salient object detection: a survey. arXiv preprint arXiv:1411.5878. 2014.
Itti L, Koch C, Niebur E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans Pattern Anal Mach Intell. 1998;11:1254–9.CrossRefGoogle Scholar
Liu T, Yuan Z, Sun J, Wang J, Zheng N, Tang X, Shum HY. Learning to detect a salient object. IEEE Trans Pattern Anal Mach Intell. 2011;33(2):353–67.CrossRefGoogle Scholar
Zhang G, Yuan Z, Zheng N, Sheng X, Liu T. Visual saliency based object tracking. In: Asian conference on computer vision. 2009; Springer, Berlin. pp. 193–203.CrossRefGoogle Scholar
Taycher L, Shakhnarovich G, Demirdjian D, Darrell T. Conditional random people: tracking humans with crfs and grid filters (No. MIT-CSAIL-TR-2005-079). Massachusetts Inst of Tech Cambridge Computer Science and Artificial Intelligence Lab. 2005.Google Scholar
Jeong J, Yoon TS, Park JB. Mean shift tracker combined with online learning-based detector and Kalman filtering for real-time tracking. Expert Syst Appl. 2017;79:194–206.CrossRefGoogle Scholar
Xu L, Zeng L, Duan H, Sowah NL. Saliency detection in complex scenes. EURASIP J Image Video Process. 2014;2014(1):31.CrossRefGoogle Scholar
Liu Q, Zhao X, Hou Z. Survey of single-target visual tracking methods based on online learning. IET Comput Vis. 2014;8(5):419–28.CrossRefGoogle Scholar
Bacivarov I, Ionita M, Corcoran P. Statistical models of appearance for eye tracking and eye-blink detection and measurement. IEEE Trans Consum Electron. 2008;54(3):1312–20.CrossRefGoogle Scholar
Dou J, Qin Q, Tu Z. Robust visual tracking based on generative and discriminative model collaboration. Multimed Tools Appl. 2017;76(14):15839–66.CrossRefGoogle Scholar
Kawamoto K, Yonekawa T, Okamoto K. Visual vehicle tracking based on an appearance generative model. In: The 6th international conference on soft computing and intelligent systems, and the 13th international symposium on advanced intelligence systems. IEEE; 2012. pp. 711–714.Google Scholar
Chakraborty B, Bhattacharyya S, Chakraborty S. Generative model based video shot boundary detection for automated surveillance. Int J Ambient Comput Intell (IJACI). 2018;9(4):69–95.CrossRefGoogle Scholar
Remya KV, Vipin Krishnan CV. Survey of generative and discriminative appearance models in visual object tracking. Int J Adv Res Ideas Innov Technol. 2018;4(1). www.IJARIIT.com.
Jepson AD, Fleet DJ, El-Maraghi TF. Robust online appearance models for visual tracking. IEEE Trans Pattern Anal Mach Intell. 2003;25(10):1296–311.CrossRefGoogle Scholar
Zhou SK, Chellappa R, Moghaddam B. Visual tracking and recognition using appearance-adaptive models in particle filters. IEEE Trans Image Process. 2004;13(11):1491–506.CrossRefGoogle Scholar
Gao M, Shen J, Jiang J. Visual tracking using improved flower pollination algorithm. Optik. 2018;156:522–9.CrossRefGoogle Scholar
Yang H, Shao L, Zheng F, Wang L, Song Z. Recent advances and trends in visual tracking: a review. Neurocomputing. 2011;74(18):3823–31.CrossRefGoogle Scholar
Lee KC, Ho J, Yang MH, Kriegman D. Video-based face recognition using probabilistic appearance manifolds. In: IEEE computer society conference on computer vision and pattern recognition. Vol. 1. IEEE Computer Society; 1999. pp. I–313.Google Scholar
Ross DA, Lim J, Lin RS, Yang MH. Incremental learning for robust visual tracking. Int J Comput Vision. 2008;77(1–3):125–41.CrossRefGoogle Scholar
Funt BV, Ciurea F, McCann JJ. Retinex in matlab tm. J Electron Imaging. 2004;13(1):48–58.CrossRefGoogle Scholar
Ju MH, Kang HB. Illumination invariant face tracking and recognition. 2008.Google Scholar
Jia X, Lu H, Yang MH. Visual tracking via adaptive structural local sparse appearance model. In: 2012 IEEE Conference on computer vision and pattern recognition. IEEE. 2012. pp. 1822–1829.Google Scholar
Dou Jianfang, Qin Qin, Tu Zimei. Robust visual tracking based on generative and discriminative model collaboration. Multimed Tools Appl. 2016. https://doi.org/10.1007/s11042-016-3872-6.CrossRefGoogle Scholar
Zhang K, Zhang L, Yang MH. Real-time compressive tracking. In: European conference on computer vision. Springer, Berlin; 2012. pp. 864–877.CrossRefGoogle Scholar
Zhou T, Liu F, Bhaskar H, Yang J. Robust visual tracking via online discriminative and low-rank dictionary learning. IEEE Trans Cybern. 2018;48(9):2643–55.CrossRefGoogle Scholar
Fan H, Xiang J, Li G, Ni F. Robust visual tracking via deep discriminative model. In: 2017 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE; 2017. pp. 1927–1931.Google Scholar
Babenko B, Yang MH, Belongie S. Robust object tracking with online multiple instance learning. IEEE Trans Pattern Anal Mach Intell. 2011;33(8):1619–32.CrossRefGoogle Scholar
Hare S, Saffari A, Struck PHT. Structured output tracking with kernels. In: IEEE international conference on computer vision. IEEE; 2012. pp. 263–270.Google Scholar
Avidan S. Support vector tracking. In: Proceedings of the 2001 IEEE computer society conference on computer vision and pattern recognition. CVPR 2001. Vol. 1. IEEE; 2001. pp. I–I.Google Scholar
Grabner H, Leistner C, Bischof H. Semi-supervised on-line boosting for robust tracking. In: European conference on computer vision. Springer, Berlin; 2008. pp. 234-247.Google Scholar
Saffari A, Leistner C, Santner J, Godec M, Bischof H. On-line random forests. In: 2009 IEEE 12th international conference on computer vision workshops, ICCV workshops. IEEE; 2009. pp. 1393–1400.Google Scholar
Henriques JF, Caseiro R, Martins P, Batista J. Exploiting the circulant structure of tracking-by-detection with kernels. In: European conference on computer vision. Springer, Berlin; 2012. pp. 702–715.CrossRefGoogle Scholar
Li X, Liu Q, He Z, Wang H, Zhang C, Chen WS. A multi-view model for visual tracking via correlation filters. Knowl-Based Syst. 2016;113:88–99.CrossRefGoogle Scholar
Bolme DS, Beveridge JR, Draper BA, Lui YM. Visual object tracking using adaptive correlation filters. In: 2010 IEEE computer society conference on computer vision and pattern recognition. IEEE; 2010. pp. 2544–2550.Google Scholar
Danelljan M, Häger G, Khan F, Felsberg M. Accurate scale estimation for robust visual tracking. In: British machine vision conference, Nottingham, September 1–5, 2014. BMVA Press.Google Scholar
Danelljan M, Shahbaz Khan F, Felsberg M, Van de Weijer J. Adaptive color attributes for real-time visual tracking. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2014. pp. 1090–1097.Google Scholar
Li Y, Zhu J. A scale adaptive kernel correlation filter tracker with feature integration. In: European conference on computer vision. Springer, Cham; 2014. pp. 254–265.CrossRefGoogle Scholar
Danelljan M, Bhat G, Gladh S, Khan FS, Felsberg M. Deep motion and appearance cues for visual tracking. Pattern Recogn Lett. 2019;124:74–81.CrossRefGoogle Scholar
Danelljan M, Häger G, Khan FS, Felsberg M. Discriminative scale space tracking. IEEE Trans Pattern Anal Mach Intell. 2017;39(8):1561–75.CrossRefGoogle Scholar
Duffner S, Garcia C. Using discriminative motion context for online visual object tracking. IEEE Trans Circuits Syst Video Technol. 2016;26(12):2215–25.CrossRefGoogle Scholar
Ulusoy I, Bishop CM. Generative versus discriminative methods for object recognition. In: 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR'05). Vol. 2. IEEE; 2005. pp. 258–265.Google Scholar
McCallum A, Pal C, Druck G, Wang X. Multi-conditional learning: generative/discriminative training for clustering and classification. In: AAAI. 2006. pp. 433–439.Google Scholar
Kelm BM, Pal C, McCallum A. Combining generative and discriminative methods for pixel classification with multi-conditional learning. In: 18th International conference on pattern recognition (ICPR'06). Vol. 2. IEEE; 2006. pp. 828–832.Google Scholar
Blake A, Rother C, Brown M, Perez P, Torr P. Interactive image segmentation using an adaptive GMMRF model. In: European conference on computer vision. Springer, Berlin. 2004. pp. 428–441.Google Scholar
Acharjee S, Dey N, Biswas D, Das P, Chaudhuri SS. A novel Block Matching Algorithmic Approach with smaller block size for motion vector estimation in video compression. In: 2012 12th International conference on intelligent systems design and applications (ISDA). IEEE; 2012. pp. 668–672.Google Scholar
Acharjee S, Biswas D, Dey N, Maji P, Chaudhuri SS. An efficient motion estimation algorithm using division mechanism of low and high motion zone. In: 2013 International mutli-conference on automation, computing, communication, control and compressed sensing (iMac4s). IEEE; 2013. pp. 169–172.Google Scholar
Acharjee S, Ray R, Chakraborty S, Nath S, Dey N. Watermarking in motion vector for security enhancement of medical videos. In: 2014 International conference on control, instrumentation, communication and computational technologies (ICCICCT). IEEE; 2014. pp. 532–537.Google Scholar
Acharjee S, Chakraborty S, Karaa WBA, Azar AT, Dey N. Performance evaluation of different cost functions in motion vector estimation. Int J Service Sci Manag Eng Technol (IJSSMET). 2014;5(1):45–65.Google Scholar
Acharjee S, Chakraborty S, Samanta S, Azar AT, Hassanien AE, Dey N. Highly secured multilayered motion vector watermarking. In: International conference on advanced machine learning technologies and applications. Springer, Cham; 2014. pp. 121–134.Google Scholar
Acharjee S, Pal G, Redha T, Chakraborty S, Chaudhuri SS, Dey N. Motion vector estimation using parallel processing. In: International Conference on Circuits, Communication, Control and Computing. IEEE; 2014. pp. 231–236.Google Scholar
Rawat P, Singhai J. Review of motion estimation and video stabilization techniques for hand held mobile video. Sig Image Proc Int J (SIPIJ). 2011;2(2):159–68.Google Scholar
Irani M, Anandan P. About direct methods. In: International workshop on vision algorithms. Springer, Berlin; 1999. pp. 267–277.CrossRefGoogle Scholar
Torr PH, Zisserman A. Feature based methods for structure and motion estimation. In: International workshop on vision algorithms. Springer, Berlin; 1999. pp. 278–294.CrossRefGoogle Scholar
Fiaz M, Mahmood A, Jung SK. Tracking noisy targets: a review of recent object tracking approaches. arXiv preprint arXiv:1802.03098. 2018.
Kristan M, Matas J, Leonardis A, Felsberg M, Cehovin L, Fernandez G, et al. The visual object tracking vot2015 challenge results. In: Proceedings of the IEEE international conference on computer vision workshops. 2015. pp. 1–23.Google Scholar
Čehovin L, Leonardis A, Kristan M. Visual object tracking performance measures revisited. IEEE Trans Image Process. 2016;25(3):1261–74.MathSciNetzbMATHGoogle Scholar
Wu Y, Lim J, Yang MH. Online object tracking: a benchmark. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2013. pp. 2411–2418.Google Scholar
Everingham M, Van Gool L, Williams CK, Winn J, Zisserman A. The pascal visual object classes (voc) challenge. Int J Comput Vis. 2010;88(2):303–38.CrossRefGoogle Scholar
Hare S, Golodetz S, Saffari A, Vineet V, Cheng MM, Hicks SL, Torr PH. Struck: structured output tracking with kernels. IEEE Trans Pattern Anal Mach Intell. 2016;38(10):2096–109.CrossRefGoogle Scholar
Fang Y, Yuan Y, Li L, Wu J, Lin W, Li Z. Performance evaluation of visual tracking algorithms on video sequences with quality degradation. IEEE Access. 2017;5:2430–41.CrossRefGoogle Scholar
Kwon J, Lee KM. Tracking of a non-rigid object via patch-based dynamic appearance modeling and adaptive basin hopping Montecarlo sampling. In: 2009 IEEE conference on computer vision and pattern recognition. IEEE; 2009. pp. 1208–1215.Google Scholar
Yang F, Lu H, Yang MH. Robust superpixel tracking. IEEE Trans Image Prcess. 2014;23(4):1639–51.MathSciNetzbMATHCrossRefGoogle Scholar
Kristan M, Kovacic S, Leonardis A, Pers J. A two-stage dynamic model for visual tracking. IEEE Trans Syst Man Cybern Part B (Cybernetics). 2010;40(6):1505–20.zbMATHCrossRefGoogle Scholar
Kristan M, Pers J, Perse M, Kovacic S, Bon M. Multiple interacting targets tracking with application to team sports. In: ISPA 2005. Proceedings of the 4th international symposium on image and signal processing and analysis. IEEE; 2005. pp. 322–327.Google Scholar
Nawaz T, Cavallaro A. A protocol for evaluating video trackers under real-world conditions. IEEE Trans Image Process. 2013;22(4):1354–61.MathSciNetzbMATHCrossRefGoogle Scholar
Zhang X, Payandeh S. Application of visual tracking for robot-assisted laparoscopic surgery. J Robot Syst. 2002;19(7):315–28.zbMATHCrossRefGoogle Scholar
Dey N, Ashour AS, Shi F, Sherratt RS. Wireless capsule gastrointestinal endoscopy: direction-of-arrival estimation based localization survey. IEEE Rev Biomed Eng. 2017;10:2–11.CrossRefGoogle Scholar
Su MC, Wang KC, Chen GD. An eye tracking system and its application in aids for people with severe disabilities. Biomed Eng Appl Basis Commun. 2006;18(06):319–27.CrossRefGoogle Scholar
Chen Y, Levy DL, Sheremata S, Holzman PS. Bipolar and schizophrenic patients differ in patterns of visual motion discrimination. Schizophr Res. 2006;88(1–3):208–16.CrossRefGoogle Scholar
Raudonis V, Simutis R, Narvydas G. Discrete eye tracking for medical applications. In: 2009 2nd International Symposium on Applied Sciences in Biomedical and Communication Technologies. IEEE; 2009. pp. 1–6.Google Scholar
De Santis A, Iacoviello D. A robust eye tracking procedure for medical and industrial applications. In: Advances in computational vision and medical image processing. Springer, Dordrecht; 2009. pp. 173–185.Google Scholar
Harezlak K, Kasprowski P. Application of eye tracking in medicine: a survey, research issues and challenges. Comput Med Imaging Graph. 2018;65:176–90.CrossRefGoogle Scholar
Lennon J, Atkins E. Color-based vision tracking for an astronaut EVA assist vehicle (No. 2001-01-2135). SAE Technical Paper. 2001.Google Scholar
Borra S, Thanki R, Dey N. Satellite image classification. In: Satellite image analysis: clustering and classification. Springer, Singapore. pp. 53–81.CrossRefGoogle Scholar
Zhao Q, Yang Z, Tao H. Differential earth mover's distance with its applications to visual tracking. IEEE Trans Pattern Anal Mach Intell. 2010;32(2):274–87.CrossRefGoogle Scholar
Kamate S, Yilmazer N. Application of object detection and tracking techniques for unmanned aerial vehicles. Proc Comput Sci. 2015;61:436–41.CrossRefGoogle Scholar
Zhang R, Wang Z, Zhang Y. Astronaut visual tracking of flying assistant robot in space station based on deep learning and probabilistic model. Int J Aerosp Eng. 2018.Google Scholar
Mistry P, Maes P, Chang L. WUW-wear Ur world: a wearable gestural interface. In: CHI'09 extended abstracts on Human factors in computing systems. ACM; 2009. pp. 4111–4116.Google Scholar
Kerdvibulvech C. Markerless vision-based tracking for interactive augmented reality game. Int J Interact Worlds (IJIW'10). 2010.Google Scholar
Kerdvibulvech C. Asiatic skin color segmentation using an adaptive algorithm in changing luminance environment. 2011.Google Scholar
Klein G, Murray D. Parallel tracking and mapping on a camera phone. In: 2009 8th IEEE international symposium on mixed and augmented reality. IEEE; 2009. pp. 83–86.Google Scholar
Woodward C, Hakkarainen M. Mobile mixed reality system for architectural and construction site visualization. In: Augmented reality-some emerging application areas. IntechOpen; 2011.Google Scholar
Dantone M, Bossard L, Quack T, Van Gool L. Augmented faces. In: 2011 IEEE international conference on computer vision workshops (ICCV Workshops). IEEE; 2011. pp. 24–31.Google Scholar
Kerdvibulvech C. Augmented realityapplications using visual tracking. วารสารเทคโนโลยีสารสนเทศลาดกระบัง. 2016;2(1).Google Scholar
Casas S, Olanda R, Dey N. Motion cueing algorithms: a review: algorithms, evaluation and tuning. Int J Virtual Augment Reality (IJVAR). 2017;1(1):90–106.CrossRefGoogle Scholar
Ribo M, Lang P, Ganster H, Brandner M, Stock C, Pinz A. Hybrid tracking for outdoor augmented reality applications. IEEE Comput Graph Appl. 2002;22(6):54–63.CrossRefGoogle Scholar
Klopschitz M, Schall G, Schmalstieg D, Reitmayr G. Visual tracking for augmented reality. In: 2010 International conference on indoor positioning and indoor navigation. IEEE; 2010. pp. 1–4.Google Scholar
Reitmayr G, Drummond T. Going out: robust model-based tracking for outdoor augmented reality. In: ISMAR. Vol. 6. 2006. pp. 109–118.Google Scholar
Rehg JM, Kanade T. Visual tracking of high dof articulated structures: an application to human hand tracking. In: European conference on computer vision. Springer, Berlin; 1994. pp. 35–46.CrossRefGoogle Scholar
Gavrila DM. The visual analysis of human movement: a survey. Comput Vis Image Underst. 1999;73(1):82–98.zbMATHCrossRefGoogle Scholar
Lathuiliere F, Herve JY. Visual hand posture tracking in a gripper guiding application. In: Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No. 00CH37065). Vol. 2. IEEE; 2000. pp. 1688–1694.Google Scholar
Chen ZW, Chiang CC, Hsieh ZT. Extending 3D Lucas-Kanade tracking with adaptive templates for head pose estimation. Mach Vis Appl. 2010;21(6):889–903.CrossRefGoogle Scholar
Nistér D, Naroditsky O, Bergen J. Visual odometry for ground vehicle applications. J Field Robot. 2006;23(1):3–20.zbMATHCrossRefGoogle Scholar
Bonin-Font F, Ortiz A, Oliver G. Visual navigation for mobile robots: a survey. J Intell Rob Syst. 2008;53(3):263–96.CrossRefGoogle Scholar
Borenstein J, Koren Y. Real-time obstacle avoidance for fast mobile robots. IEEE Trans Syst Man Cybern. 1989;19(5):1179–87.CrossRefGoogle Scholar
Bernardino A, Santos-Victor J. Visual behaviours for binocular tracking. Robot Auton Syst. 1998;25(3–4):137–46.CrossRefGoogle Scholar
Ciliberto C, Pattacini U, Natale L, Nori F, Metta G. Reexamining lucas-kanade method for real-time independent motion detection: application to the icub humanoid robot. In: 2011 IEEE/RSJ international conference on intelligent robots and systems. IEEE; 2011. pp. 4154–4160.Google Scholar
Das PK, Mandhata SC, Panda CN, Patro SN. Vision based object tracking by mobile robot. Int J Comput Appl. 2012;45(8):40–2.Google Scholar
Sibert JL, Gokturk M, Lavine RA. The reading assistant: eye gaze triggered auditory prompting for reading remediation. In: Proceedings of the 13th annual ACM symposium on user interface software and technology. ACM; 2000. pp. 101-107.Google Scholar
Bolt RA. Eyes at the interface. In: Proceedings of the 1982 conference on Human factors in computing systems. ACM; 1982. pp. 360–362.Google Scholar
Jacob RJ. Eye movement-based human-computer interaction techniques: toward non-command interfaces. Adv Hum Comput Interact. 1993;4:151–90.Google Scholar
Sibert LE, Jacob RJ. Evaluation of eye gaze interaction. In: Proceedings of the SIGCHI conference on human factors in computing systems. ACM; 2000. pp. 281–288.Google Scholar
McConkie GW, Zola D. Eye movement techniques in studying differences among developing readers. Center for the study of reading technical report; no. 377. 1986.Google Scholar
O'Regan JK. Eye movements and reading. Rev Oculomot Res. 1990;4:395–453.Google Scholar
Rayner K. Eye movements in reading and information processing: 20 years of research. Psychol Bull. 1998;124(3):372.CrossRefGoogle Scholar
Wang H, Chignell M, Ishizuka M. Empathic tutoring software agents using real-time eye tracking. In: Proceedings of the 2006 symposium on eye tracking research and applications. ACM; 2006. pp. 73–78.Google Scholar
Tsai MJ, Hou HT, Lai ML, Liu WY, Yang FY. Visual attention for solving multiple-choice science problem: an eye-tracking analysis. Comput Educ. 2012;58(1):375–85.CrossRefGoogle Scholar
Dessus P, Cosnefroy O, Luengo V. "Keep Your Eyes on'em all!": a mobile eye-tracking analysis of teachers' sensitivity to students. In: European conference on technology enhanced learning. Springer, Cham; 2016. pp. 72–84.CrossRefGoogle Scholar
Busjahn T, Schulte C, Sharif B, Begel A, Hansen M, Bednarik R, et al. Eye tracking in computing education. In: Proceedings of the tenth annual conference on International computing education research. ACM; 2014. pp. 3–10.Google Scholar
Sun Y, Li Q, Zhang H, Zou J. The application of eye tracking in education. In: International conference on intelligent information hiding and multimedia signal processing. Springer, Cham; 2017. pp. 27–33.Google Scholar
Obaidellah U, Al Haek M, Cheng PCH. A survey on the usage of eye-tracking in computer programming. ACM Comput Surv (CSUR). 2018;51(1):5.CrossRefGoogle Scholar
Smith AW, Lovell BC. Visual tracking for sports applications. 2005.Google Scholar
Mauthner T, Bischof H. A robust multiple object tracking for sport applications. 2007. Google Scholar
Battal Ö, Balcıoğlu T, Duru AD. Analysis of gaze characteristics with eye tracking system during repeated breath holding exercises in underwater hockey elite athletes. In: 2016 20th National Biomedical Engineering Meeting (BIYOMUT). IEEE; 2016. pp. 1–4.Google Scholar
Kredel R, Vater C, Klostermann A, Hossner EJ. Eye-tracking technology and the dynamics of natural gaze behavior in sports: a systematic review of 40 years of research. Front Psychol. 2017;8:1845.CrossRefGoogle Scholar
Discombe RM, Cotterill ST. Eye tracking in sport: a guide for new and aspiring researchers. Sport Exerc Psychol Rev. 2015;11(2):49–58.Google Scholar
Mademlis I, Mygdalis V, Nikolaidis N, Pitas I. Challenges in autonomous UAV cinematography: an overview. In 2018 IEEE international conference on multimedia and expo (ICME). IEEE; 2018. pp. 1–6.Google Scholar
Passalis N, Tefas A, Pitas I. Efficient camera control using 2D visual information for unmanned aerial vehicle-based cinematography. In: 2018 IEEE international symposium on circuits and systems (ISCAS). IEEE; 2018. pp. 1–5.Google Scholar
Hubbard AW, Seng CN. Visual movements of batters. Res Q Am Assoc Health Phys Educ Recreat. 1954;25(1):42–57.Google Scholar
Zachariadis O, Mygdalis V, Mademlis I, Nikolaidis N, Pitas I. 2D visual tracking for sports UAV cinematography applications. In: 2017 IEEE global conference on signal and information processing (GlobalSIP). IEEE; 2017. pp. 36–40.Google Scholar
Ramli L, Mohamed Z, Abdullahi AM, Jaafar HI, Lazim IM. Control strategies for crane systems: a comprehensive review. Mech Syst Signal Process. 2017;95:1–23.CrossRefGoogle Scholar
Peng KCC, Singhose W, Bhaumik P. Using machine vision and hand-motion control to improve crane operator performance. IEEE Trans Syst Man Cybern Part A Syst Hum. 2012;42(6):1496–503.CrossRefGoogle Scholar
Wedel M, Pieters R. A review of eye-tracking research in marketing. In: Review of marketing research. Emerald Group Publishing Limited; 2008. pp. 123–147.Google Scholar
Koller M, Salzberger T, Brenner G, Walla P. Broadening the range of applications of eye-tracking in business research. Analise Porto Alegre. 2012;23(1):71–7.Google Scholar
Zamani H, Abas A, Amin MKM. Eye tracking application on emotion analysis for marketing strategy. J Telecommun Electron Comput Eng (JTEC). 2016;8(11):87–91.Google Scholar
Wedel M, Pieters R. Eye tracking for visual marketing. Found Trends Market. 2008;1(4):231–320.CrossRefGoogle Scholar
dos Santos RDOJ, de Oliveira JHC, Rocha JB, Giraldi JDME. Eye tracking in neuromarketing: a research agenda for marketing studies. Int J Psychol Stud. 2015;7(1):32.CrossRefGoogle Scholar
Boraston Z, Blakemore SJ. The application of eye-tracking technology in the study of autism. J Physiol. 2007;581(3):893–8.CrossRefGoogle Scholar
Babenko B, Yang MH, Belongie S. Visual tracking with online multiple instance learning. In: 2009 IEEE conference on computer vision and pattern recognition. IEEE;2009. pp. 983–990.Google Scholar
Hu D, Zhou X, Yu X, Hou Z. Study on deep learning and its application in visual tracking. In: 2015 10th International conference on broadband and wireless computing, communication and applications (BWCCA). IEEE; 2015. pp. 240–246.Google Scholar
Liu W, Wang Z, Liu X, Zeng N, Liu Y, Alsaadi FE. A survey of deep neural network architectures and their applications. Neurocomputing. 2017;234:11–26.CrossRefGoogle Scholar
Lan K, Wang DT, Fong S, Liu LS, Wong KK, Dey N. A survey of data mining and deep learning in bioinformatics. J Med Syst. 2018;42(8):139.CrossRefGoogle Scholar
Dey N, Ashour AS, Borra S. (Eds.). Classification in bioapps: automation of decision making. Vol. 26. Springer; 2017.Google Scholar
Schulter S, Leistner C, Wohlhart P, Roth PM, Bischof H. Accurate object detection with joint classification-regression random forests. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2014. pp. 923–930.Google Scholar
Anguita D, Parodi G, Zunino R. Neural structures for visual motion tracking. Mach Vis Appl. 1995;8(5):275–88.CrossRefGoogle Scholar
Zhang, J., Yang, L., & Wu, X. (2016, October). A survey on visual tracking via convolutional neural networks. In 2016 2nd IEEE International Conference on Computer and Communications (ICCC) (pp. 474-479). IEEE.Google Scholar
Sultana M, Mahmood A, Javed S, Jung SK. Unsupervised deep context prediction for background estimation and foreground segmentation. Mach Vis. Appl. 2019;30(3):375–95.Google Scholar
Hu L, Hong C, Zeng Z, Wang X. Two-stream person re-identification with multi-task deep neural networks. Mach Vis Appl. 2018;29(6):947–54.CrossRefGoogle Scholar
Li Z, Dey N, Ashour AS, Cao L, Wang Y, Wang D, et al. Convolutional neural network based clustering and manifold learning method for diabetic plantar pressure imaging dataset. J Med Imaging Health Inf. 2017;7(3):639–52.CrossRefGoogle Scholar
Wang Y, Chen Y, Yang N, Zheng L, Dey N, Ashour AS, et al. Classification of mice hepatic granuloma microscopic images based on a deep convolutional neural network. Appl Soft Comput. 2019;74:40–50.CrossRefGoogle Scholar
Wang D, Li Z, Dey N, Ashour AS, Moraru L, Biswas A, Shi F. Optical pressure sensors based plantar image segmenting using an improved fully convolutional network. Optik. 2019;179:99–114.CrossRefGoogle Scholar
Hu S, Liu M, Fong S, Song W, Dey N, Wong R. Forecasting China future MNP by deep learning. In: Behavior engineering and applications. Springer, Cham. 2018. pp. 169–210.Google Scholar
Zhuo L, Jiang L, Zhu Z, Li J, Zhang J, Long H. Vehicle classification for large-scale traffic surveillance videos using convolutional neural networks. Mach Vis Appl. 2017;28(7):793–802.CrossRefGoogle Scholar
Dey N, Fong S, Song W, Cho K. Forecasting energy consumption from smart home sensor network by deep learning. In: International Conference on Smart Trends for Information Technology and Computer Communications. Springer, Singapore. 2017. pp. 255–265.Google Scholar
© Springer Nature Singapore Pte Ltd 2020
1.Department of Information TechnologyTechno International New Town (Formerly known as Techno India College of Technology)KolkataIndia
2.A.K. Choudhury School of Information TechnologyUniversity of CalcuttaKolkataIndia
3.Department of Physics, Faculty of SciencesUniv. "Dunarea de Jos"GalatiRomania
4.Information Technology Department, Faculty of Computers and InformationCairo UniversityGizaEgypt
Dutta, A., Mondal, A., Dey, N. et al. SN COMPUT. SCI. (2020) 1: 57. https://doi.org/10.1007/s42979-019-0059-z
DOI https://doi.org/10.1007/s42979-019-0059-z
Publisher Name Springer Singapore | CommonCrawl |
DCDS Home
Well-posedness for vanishing viscosity solutions of scalar conservation laws on a network
November 2017, 37(11): 5943-5977. doi: 10.3934/dcds.2017258
Asymptotic large time behavior of singular solutions of the fast diffusion equation
Kin Ming Hui 1, and Soojung Kim 2,,
Institute of Mathematics, Academia Sinica, Taipei, Taiwan
Department of Mathematics, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong, China
* Corresponding author: Soojung Kim
Received December 2016 Revised June 2017 Published July 2017
We study the asymptotic large time behavior of singular solutions of the fast diffusion equation
$u_t=Δ u^m$
$({\mathbb R}^n\setminus\{0\})×(0, ∞)$
in the subcritical case
$0<m<\frac{n-2}{n}$
$n≥3$
. Firstly, we prove the existence of the singular solution
$u$
of the above equation that is trapped in between self-similar solutions of the form of
$t^{-α} f_i(t^{-β}x)$
$i=1, 2$
, with the initial value
$u_0$
satisfying
$A_1|x|^{-γ}≤ u_0≤ A_2|x|^{-γ}$
for some constants
$A_2>A_1>0$
$\frac{2}{1-m}<γ<\frac{n-2}{m}$
, where
$β:=\frac{1}{2-γ(1-m)}$, $α:=\frac{2\beta-1}{1-m}, $
and the self-similar profile
$f_i$
satisfies the elliptic equation
$Δ f^m+α f+β x· \nabla f=0 \,\,\,\,\,\,\mbox{ in ${\mathbb R}^n\setminus\{0\}$}$
with $\lim_{|x|\to0}|x|^{\frac{ α}{ β}}f_i(x)=A_i$ and $\lim_{|x|\to∞}|x|^{\frac{n-2}{m}}{f_i}(x)= D_{A_i} $ for some constants $D_{A_i}>0$. When $\frac{2}{1-m} < γ < n$, under an integrability condition on the initial value $u_0$ of the singular solution $u$, we prove that the rescaled function
$\tilde u(y, τ):= t^{\, α} u(t^{\, β} y, t),\,\,\,\,\,\, { τ:=\log t}, $
converges to some self-similar profile $f$ as $τ\to∞$.
Keywords: Existence, large time behavior, fast diffusion equation, singular solution, self-similar solution.
Mathematics Subject Classification: Primary: 35B35, 35B44, 35K55, 35K65.
Citation: Kin Ming Hui, Soojung Kim. Asymptotic large time behavior of singular solutions of the fast diffusion equation. Discrete & Continuous Dynamical Systems, 2017, 37 (11) : 5943-5977. doi: 10.3934/dcds.2017258
D. G. Aronson, The porous medium equation, Nonlinear diffusion problems, (Montecatini Terme, 1985), 1-46, Lecture Notes in Math., 1224, Springer, Berlin, 1986. doi: 10.1007/BFb0072687. Google Scholar
A. Blanchet, M. Bonforte, J. Dolbeault, G. Grillo and J. L. Vázquez, Asymptotics of the fast diffusion equation via entropy estimates, Arch. Ration. Mech. Anal., 191 (2009), 347-385. doi: 10.1007/s00205-008-0155-z. Google Scholar
M. Bonforte, J. Dolbeault, G. Grillo and J. L. Vázquez, Sharp rates of decay of solutions to the nonlinear fast diffusion equation via functional inequalities, Proc. Natl. Acad. Sci. USA, 107 (2010), 16459-16464. doi: 10.1073/pnas.1003972107. Google Scholar
E. Chasseigne and J. L. Vázquez, Theory of extended solutions for fast-diffusion equations in optimal classes of data. Radiation from singularities, Arch. Ration. Mech. Anal., 164 (2002), 133-187. doi: 10.1007/s00205-002-0210-0. Google Scholar
P. Daskalopoulos and C. E. Kenig, Degenerate Diffusion: Initial Value Problems and Local Regularity Theory, EMS Tracts in Mathematics, 1. European Mathematical Society (EMS), Zürich, 2007. doi: 10.4171/033. Google Scholar
P. Daskalopoulos, J. King and N. Sesum, Extinction profile of complete non-compact solutions to the Yamabe flow, arXiv: 1306.0859. Google Scholar
P. Daskalopoulos, M. del Pino and N. Sesum, Type Ⅱ ancient compact solutions to the Yamabe flow, J. Reine Angew. Math., (2015), http://dx.doi.org/10.1515/crelle-2015-0048 in press. doi: 10.1515/crelle-2015-0048. Google Scholar
P. Daskalopoulos and N. Sesum, On the extinction profile of solutions to fast diffusion, J. Reine Angew. Math., 622 (2008), 95-119. doi: 10.1515/CRELLE.2008.066. Google Scholar
P. Daskalopoulos and N. Sesum, The classification of locally conformally flat Yamabe solitons, Adv. Math., 240 (2013), 346-369. doi: 10.1016/j.aim.2013.03.011. Google Scholar
M. Fila, J. L. Vázquez, M. Winkler and E. Yanagida, Rate of convergence to Barenblatt profiles for the fast diffusion equation, Arch. Ration. Mech. Anal., 204 (2012), 599-625. doi: 10.1007/s00205-011-0486-z. Google Scholar
M. Fila and M. Winkler, Optimal rates of convergence to the singular Barenblatt profile for the fast diffusion equation, Proc. Roy. Soc. Edinburgh Sect. A, 146 (2016), 309-324. doi: 10.1017/S0308210515000554. Google Scholar
M. Fila and M. Winkler, Rate of convergence to separable solutions of the fast diffusion equation, Israel J. Math., 213 (2016), 1-32. doi: 10.1007/s11856-016-1319-4. Google Scholar
M. Fila and M. Winkler, Slow growth of solutions of superfast diffusion equations with unbounded initial data, J. London Math. Soc.(2), 95 (2017), 659-683. doi: 10.1112/jlms.12029. Google Scholar
M. A. Herrero and M. Pierre, The Cauchy problem for $u_t = \Delta u^m$ when $0 < m < 1$, Trans. Amer. Math. Soc., 291 (1985), 145-158. doi: 10.1090/S0002-9947-1985-0797051-0. Google Scholar
S.Y. Hsu, Asymptotic profile of solutions of a singular diffusion equation as $t \to∞$, Nonlinear Anal., 48 (2002), 781-790. doi: 10.1016/S0362-546X(00)00214-5. Google Scholar
S. Y. Hsu, Singular limit and exact decay rate of a nonlinear elliptic equation, Nonlinear Anal., 75 (2012), 3443-3455. doi: 10.1016/j.na.2012.01.009. Google Scholar
S. Y. Hsu, Existence and asymptotic behaviour of solutions of the very fast diffusion equation, Manuscripta Math., 140 (2013), 441-460. doi: 10.1007/s00229-012-0576-8. Google Scholar
K. M. Hui, On some Dirichlet and Cauchy problems for a singular diffusion equation, Differential Integral Equations, 15 (2002), 769-804. Google Scholar
K. M. Hui, Singular limit of solutions of the very fast diffusion equation, Nonlinear Anal., 68 (2008), 1120-1147. doi: 10.1016/j.na.2006.12.009. Google Scholar
K. M. Hui, Asymptotic behaviour of solutions of the fast diffusion equation near its extinction time, J. Math. Anal. Appl., 454 (2017), 695-715. doi: 10.1016/j.jmaa.2017.05.006. Google Scholar
T. Kato, Perturbation Theory for Linear Operators, 2nd ed., Grundlehren Math. Wiss. 132, Springer-Verlag, Berlin, New York, 1976. Google Scholar
O. A. Ladyzenskaya, V. A. Solonnikov and N. N. Uraltceva, Linear and Quasilinear Equations of Parabolic Type, (Russian) Transl. Math. Mono. vol. 23, Amer. Math. Soc., Providence, R. I., U. S. A., 1968. Google Scholar
S. J. Osher and J. V. Ralston, L1 stability of traveling waves with applications to convective porous media flow, Comm. Pure Appl. Math., 35 (1982), 737-749. doi: 10.1002/cpa.3160350602. Google Scholar
M. del Pino and M. Sáez, On the extinction profile for solutions of $u_t=\Delta u^{\frac{N-2}{N+2}}$, Indiana Univ. Math. J., 50 (2001), 611-628. doi: 10.1512/iumj.2001.50.1876. Google Scholar
J. L. Vázquez, Nonexistence of solutions for nonlinear heat equations of fast-diffusion type, J. Math. Pures Appl.(9), 71 (1992), 503-526. Google Scholar
J. L. Vázquez, Smoothing and Decay Estimates for Nonlinear Diffusion Equations. Equations of Porous Medium Type, Oxford Lecture Series in Mathematics and its Applications 33, Oxford University Press, Oxford, 2006. doi: 10.1093/acprof:oso/9780199202973.001.0001. Google Scholar
J. L. Vázquez and M. Winkler, The evolution of singularities in fast diffusion equations: Infinite time blow-down, SIAM J. Math. Anal., 43 (2011), 1499-1535. doi: 10.1137/100809465. Google Scholar
R. Ye, Global existence and convergence of Yamabe flow, J. Differential Geom., 39 (1994), 35-50. doi: 10.4310/jdg/1214454674. Google Scholar
Shota Sato, Eiji Yanagida. Forward self-similar solution with a moving singularity for a semilinear parabolic equation. Discrete & Continuous Dynamical Systems, 2010, 26 (1) : 313-331. doi: 10.3934/dcds.2010.26.313
Shota Sato, Eiji Yanagida. Singular backward self-similar solutions of a semilinear parabolic equation. Discrete & Continuous Dynamical Systems - S, 2011, 4 (4) : 897-906. doi: 10.3934/dcdss.2011.4.897
Kin Ming Hui, Jinwan Park. Asymptotic behaviour of singular solution of the fast diffusion equation in the punctured euclidean space. Discrete & Continuous Dynamical Systems, 2021, 41 (11) : 5473-5508. doi: 10.3934/dcds.2021085
Cong He, Hongjun Yu. Large time behavior of the solution to the Landau Equation with specular reflective boundary condition. Kinetic & Related Models, 2013, 6 (3) : 601-623. doi: 10.3934/krm.2013.6.601
Marco Cannone, Grzegorz Karch. On self-similar solutions to the homogeneous Boltzmann equation. Kinetic & Related Models, 2013, 6 (4) : 801-808. doi: 10.3934/krm.2013.6.801
Razvan Gabriel Iagar, Ana Isabel Muñoz, Ariel Sánchez. Self-similar blow-up patterns for a reaction-diffusion equation with weighted reaction in general dimension. Communications on Pure & Applied Analysis, , () : -. doi: 10.3934/cpaa.2022003
Qiaolin He. Numerical simulation and self-similar analysis of singular solutions of Prandtl equations. Discrete & Continuous Dynamical Systems - B, 2010, 13 (1) : 101-116. doi: 10.3934/dcdsb.2010.13.101
Kin Ming Hui, Sunghoon Kim. Existence of Neumann and singular solutions of the fast diffusion equation. Discrete & Continuous Dynamical Systems, 2015, 35 (10) : 4859-4887. doi: 10.3934/dcds.2015.35.4859
Kin Ming Hui. Existence of self-similar solutions of the inverse mean curvature flow. Discrete & Continuous Dynamical Systems, 2019, 39 (2) : 863-880. doi: 10.3934/dcds.2019036
Bendong Lou. Self-similar solutions in a sector for a quasilinear parabolic equation. Networks & Heterogeneous Media, 2012, 7 (4) : 857-879. doi: 10.3934/nhm.2012.7.857
Marek Fila, Michael Winkler, Eiji Yanagida. Convergence to self-similar solutions for a semilinear parabolic equation. Discrete & Continuous Dynamical Systems, 2008, 21 (3) : 703-716. doi: 10.3934/dcds.2008.21.703
Jie Zhao. Large time behavior of solution to quasilinear chemotaxis system with logistic source. Discrete & Continuous Dynamical Systems, 2020, 40 (3) : 1737-1755. doi: 10.3934/dcds.2020091
Weronika Biedrzycka, Marta Tyran-Kamińska. Self-similar solutions of fragmentation equations revisited. Discrete & Continuous Dynamical Systems - B, 2018, 23 (1) : 13-27. doi: 10.3934/dcdsb.2018002
K. T. Joseph, Philippe G. LeFloch. Boundary layers in weak solutions of hyperbolic conservation laws II. self-similar vanishing diffusion limits. Communications on Pure & Applied Analysis, 2002, 1 (1) : 51-76. doi: 10.3934/cpaa.2002.1.51
Jochen Merker, Aleš Matas. Positivity of self-similar solutions of doubly nonlinear reaction-diffusion equations. Conference Publications, 2015, 2015 (special) : 817-825. doi: 10.3934/proc.2015.0817
Adrien Blanchet, Philippe Laurençot. Finite mass self-similar blowing-up solutions of a chemotaxis system with non-linear diffusion. Communications on Pure & Applied Analysis, 2012, 11 (1) : 47-60. doi: 10.3934/cpaa.2012.11.47
Fouad Hadj Selem, Hiroaki Kikuchi, Juncheng Wei. Existence and uniqueness of singular solution to stationary Schrödinger equation with supercritical nonlinearity. Discrete & Continuous Dynamical Systems, 2013, 33 (10) : 4613-4626. doi: 10.3934/dcds.2013.33.4613
Bhargav Kumar Kakumani, Suman Kumar Tumuluri. Asymptotic behavior of the solution of a diffusion equation with nonlocal boundary conditions. Discrete & Continuous Dynamical Systems - B, 2017, 22 (2) : 407-419. doi: 10.3934/dcdsb.2017019
Zoran Grujić. Regularity of forward-in-time self-similar solutions to the 3D Navier-Stokes equations. Discrete & Continuous Dynamical Systems, 2006, 14 (4) : 837-843. doi: 10.3934/dcds.2006.14.837
Joana Terra, Noemi Wolanski. Large time behavior for a nonlocal diffusion equation with absorption and bounded initial data. Discrete & Continuous Dynamical Systems, 2011, 31 (2) : 581-605. doi: 10.3934/dcds.2011.31.581
HTML views (77)
Kin Ming Hui Soojung Kim | CommonCrawl |
Matthew McAteer
Deep Learning Concepts every practicioner should know
A deep dive into the important 'deep' learning concepts
October 24, 2019 | UPDATED October 26, 2019
This post is Part 3 of the 4-part Machine Learning Research Interview Handbook series (you can see the rest of the series here).
Deep Learning is the subfield within machine learning involving the use of nested models for more complext tasks. This becomes especially relevant in fields such as reinforcement learning.
Machine Learning in General
Can you state Tom Mitchell's definition of learning and discuss T, P and E?
What different types of tasks can be encountered in Machine Learning?
What are supervised, unsupervised, semi-supervised, self-supervised, multi-instance learning, and reinforcement learning?
Loosely how can supervised learning be converted into unsupervised learning and vice-versa?
Consider linear regression. What are T, P and E?
Derive the normal equation for linear regression.
What do you mean by affine transformation? Discuss affine vs. linear transformation.
Discuss training error, test error, generalization error, overfitting, and underfitting.
Compare representational capacity vs. effective capacity of a model.
What is VC dimension?
What are nonparametric models? What is nonparametric learning?
What is an ideal model? What is Bayes error? What is/are the source(s) of Bayes error occur?
What is the "no free lunch" theorem in connection to Machine Learning?
What is regularization? Intuitively, what does regularization do during the optimization procedure?
What is weight decay? What is it added?
What is a hyperparameter? How do you choose which settings are going to be hyperparameters and which are going to be learned?
Why is a validation set necessary?
What are the different types of cross-validation? When do you use which one?
What are point estimation and function estimation in the context of Machine Learning? What is the relation between them?
What is the maximal likelihood of a parameter vector θ\thetaθ? Where does the log come from?
Prove that for linear regression MSE can be derived from maximal likelihood by proper assumptions.
Why is maximal likelihood the preferred estimator in ML?
Under what conditions do the maximal likelihood estimator guarantee consistency?
What is cross-entropy of loss?
What is the difference between loss function, cost function and objective function?
Optimization procedures
What is the difference between an optimization problem and a Machine Learning problem?
How can a learning problem be converted into an optimization problem?
What is empirical risk minimization? Why the term empirical? Why do we rarely use it in the context of deep learning?
Name some typical loss functions used for regression. Compare and contrast.
What is the 0–1 loss function? Why can't the 0–1 loss function or classification error be used as a loss function for optimizing a deep neural network?
Sequence Modeling
Write the equation describing a dynamical system. Can you unfold it? Now, can you use this to describe a RNN?
What determines the size of an unfolded graph?
What are the advantages of an unfolded graph?
What does the output of the hidden layer of a RNN at any arbitrary time ttt represent?
Are the output of hidden layers of RNNs lossless? If not, why?
RNNs are used for various tasks. From a RNNs point of view, what tasks are more demanding than others?
Discuss some examples of important design patterns of classical RNNs.
Write the equations for a classical RNN where hidden layer has recurrence. How would you define the loss in this case? What problems you might face while training it?
What is backpropagation through time?
Consider a RNN that has only output to hidden layer recurrence. What are its advantages or disadvantages compared to a RNN having only hidden to hidden recurrence?
What is Teacher forcing? Compare and contrast with BPTT.
What is the disadvantage of using a strict teacher forcing technique? How to solve this?
Explain the vanishing/exploding gradient phenomenon for recurrent neural networks.
Why don't we see the vanishing/exploding gradient phenomenon in feedforward networks?
What is the key difference in architecture of LSTMs/GRUs compared to traditional RNNs?
What is the difference between LSTM and GRU?
Explain Gradient Clipping.
Adam and RMSProp adjust the size of gradients based on previously seen gradients. Do they inherently perform gradient clipping? If no, why?
Discuss RNNs in the context of Bayesian Machine Learning.
Can we do Batch Normalization in RNNs? If not, what is the alternative?
Autoencoders
What is an Autoencoder? What does it "auto-encode"?
What were Autoencoders traditionally used for? Why there has been a resurgence of Autoencoders for generative modeling?
What is recirculation?
What loss functions are used for Autoencoders?
What is a linear autoencoder? Can it be optimal (lowest training reconstruction error)? If yes, under what conditions?
What is the difference between Autoencoders and PCA?
What is the impact of the size of the hidden layer in Autoencoders?
What is an undercomplete Autoencoder? Why is it typically used for?
What is a linear Autoencoder? Discuss it's equivalence with PCA. Which one is better in reconstruction?
What problems might a nonlinear undercomplete Autoencoder face?
What are overcomplete Autoencoders? What problems might they face? Does the scenario change for linear overcomplete autoencoders?
Discuss the importance of regularization in the context of Autoencoders.
Why does generative autoencoders not require regularization?
What are sparse autoencoders?
What is a denoising autoencoder? What are its advantages? How does it solve the overcomplete problem?
What is score matching? Discuss it's connections to DAEs.
Are there any connections between Autoencoders and RBMs?
What is manifold learning? How are denoising and contractive autoencoders equipped to do manifold learning?
What is a contractive autoencoder? Discuss its advantages. How does it solve the overcomplete problem?
Why is a contractive autoencoder named so?
What are the practical issues with CAEs? How to tackle them?
What is a stacked autoencoder? What is a deep autoencoder? Compare and contrast.
Compare the reconstruction quality of a deep autoencoder vs. PCA.
What is predictive sparse decomposition?
Discuss some applications of Autoencoders.
Representation Learning
What is representation learning? Why is it useful?
What is the relation between Representation Learning and Deep Learning?
What is one-shot and zero-shot learning? Give examples.
What trade offs does representation learning have to consider?
What is greedy layer-wise unsupervised pretraining (GLUP)? Why greedy? Why layer-wise? Why unsupervised? Why pretraining?
What were/are the purposes of the above technique? (deep learning problem and initialization)
Why does unsupervised pretraining work?
When does unsupervised training work? Under which circumstances?
Why might unsupervised pretraining act as a regularizer?
What is the disadvantage of unsupervised pretraining compared to other forms of unsupervised learning?
How do you control the regularizing effect of unsupervised pretraining?
How to select the hyperparameters of each stage of GLUP?
Monte Carlo Methods
What are deterministic algorithms?
What are Las vegas algorithms?
What are deterministic approximate algorithms?
What are Monte Carlo algorithms?
Adversarial Attacks
Discuss state-of-the-art attack and defense techniques for adversarial models.
Can you state Tom Mitchell's definition of learning and discuss TTT, PPP and EEE?
Mitchell (1997) provides the definition: "A computer program is said to learn from experience EEE with respect to some class of tasks TTT and performance measure PPP, if its performance at tasks in TTT, as measured by PPP, improves with experience EEE."
What can be different types of tasks encountered in Machine Learning?
This is a non-exhaustive list. If you want more examples, look at the list of categories of SOTA research at https://paperswithcode.com/sota.
Semantic Segmentation
Super Resolution
Image Generation, Text Generation
Domain Adaptation
DeNoising
Style Transfer
Density estimation or probability mass function estimation
Supervised learning: Training a model from input data and its corresponding labels.
Self-supervised learning (supervised learning sub-type): Automatically labelling the training data during labelling. The datasets do not need to be manually labelled by human, though they can. Methods include (among others) labelling by finding and exploiting the relations (or correlations) between different input signals (input coming e.g. from different sensor modalities).
Semi-supervised learning (supervised learning sub-type): Training a model on data where some of the training examples have labels but others don't. One technique for semi-supervised learning is to infer labels for the unlabeled examples, and then to train on the inferred labels to create a new model. Semi-supervised learning can be useful if labels are expensive to obtain but unlabeled examples are plentiful.
Multi-instance learning (supervised learning sub-type): Instead of receiving a set of instances which are individually labeled, the learner receives a set of labeled bags, each containing many instances. In the simple case of multiple-instance binary classification, a bag may be labeled negative if all the instances in it are negative. On the other hand, a bag is labeled positive if there is at least one instance in it which is positive. From a collection of labeled bags, the learner tries to either 1) induce a concept that will label individual instances correctly or 2) learn how to label bags without inducing the concept.
Unsupervised learning: Training a model to find patterns in a dataset, typically an unlabeled dataset.
Reinforcement learning: A machine learning approach to maximize an ultimate reward through feedback (rewards and punishments) after a sequence of actions. For example, the ultimate reward of most games is victory. Reinforcement learning systems can become expert at playing complex games by evaluating sequences of previous game moves that ultimately led to wins and sequences that ultimately led to losses. Reinforcement learning is learning what to do---how to map situations to actions---so as to maximize a numerical reward signal
Any unsupervised algorithm can be converted into a supervised algorithm by either adding
Supervised Loss to the Unsupervised Objective OR
Supervision as constraints to the original objective
Both are isomorphic to a certain degree, that is, there is very little difference in whether you add it as a loss or add it as a constraint. Such classes of algorithms that mix unsupervised objective with a supervised loss/constraint are called semi-supervised algorithms (also a way to convert from supervised to unsupervised).
Such an algorithm is useful when you have DDD and LLL, where DDD is set of instances and LLL is a subset of DDD which has labels available. Often the size of L in these cases are much smaller than DDD.
An example of such an algorithm is ICML 2001 paper by Kiri Wagstaff et. al. called Constrained K-Means Clustering with Background Knowledge, where they modify the unsupervised K-means algorithm by adding supervised constraints. These constraints tell the algorithm, which two points must belong to a cluster and which two don't. They observe that they get improved clustering from such supervision.
Consider linear regression. What are TTT, PPP and EEE?
Task TTT: predicting yyy from xxx by outputting y^=wTx\hat{y} = \mathbf{w}^T\mathbf{x}y^=wTx.
Performance PPP: computing the mean squared error of the model on the test set.
MSEtest=1m∣∣y^test−ytest∣∣22MSE_{test}=\frac{1}{m}||\hat{y}^{test}-y^{test}||^2_2MSEtest=m1∣∣y^test−ytest∣∣22
Experience EEE: The training set containing the data XtrainX^{\text{train}}Xtrain and the labels ytrainy^{\text{train}}ytrain.
Here is one (but not the only) strategy for deriving the normal equation.
Linear function: hθ(x)=θ0+θ1x1+θ2x2=θTxh_\theta(x) = \theta_0 + \theta_1x_1 + \theta_2x_2=\theta^Txhθ(x)=θ0+θ1x1+θ2x2=θTx
Least Squares Cost Function: J(θ)=12∑i=1n(hθ(x(i))−y(i))2J(\theta)=\frac{1}{2}\sum_{i=1}^n(h_\theta(x^{(i)})-y^{(i)})^2J(θ)=21∑i=1n(hθ(x(i))−y(i))2
X=[−(x(1))T−−(x(2))T−...−(x(n))T−]y=[y(1)y(2)...y(n)]Xθ−y=[(x(1))Tθ−y(1)(x(2))Tθ−y(2)...(x(n))Tθ−y(n)]=[(hθ(x(1))−y(1)hθ(x(2))−y(2)...hθ(x(1))−y(n)]\mathbf{X}=\begin{bmatrix}- (x^{(1)})^T - \\- (x^{(2)})^T - \\ ...\\- (x^{(n)})^T -\end{bmatrix} \\ \mathbf{y}=\begin{bmatrix} y^{(1)} \\y^{(2)} \\... \\y^{(n)}\end{bmatrix} \\ \mathbf{X}\theta-\mathbf{y}=\begin{bmatrix} (x^{(1)})^T\theta-y^{(1)} \\(x^{(2)})^T\theta-y^{(2)} \\... \\(x^{(n)})^T\theta-y^{(n)}\end{bmatrix} \\ =\begin{bmatrix} (h_\theta(x^{(1)})-y^{(1)} \\h_\theta(x^{(2)})-y^{(2)} \\... \\h_\theta(x^{(1)})-y^{(n)}\end{bmatrix}X=⎣⎢⎢⎡−(x(1))T−−(x(2))T−...−(x(n))T−⎦⎥⎥⎤y=⎣⎢⎢⎡y(1)y(2)...y(n)⎦⎥⎥⎤Xθ−y=⎣⎢⎢⎡(x(1))Tθ−y(1)(x(2))Tθ−y(2)...(x(n))Tθ−y(n)⎦⎥⎥⎤=⎣⎢⎢⎡(hθ(x(1))−y(1)hθ(x(2))−y(2)...hθ(x(1))−y(n)⎦⎥⎥⎤
For a vector zzz, we have that: zTz=∑izi2z^Tz=\sum_i z^2_izTz=∑izi2
12(Xθ−y)T(Xθ−y)=12∑i=1n(hθ(x(i))−y(i))2=J(θ)\frac{1}{2}(\mathbf{X}\theta-\mathbf{y})^T(\mathbf{X}\theta-\mathbf{y})=\frac{1}{2}\sum_{i=1}^n(h_\theta(x^{(i)})-y^{(i)})^2 =J(\theta)21(Xθ−y)T(Xθ−y)=21∑i=1n(hθ(x(i))−y(i))2=J(θ)
We know that:
$\frac{\partial f(A)}{\partial A^T}=(\frac{\partial f(A)}{\partial A})^T \\ \frac{\partial \mathbf{y}^T\mathbf{A}\mathbf{x}}{\partial \mathbf{x}}=\mathbf{y}^T\mathbf{A} \\ \frac{\partial \mathbf{y}^T\mathbf{A}\mathbf{x}}{\partial \mathbf{y}}=\frac{\partial \mathbf{x}^T\mathbf{A}^T\mathbf{y}}{\partial \mathbf{y}}=\mathbf{x}^T\mathbf{A}^T \\ \frac{\partial \mathbf{x}^T\mathbf{A}\mathbf{x}}{\partial \mathbf{x}}=\mathbf{x}^T\mathbf{A}^T +\mathbf{x}^T\mathbf{A}=\mathbf{x}^T(\mathbf{A}^T +\mathbf{A})\$
To minimize JJJ, let's find its derivatives with respect to θ\thetaθ:
∂J(θ)∂θ=∂12(Xθ−y)T(Xθ−y)∂θ=12∂(θTXTXθ−θTXTy−yTXθ+yTy)∂θ=12∂(θTXTXθ−θTXTy−yTXθ+yTy)∂θ=12(θTXTX+θTXTX−yTX−yTX)=12(XTXθ−2yTX)=XTXθ−XTy=0\frac{\partial J(\theta)}{\partial \theta}= \frac{\partial \frac{1}{2}(\mathbf{X}\theta-\mathbf{y})^T(\mathbf{X}\theta-\mathbf{y})}{\partial \theta}\\ = \frac{1}{2}\frac{\partial (\theta^T\mathbf{X}^T\mathbf{X}\theta-\theta^T\mathbf{X}^T\mathbf{y}-\mathbf{y}^T\mathbf{X}\theta+\mathbf{y}^T\mathbf{y})}{\partial \theta} \\ =\frac{1}{2}\frac{\partial (\theta^T\mathbf{X}^T\mathbf{X}\theta-\theta^T\mathbf{X}^T\mathbf{y}-\mathbf{y}^T\mathbf{X}\theta+\mathbf{y}^T\mathbf{y})}{\partial \theta} \\ =\frac{1}{2} (\theta^T\mathbf{X}^T\mathbf{X}+\theta^T\mathbf{X}^T\mathbf{X}-\mathbf{y}^T\mathbf{X}-\mathbf{y}^T\mathbf{X} )\\ =\frac{1}{2}(\mathbf{X}^T\mathbf{X}\theta-2\mathbf{y}^T\mathbf{X}) \\ =\mathbf{X}^T\mathbf{X}\theta-\mathbf{X}^T\mathbf{y}=0∂θ∂J(θ)=∂θ∂21(Xθ−y)T(Xθ−y)=21∂θ∂(θTXTXθ−θTXTy−yTXθ+yTy)=21∂θ∂(θTXTXθ−θTXTy−yTXθ+yTy)=21(θTXTX+θTXTX−yTX−yTX)=21(XTXθ−2yTX)=XTXθ−XTy=0
Normal Equation: θ=(XTX)−1XTy\theta=(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}θ=(XTX)−1XTy
A function fff is linear if f(ax+by)=af(x)+bf(y)f(ax+by)=af(x)+bf(y)f(ax+by)=af(x)+bf(y) for all relevant values of aaa, bbb, xxx and yyy.
A function ggg is affine if g(x)=f(x)+cg(x)=f(x)+cg(x)=f(x)+c for some linear function aaa and constant ccc. Note that we allow c=0c=0c=0, which implies that every linear function is an affine function.
All linear transformations are affine transformations.
Not all affine transformations are linear transformations.
It can be shown that any affine transformation A:U→VA:U \rightarrow VA:U→V can be written as A(x)=Lx)+v0A(x)=Lx)+v_0A(x)=Lx)+v0, where v0v_0v0 is some vector from VVV and L:U→VL:U \rightarrow VL:U→V is a linear transformation.
overfitting: What happens when the gap between the training error and test error is too large
underfitting: What happens when the model is not able to obtain a sufficiently low error value on the training set
training error: When training a machine learning model, we have access to a training set, we can compute some error measure on the training set
generalization error/test error: The expected value of the error on a new input. Here the expectation is taken across different possible inputs, drawn from the distribution of inputs we expect the system to encounter in practice
Visual summary
Representational capacity - the functions which the model can learn; The model specifies which family of functions the learning algorithm can choose from when varying the parameters in order to reduce a training objective.
Effective capacity - in practice, a learning algorithm is not likely to find the best function out of the possible functions it can learn, though it can learn one that performs exceptionally well - those functions that the learning algorithm is capable of finding defines the model's effective capacity.
These additional limitations, such as the imperfection of the optimization algorithm, mean that the learning algorithm's effective capacity may be less than the representational capacity of the model family
The VC dimension measures the capacity of a binary classifier. The VC dimension is defined as being the largest possible value of mmm for which there exists a training set of mmm different xxx points that the classifier can label arbitrarily.
Parametric models: learn a function described by a parameter vector whose size is finite and fixed before any data is observed (linear regression)
Non-parametric models: assume that the data distribution cannot be defined in terms of a finite set of parameters. But they can often be defined by assuming an infinite dimensional θ\thetaθ. Usually we think of θ\thetaθ as a function (nearest neighbor regression)
The ideal model: is an oracle that simply knows the true probability distribution that generates the data.
Even such a model will still incur some error on many problems, because there may still be some noise in the distribution. In the case of supervised learning, the mapping from xxx to yyy may be inherently stochastic, or yyy may be a deterministic function that involves other variables besides those included in xxx.
Bayes error: the lowest possible prediction error that can be achieved and is the same as irreducible error. ; The error incurred by an oracle making predictions from the true distribution p(x,y)p(x, y)p(x,y).
Source(s) of Bayes error: noise in the distribution if the process is random
The no free lunch theorem for machine learning (Wolpert, 1996) states that, averaged over all possible data generating distributions, every classification algorithm has the same error rate when classifying previously unobserved points. In other words, in some sense, no machine learning algorithm is universally any better than any other. The most sophisticated algorithm we can conceive of has the same average performance (over all possible tasks) as merely predicting that every point belongs to the same class.
Fortunately, these results hold only when we average over all possible data generating distributions. **If we make assumptions about the kinds of probability distributions we encounter in real-world applications, then we can design learning algorithms that perform well on these distributions.
This means that the goal of machine learning research is not to seek a universal learning algorithm or the absolute best learning algorithm. Instead, our goal is to understand what kinds of distributions are relevant to the "real world" that an AI agent experiences, and what kinds of machine learning algorithms perform well on data drawn from the kinds of data generating distributions we care about.
Regularization is any modification we make to a learning algorithm that is intended to reduce its generalization error but not its training error.
We regularize a model that learns a function f(x;θ)f(x;\theta)f(x;θ) by adding a penalty called a regularizer to the cost function. Expressing preferences for one function over another implicitly and explicitly is a more general way of controlling a model's capacity than including or excluding members from the hypothesis space.
Weight decay is an additional term that causes the weights to exponentially decay to zero.
To perform linear regression with weight decay, we minimize a sum comprising both the mean squared error on the training and a criterion J(w)J(w)J(w) that expresses a preference for the weights to have smaller squared L2L_2L2 norm. Specifically,
J(w)=MSEtrain+λwTwJ(w) = MSE_{train} + \lambda \mathbf{w}^T\mathbf{w}J(w)=MSEtrain+λwTw
Minimizing J(w)J(w)J(w) results in a choice of weights that make a tradeoff between fitting the training data and being small. This gives us solutions that have a smaller slope, or put weight on fewer of the features.
Hyperparameter: Most machine learning algorithms have several settings that we can use to control the behavior of the learning algorithm.
The values of hyperparameters are not adapted by the learning algorithm itself
Sometimes a setting is chosen to be a hyperparameter that the learning algorithm does not learn because it is difficult to optimize or it is not appropriate to learn that hyperparameter on the training set. This applies to all hyperparameters that control model capacity. If learned on the training set, such hyperparameters would always choose the maximum possible model capacity, resulting in overfitting.
Let's assume that you are training a model whose performance depends on a set of hyperparameters. In the case of a neural network, these parameters may be for instance the learning rate or the number of training iterations.
Given a choice of hyperparameter values, you use the training set to train the model. But, how do you set the values for the hyperparameters? That's what the validation set is for. You can use it to evaluate the performance of your model for different combinations of hyperparameter values (e.g. by means of a grid search process) and keep the best trained model.
But, how does your selected model compares to other different models? Is your neural network performing better than, let's say, a random forest trained with the same combination of training/test data? You cannot compare based on the validation set, because that validation set was part of the fitting of your model. You used it to select the hyperparameter values!
The test set allows you to compare different models in an unbiased way, by basing your comparisons in data that were not use in any part of your training/hyperparameter selection process.
kkk-fold cross-validation randomly divides the data into k blocks of roughly equal size. Each of the blocks is left out in turn and the other k−1k-1k−1 blocks are used to train the model. The held out block is predicted and these predictions are summarized into some type of performance measure (e.g. accuracy, root mean squared error (RMSE), etc.). The kkk estimates of performance are averaged to get the overall resampled estimate. kkk is 101010 or sometimes 555. Why? I have no idea. When kkk is equal to the sample size, this procedure is known as Leave-One-Out CV.
Repeated k-fold CV does the same as above but more than once. For example, five repeats of 10-fold CV would give 50 total resamples that are averaged. Note this is not the same as 50-fold CV.
Leave Group Out cross-validation (LGOCV), aka Monte Carlo CV, randomly leaves out some set percentage of the data BBB times. It is similar to min-training and hold-out splits but only uses the training set.
The bootstrap takes a random sample with replacement from the training set BBB times. Since the sampling is with replacement, there is a very strong likelihood that some training set samples will be represented more than once. As a consequence of this, some training set data points will not be contained in the bootstrap sample. The model is trained on the bootstrap sample and those data points not in that sample are predicted as hold-outs.
Point estimation refers to obtaining estimates for specific data points (such as estimating yyy from xxx, or estimating yyy from a distribution). In machine learning, this is the main prediction task. Machine learning models do this by approximating the function that maps xxx to yyy. I.e., a model approximates the function f(x)=yf(x) = yf(x)=y
The MLE for a vector parameter θ=[θ1,θ2,...,θp]T\theta = [\theta_1, \theta_2, ... , \theta_p]^Tθ=[θ1,θ2,...,θp]T is defined as the value that maximizes the likelihood function p(x;θ)p(x; \theta)p(x;θ), i.e.,
θ^ML=argmaxθp(x;θ)\hat{\theta}_{\text{ML}} = \text{arg}\max_{\theta} p(x; \theta)θ^ML=argmaxθp(x;θ)
In practise the MLE is found from the root of the log-likelihood function ∂ln∂θ=0\frac{\partial \ln}{\partial \theta}=0∂θ∂ln=0
If multiple solutions exist, then the one that maximizes the likelihood function is the MLE.
Probabilistic assumption:
Assume that the target variables and the inputs are related via the equation:y(i)=θTx(i)+ϵ(i)y(i)= \theta Tx(i)+ \epsilon(i)y(i)=θTx(i)+ϵ(i)
where ϵ(i)\epsilon^{(i)}ϵ(i) is an error term that captures either unmodeled effects (such as if there are some features very pertinent to predicting housing price, but that we'd left out of the regression), or random noise.
Assume ϵ(i)\epsilon^{(i)}ϵ(i) are distributed IID (independently and identically distributed) according to a Gaussian distribution (also called a Normal distribution) mean zero and some variance σ2\sigma^2σ2
The density of ϵ(i)\epsilon^{(i)}ϵ(i) is: p(ϵ(i))=12πσexp(−(ϵ(i))22σ2)p(\epsilon^{(i)})=\frac{1}{\sqrt{2\pi}\sigma}\exp \left(-\frac{(\epsilon^{(i)})^2}{2\sigma^2}\right)p(ϵ(i))=2πσ1exp(−2σ2(ϵ(i))2)
This implies that: p(y(i)∣x(i);θ)=12πσexp(−(y(i)−θTx(i))22σ2)p(y^{(i)}|x^{(i)};\theta)=\frac{1}{\sqrt{2\pi}\sigma}\exp \left(-\frac{(y^{(i)}-\theta^Tx^{(i)})^2}{2\sigma^2}\right)p(y(i)∣x(i);θ)=2πσ1exp(−2σ2(y(i)−θTx(i))2)
Likelihood function: L(θ)=L(θ∣X,y)=p(y∣X;θ)L(\theta)=L(\theta|\mathbf{X},\mathbf{y})=p(\mathbf{y}|\mathbf{X};\theta)L(θ)=L(θ∣X,y)=p(y∣X;θ)
p(y∣X;θ)p(y|X;\theta)p(y∣X;θ): This quantity is typically viewed a function of yyy (and perhaps XXX), for a fixed value of θ\thetaθ.
By the independence assumption on the ϵ(i)\epsilon^{(i)}ϵ(i)'s (and hence also the y(i)y(i)y(i)'s given the x(i)x(i)x(i)'s), this can also be written:
L(θ)=∏i=1np(y(i)∣x(i);θ)=∏i=1n12πσexp(−(y(i)−θTx(i))22σ2)L(\theta)= \prod_{i=1}^n p(y^{(i)}|x^{(i)};\theta) \\ =\prod_{i=1}^n \frac{1}{\sqrt{2\pi}\sigma}\exp \left(-\frac{(y^{(i)}-\theta^Tx^{(i)})^2}{2\sigma^2}\right)L(θ)=∏i=1np(y(i)∣x(i);θ)=∏i=1n2πσ1exp(−2σ2(y(i)−θTx(i))2)
The principal of maximum likelihood says that we should choose θ\thetaθ so as to make the data as high probability as possible →\rightarrow→ maximize L(θ)L(\theta)L(θ).
Instead of maximizing L(θ)L(\theta)L(θ), we can also maximize any strictly increasing function of L(θ)→L(\theta) \rightarrowL(θ)→ log likelihood ℓ(θ)\ell(\theta)ℓ(θ):
ℓ(θ)=logL(θ)=log∏i=1np(y(i)∣x(i);θ)=∑i=1nlog12πσexp(−(y(i)−θTx(i))22σ2)=nlog12πσ−12σ2∑i=1n(y(i)−θTx(i))2\ell(\theta)=\log L(\theta)=\log \prod_{i=1}^n p(y^{(i)}|x^{(i)};\theta) \\ =\sum_{i=1}^n \log \frac{1}{\sqrt{2\pi}\sigma}\exp \left(-\frac{(y^{(i)}-\theta^Tx^{(i)})^2}{2\sigma^2}\right) \\ = n\log\frac{1}{\sqrt{2\pi}\sigma}-\frac{1}{2\sigma^2} \sum_{i=1}^n (y^{(i)}-\theta^Tx^{(i)})^2ℓ(θ)=logL(θ)=log∏i=1np(y(i)∣x(i);θ)=∑i=1nlog2πσ1exp(−2σ2(y(i)−θTx(i))2)=nlog2πσ1−2σ21∑i=1n(y(i)−θTx(i))2
Hence, maximizing ℓ(θ)\ell(\theta)ℓ(θ) gives the same answer as minimizing
12∑i=1n(hθ(x(i))−y(i))2=J(θ)\frac{1}{2}\sum_{i=1}^n(h_\theta(x^{(i)})-y^{(i)})^2 =J(\theta)21∑i=1n(hθ(x(i))−y(i))2=J(θ)
To summarize: Under the previous probabilistic assumptions on the data, least-squares regression corresponds to finding the maximum likelihood estimate of θ\thetaθ. Note also that, in our previous discussion, our final choice of θ\thetaθ did not depend on what was σ2\sigma^2σ2, and indeed we'd have arrived at the same result even if σ2\sigma^2σ2 were unknown.
The main appeal of the maximum likelihood estimator is that it can be shown to be the best estimator asymptotically, as the number of examples m→∞m \rightarrow \inftym→∞, in terms of its rate of convergence as mmm increases
Under appropriate conditions, the maximum likelihood estimator has the properties of:
consistency: as the number of training examples approaches infinity, the maximum likelihood estimate of a parameter converges to the true value of the parameter. θ^→θn→∞\hat{\theta} \rightarrow \theta^{n \rightarrow \infty}θ^→θn→∞.
efficiency: the Cramér-Rao lower bound (Rao, 1945; Cramér, 1946) shows that no consistent estimator has a lower mean squared error Var(θn)\text{Var}(\theta^n)Var(θn) than the maximum likelihood estimator
When the number of examples is small enough to yield overfitting behavior, regularization strategies such as weight decay may be used to obtain a biased version of maximum likelihood that has less variance when training data is limited.
There are two conditions:
Condition #1: The true distribution pdatap_{\text{data}}pdata must lie within the model family pmodel(⋅;θ)p_{\text{model}}(·; \theta)pmodel(⋅;θ). Otherwise, no estimator can recover pdatap_{\text{data}}pdata.
Condition #2: The true distribution pdatap_{\text{data}}pdata must correspond to exactly one value of θ\thetaθ. Otherwise, maximum likelihood can recover the correct pdatap_{\text{data}}pdata, but will not be able to determine which value of θ\thetaθ was used by the data generating processing.
What is cross-entropy loss?
Cross-entropy loss, or log loss, measures the performance of a classification model whose output is a probability value between 0 and 1. Cross-entropy loss increases as the predicted probability diverges from the actual label. So predicting a probability of .012 when the actual observation label is 1 would be bad and result in a high loss value. A perfect model would have a log loss of 0.
Cross-entropy and log loss are slightly different depending on context, but in machine learning when calculating error rates between 0 and 1 they resolve to the same thing.
def CrossEntropy(yHat, y):
if y == 1:
return -log(yHat)
return -log(1 - yHat)
In binary classification, where the number of classes MMM equals 222, cross-entropy can be calculated as:
−(ylog(p)+(1−y)log(1−p))-{(y\log(p) + (1 - y)\log(1 - p))}−(ylog(p)+(1−y)log(1−p))
If M>2M>2M>2 (i.e. multiclass classification), we calculate a separate loss for each class label per observation and sum the result.
−∑c=1Myo,clog(po,c)-\sum_{c=1}^My_{o,c}\log(p_{o,c})−∑c=1Myo,clog(po,c)
These are not very strict terms and they are highly related. However:
Loss function is usually a function defined on a data point, prediction and label, and measures the penalty. For example:
square loss l(f(xi∣θ),yi)=(f(xi∣θ)−yi)2l(f(x_i|\theta),y_i) = \left (f(x_i|\theta)-y_i \right )^2l(f(xi∣θ),yi)=(f(xi∣θ)−yi)2, used in linear regression
hinge loss l(f(xi∣θ),yi)=max(0,1−f(xi∣θ)yi)l(f(x_i|\theta), y_i) = \max(0, 1-f(x_i|\theta)y_i)l(f(xi∣θ),yi)=max(0,1−f(xi∣θ)yi), used in SVM
0/1 loss l(f(xi∣θ),yi)=1 ⟺ f(xi∣θ)≠yil(f(x_i|\theta), y_i) = 1 \iff f(x_i|\theta) \neq y_il(f(xi∣θ),yi)=1⟺f(xi∣θ)=yi, used in theoretical analysis and definition of accuracy
Cost function is usually more general. It might be a sum of loss functions over your training set plus some model complexity penalty (regularization). For example:
Mean Squared Error MSE(θ)=1N∑i=1N(f(xi∣θ)−yi)2MSE(\theta) = \frac{1}{N} \sum_{i=1}^N \left (f(x_i|\theta)-y_i \right )^2MSE(θ)=N1∑i=1N(f(xi∣θ)−yi)2
SVM cost function SVM(θ)=∥θ∥2+C∑i=1NξiSVM(\theta) = \|\theta\|^2 + C \sum_{i=1}^N \xi_iSVM(θ)=∥θ∥2+C∑i=1Nξi (there are additional constraints connecting ξi\xi_iξi with CCC and with training set)
Objective function is the most general term for any function that you optimize during training. For example, a probability of generating training set in maximum likelihood approach is a well defined objective function, but it is not a loss function nor cost function (however you could define an equivalent cost function). For example:
MLE is a type of objective function (which you maximize)
Divergence between classes can be an objective function but it is barely a cost function, unless you define something artificial, like 1-Divergence, and name it a cost
In short, a loss function is a part of a cost function, which is a type of objective function.
The difference is very slim between machine learning (ML) and optimization theory.
In ML the idea is to learn a function that minimizes an error or one that maximizes reward over punishment.
Yes a lot of learning can be seen as optimization. In fact learning is an optimization problem.
For example, if you want to learn to play Chess, you will probably suck at first and then the idea (objective) is to be less bad. So you need to learn a series of actions that should hopefully minimize loses over wins.
In the process you are basically optimizing your chances of winning and that is what we call learning.
All supervised learning methods try to solve:
w^=argmax∑inL(xi,yi;w)\hat{w} = \arg \max \sum_{i}^{n} L(x_i, y_i; w)w^=argmax∑inL(xi,yi;w)
Where w^=optimal parameters\hat{w} = \text{optimal parameters}w^=optimal parameters, n=number of training pairsn=\text{number of training pairs}n=number of training pairs, x=inputx=\text{input}x=input, y=desired outputy=\text{desired output}y=desired output, w=model parametersw=\text{model parameter}sw=model parameters and L=lossL= \text{loss}L=loss, cost or objective function.
This is from optimization theory and the same formulation can be used for basic curve fitting in maths just as it can also be used for complex problems like image and speech recognition.
The goal for ML is similarly to optimize the performance of a model given an objective and the training data.
However, the manner in which ML does optimization is somewhat weak especially considering that the majority of ML models are supervised and require differentiable objective functions.
Many complex problems are not differentiable or are hard to cast in a differentiable manner, especially the so called AI-complete problems such as vision and natural language understanding (NLU).
If the wider artificial intelligence (AI) research community thinks that backpropagation (backprop) + stochastic gradient descent (SGD) algorithms and their variants will lead us to artificial general intelligence (AGI) then they are gravely mistaken.
Of course we can solve some interesting problems by training models with backprop + SGD (or other variants) as evident in the recent progress ML has enjoyed over the past few years but we should, sooner or later, realize that those models are not going to turn into an AGI model, we need better learning (optimization) algorithms, preferably those not limited to differentiable objectives, in order to move further.
Empirical risk minimization refers techniques used to define the theoretical performance bounds of a learning algorithm. The core idea is that we cannot know exactly how well an algorithm will work in practice (the true "risk") because we don't know the true distribution of data that the algorithm will work on, but we can instead measure its performance on a known set of training data (the "empirical" risk).
The ultimate goal of a learning algorithm is to find a hypothesis h∗h^{*}h∗ among a fixed class of functions H\mathcal{H}H for which the risk R(h)R(h)R(h) is minimal:
h∗=argminh∈HR(h)h^{*} =\arg \min_{h \in {\mathcal {H}}}R(h)h∗=argminh∈HR(h).
In general, the risk R(h)R(h)R(h) cannot be computed because the distribution P(x,y)P(x,y)P(x,y) is unknown to the learning algorithm (this situation is referred to as agnostic learning). However, we can compute an approximation, called empirical risk, by averaging the loss function on the training set:
Remp(h)=1n∑i=1nL(h(xi),yi)R_{\text{emp}}(h)={\frac{1}{n}}\sum_{i=1}^{n}L(h(x_{i}),y_{i})Remp(h)=n1∑i=1nL(h(xi),yi).
The empirical risk minimization principle states that the learning algorithm should choose a hypothesis h^\hat{h}h^ which minimizes the empirical risk:
h^=argminh∈HRemp(h)\hat{h} = \arg \min_{h\in {\mathcal {H}}}R_{\text{emp}}(h)h^=argminh∈HRemp(h)
Thus the learning algorithm defined by the ERM principle consists in solving the above optimization problem.
The main reason it's rarely used in the context of deep learning is computational complexity.
Empirical risk minimization for a classification problem with a 0-1 loss function is known to be an NP-hard problem, even for such a relatively simple class of functions as linear classifiers. t can be solved efficiently when the minimal empirical risk is zero (i.e. data is linearly separable), but this is not always verifiable with the types of problems deep learning is applied to.
In practice, machine learning algorithms cope with that either by employing a convex approximation to the 0-1 loss function (like hinge loss for SVM), which is easier to optimize, or by imposing assumptions on the distribution P(x,y)P(x,y)P(x,y) (and thus stop being agnostic learning algorithms to which the above result applies).
1. Mean Square Error, Quadratic loss, L2L_2L2 Loss Mean Square Error (MSE) is the most commonly used regression loss function. MSE is the sum of squared distances between our target variable and predicted values.
MSE=∑i=1n(yi−yip)2n\text{MSE}=\frac{\sum_{i=1}^{n} (y_i - y_i^p)^2}{n}MSE=n∑i=1n(yi−yip)2
Below is a plot of an MSE function where the true target value is 100, and the predicted values range between -10,000 to 10,000. The MSE loss (Y-axis) reaches its minimum value at prediction (X-axis) = 100. The range is 0 to ∞\infty∞.
Plot of MSE Loss (Y-axis) vs. Predictions (X-axis)
2. Mean Absolute Error, L1L_1L1 Loss Mean Absolute Error (MAE) is another loss function used for regression models. MAE is the sum of absolute differences between our target and predicted variables. So it measures the average magnitude of errors in a set of predictions, without considering their directions. (If we consider directions also, that would be called Mean Bias Error (MBE), which is a sum of residuals/errors). The range is also 0 to ∞\infty∞.
MAE=∑i=1n∣yi−yip∣2n\text{MAE}=\frac{\sum_{i=1}^{n} |y_i - y_i^p |^2}{n}MAE=n∑i=1n∣yi−yip∣2
Plot of MAE Loss (Y-axis) vs. Predictions (X-axis)
2.5. MSE vs. MAE (L2L_2L2 loss vs L1L_1L1 loss) In short, using the squared error is easier to solve, but using the absolute error is more robust to outliers. But let's understand why!
3. Huber Loss, Smooth Mean Absolute Error Huber loss is less sensitive to outliers in data than the squared error loss. It's also differentiable at 0. It's basically absolute error, which becomes quadratic when error is small. How small that error has to be to make it quadratic depends on a hyperparameter, 𝛿 (delta), which can be tuned. Huber loss approaches MAE when δ∼0\delta \sim 0δ∼0 and MSE when δ∼∞\delta \sim \inftyδ∼∞ (large numbers.)
Lδ(y,f(x))={(y−f(x))2/2for ∣y−f(x)∣≤δδ∣y−f(x)∣−δ2/2otherwise.L_\delta(y, f(x)) = \left\{\begin{matrix} (y-f(x))^2/2 & \text{for }|y-f(x)| \leq \delta \\ \delta |y-f(x)| - \delta^2/2 & \text{otherwise.} \end{matrix}\right.Lδ(y,f(x))={(y−f(x))2/2δ∣y−f(x)∣−δ2/2for ∣y−f(x)∣≤δotherwise.
Plot of Hoss Loss (Y-axis) vs. Predictions (X-axis). True value = 0
The choice of delta is critical because it determines what you're willing to consider as an outlier. Residuals larger than delta are minimized with L1L_1L1 (which is less sensitive to large outliers), while residuals smaller than delta are minimized "appropriately" with L2L_2L2.
3.5. Why use Huber Loss?
One big problem with using MAE for training of neural nets is its constantly large gradient, which can lead to missing minima at the end of training using gradient descent. For MSE, gradient decreases as the loss gets close to its minima, making it more precise.
Huber loss can be really helpful in such cases, as it curves around the minima which decreases the gradient. And it's more robust to outliers than MSE. Therefore, it combines good properties from both MSE and MAE. However, the problem with Huber loss is that we might need to train hyperparameter delta which is an iterative process.
4. Log-Cosh Loss Log-cosh is another function used in regression tasks that's smoother than L2L_2L2. Log-cosh is the logarithm of the hyperbolic cosine of the prediction error.
L(y,yp)=∑i=1nlog(cosh(yip−yi))L(y,y^p)=\sum_{i=1}^{n} \log(\cosh(y_i^p - y_i))L(y,yp)=∑i=1nlog(cosh(yip−yi))
Plot of Log-cosh Loss (Y-axis) vs. Predictions (X-axis). True value = 0
Advantage: log(cosh(x))\log(\cosh(x))log(cosh(x)) is approximately equal to (x ** 2) / 2 for small x and to abs(x) - log(2) for large x. This means that logcosh works mostly like the mean squared error, but will not be so strongly affected by the occasional wildly incorrect prediction. It has all the advantages of Huber loss, and it's twice differentiable everywhere, unlike Huber loss.
5. Quantile Loss
In most of the real world prediction problems, we are often interested to know about the uncertainty in our predictions. Knowing about the range of predictions as opposed to only point estimates can significantly improve decision making processes for many business problems.
Quantile loss functions turn out to be useful when we are interested in predicting an interval instead of only point predictions. Prediction interval from least square regression is based on an assumption that residuals (y−y^)(y - \hat{y})(y−y^) have constant variance across values of independent variables. We can not trust linear regression models which violate this assumption. We can not also just throw away the idea of fitting linear regression model as baseline by saying that such situations would always be better modeled using non-linear functions or tree based models. This is where quantile loss and quantile regression come to rescue as regression based on quantile loss provides sensible prediction intervals even for residuals with non-constant variance or non-normal distribution.
Let's see a working example to better understand why regression based on quantile loss performs well with heteroscedastic data.
Quantile regression vs. Ordinary Least Square regression
Left: Linear relationship b/w X1 and Y. With constant variance of residuals. Right: Linear relationship b/w X2 and Y but variance of Y increases with X2. (Heteroscedasticity)
Orange line represents OLS estimates for both cases
Quantile Regression. Dotted lines represent regression based 0.05 and 0.95 quantile loss functions
Understanding the quantile loss function
Quantile-based regression aims to estimate the conditional "quantile" of a response variable given certain values of predictor variables. Quantile loss is actually just an extension of MAE (when quantile is 50th percentile, it's MAE).
The idea is to choose the quantile value based on whether we want to give more value to positive errors or negative errors. Loss function tries to give different penalties to overestimation and underestimation based on the value of chosen quantile (γ\gammaγ). For example, a quantile loss function of γ=0.25\gamma = 0.25γ=0.25 gives more penalty to overestimation and tries to keep prediction values a little below median
Lγ(y,yp)=∑i=yi<yip(γ−1)⋅∣yi−yip∣+∑i=yi≥yip(γ)⋅∣yi−yip∣L_\gamma(y,y^p)=\sum_{i=y_i < y_i^p} (\gamma - 1) \cdot |y_i - y_i^p| + \sum_{i=y_i \geq y_i^p} (\gamma) \cdot |y_i - y_i^p|Lγ(y,yp)=∑i=yi<yip(γ−1)⋅∣yi−yip∣+∑i=yi≥yip(γ)⋅∣yi−yip∣
γ\gammaγ is the required quantile and has value between 0 and 1.
Plot of Quantile Loss (Y-axis) vs. Predictions (X-axis). True value of Y = 0
We can also use this loss function to calculate prediction intervals in neural nets or tree based models. Below is an example of Sklearn implementation for gradient boosted tree regressors.
Prediction Intervals using Quantile loss (Gradient Boosting Regressor) from Scikit Learn
Above figure shows 90% prediction interval calculated using quantile loss function available in GradientBoostingRegression of sklearn library. The upper bound is constructed γ=0.95\gamma = 0.95γ=0.95 and lower bound using γ=0.05\gamma = 0.05γ=0.05.
Comparison Study:
A nice comparison simulation is provided in "Gradient boosting machines, a tutorial". To demonstrate the properties of the all the above loss functions, they've simulated a dataset sampled from a sinc(x) function with two sources of artificially simulated noise: the gaussian noise component ε∼N(0,σ2)\varepsilon \sim \mathcal{N}(0, \sigma^2)ε∼N(0,σ2) and the impulsive noise component ξ∼Bern(p)\xi \sim \text{Bern}(p)ξ∼Bern(p). The impulsive noise term is added to illustrate the robustness effects. Below are the results of fitting a GBM regressor using different loss functions.
Continuous loss functions: (A) MSE loss function; (B) MAE loss function; (C) Huber loss function; (D) Quantile loss function. Demonstration of fitting a smooth GBM to a noisy sinc(x) data: (E) original sinc(x) function; (F) smooth GBM fitted with MSE and MAE loss; (G) smooth GBM fitted with Huber loss with delta = {4, 2, 1}; (H) smooth GBM fitted with Quantile loss with alpha = {0.5, 0.1, 0.9}.
Some observations from the simulations:
The predictions from model with MAE loss is less affected by the impulsive noise whereas the predictions with MSE loss function is slightly biased due to the caused deviations.
The predictions are little sensitive to value of hyperparameter chosen in case of model with huber loss.
The quantile losses give a good estimation of the corresponding confidence levels.
All the loss functions in single plot.
What is the 0-1 loss function? Why can't the 0-1 loss function or classification error be used as a loss function for optimizing a deep neural network?
Loss functions are for introducing some kind of metric that we can measure the "cost" of an incorrect decision with.
So let's say I have a dataset of 30 objects, I divided them to training/testing\text{training} / \text{testing}training/testing sets like 20/1020 / 1020/10. I will be using 0-1 loss function, so lets say my set of class labels is M and the function looks like this:
L(i, j) = \right \begin{Bmatrix} 0 \qquad i = j \ 1 \qquad i \ne j \end{matrix} \qquad i,j \in M
So I builded some model on my training data, lets say I am using Naive Bayes classifier, and this model classified 7 objects correctly (assigned them the correct class labels) and 3 objects were classified incorrectly.
So my loss function would return "0" 7 times and "1" 3 times - i.e. our model classified 30%30 \%30% of the objects incorrectly.
The 0−10-10−1 loss function is non-convex and discontinuous, so (sub)gradient methods cannot be applied. For binary classification with a linear separator, this loss function can be formulated as finding the β\betaβ that minimizes the average value of the indicator function 1(yiβxi≤0)\mathbf{1}(y_i \beta x_i \leq \mathbf{0})1(yiβxi≤0) over all iii samples. This is exponential in the inputs, as since there are two possible values for each pair, there are 2n2^n2n possible configurations to check for n total sample points. This is known to be NP-hard. Knowing the current value of your loss function doesn't provide any clue as to how you should possibly modify your current solution to improve, as you could derive if gradient methods for convex or continuous functions were available.
Let us consider a dynamic system whose present state is a function of (depends on) the previous state. It can be expressed compactly, with cycles, as below:
state,st=f(st−1;θ)\text{state}, s_{t} = f( s_{t-1} ; \theta )state,st=f(st−1;θ)
This is a recursive/recurrent definition. The state at time 'ttt', sts_tst is a function (fff) of previous state st−1s_{t-1}st−1, parameterized by θ\thetaθ. This equation needs to be unfolded as follows:
s3=f(s2;θ)=f(f(s1;θ)θ)s_{3} = f( s_{2}; \theta ) = f( f( s_{1}; \theta ) \theta )s3=f(s2;θ)=f(f(s1;θ)θ)
Consider a little more complex system, whose state not only depends on the previous state, but also on an external signal 'xxx'.
st=f(st−1,xt;θ)s_{t} = f( s_{t-1}, x_{t}; \theta )st=f(st−1,xt;θ)
The system takes a signal xtx_txt as input during time step 'ttt', updates it's state based on the influence of xtx_txt and the previous state, st−1s_{t-1}st−1. Now, let's try to think of such a system as a neural network. The state of the system can be seen as the hidden units of a neural network. The signal 'xxx' at each time step, can be seen as a sequence of inputs given to the neural network, one input per time step. At this point, you should know that we are using the term "time step" interchangeably with steps in a sequence.
The state of such a neural network can be represented with hidden layer hth_tht, as follows:
ht=f(ht−1,xt;θ)h_{t} = f( h_{t-1}, x_{t}; \theta )ht=f(ht−1,xt;θ)
The neural network we just described, has recursion/recurrence in it. Let's start calling it an RNN. It can be compactly represented as a cyclic circuit diagram, where the input signal is processed with a delay. This cyclic circuit can be unfolded into a graph as follows:
Notice the repeated appliation of function 'f'.
Let gtg_tgt be the function that represents the unfolded graph after 'ttt' time steps.
Now we can express the hidden state at time 'ttt', two ways:
as a recurrence relation, as we have seen before: ht=f(ht−1,xt;θ)h_t=f(h_t-1,x_t;\theta)ht=f(ht−1,xt;θ)
as an unfolded graph: ht=gt(xt,xt−1,...x1)h_t=g_t(x_t,x_{t-1},...x_1)ht=gt(xt,xt−1,...x1)
Suriyadeepan Ram's Blog gives excellent examples of extending this to forward propagation through RNNs, Markov properties, converting vectors to sequences, Seq2Seq, bidirectional RNNs, and deep RNNs.
'unfolding' is dependent on the length of the input sequence. To understand this, suppose you want to lay down the exact computations that are happening in an RNN, in that case, you have to 'unfold' your network and the size of your 'unfolded' graph would depend on the size of the input sequence. For more information refer to this page. It says that "By unrolling we simply mean that we write out the network for the complete sequence. For example, if the sequence we care about is a sentence of 5 words, the network would be unrolled into a 5-layer neural network, one layer for each word."
The unfolding process introduces two major advantages:
Regardless of sequence length, learned model has same input size (because it is specified in terms of transition from one state to another state rather than specified in terms of a variable length history of states)
Possible to use same function fff with same parameters at every step
What does the output of the hidden layer of a RNN at any arbitrary time t represent?
The output of the hidden layer at time ttt (hth_tht) represents the outputs of a function on the input xxx at time ttt and the ouput of the hidden layer from time t−1t-1t−1 (ht−1h_{t-1}ht−1)
Like standard backpropagation, [backpropagation through time] consists of a repeated application of the chain rule. The subtlety is that, for recurrent networks, the loss function depends on the activation of the hidden layer not only through its influence on the output layer, but also through its influence on the hidden layer at the next timestep.
— Supervised Sequence Labelling with Recurrent Neural Networks, 2008
Compared to architectures like 1D Convolutional layers, looking back and looking ahead are tasks that are more demanding for RNNs (this may require constructing a Bi-directional RNN).
RNNs are limited in how far back they can remember information from. While LSTMs are an improvement, remembering information from more than 10,000 time steps back is extremely difficult. This is made more difficult if the time series data is extremely varied (and could subject the RNN to gradient instability earlier in training).
Another issue of RNNs is that they are not hardware friendly. It takes a lot of resources we do not have to train these network fast. Also it takes much resources to run these model in the cloud, and given that the demand for speech-to-text is growing rapidly, the cloud is not scalable.
RNNs and LSTM are difficult to train because they require memory-bandwidth-bound computation, which is the worst nightmare for hardware designer and ultimately limits the applicability of neural networks solutions. In short, LSTM require 4 linear layer (MLP layer) per cell to run at and for each sequence time-step. Linear layers require large amounts of memory bandwidth to be computed, in fact they cannot use many compute unit often because the system has not enough memory bandwidth to feed the computational units. And it is easy to add more computational units, but hard to add more memory bandwidth (note enough lines on a chip, long wires from processors to memory, etc). As a result, RNN/LSTM and variants are not a good match for hardware acceleration.
RNN Design Pattern
One-to-one (Tx=Ty=1T_x=T_y=1Tx=Ty=1) Traditional neural network
One-to-many Tx=1,Ty>1T_x=1,T_y>1Tx=1,Ty>1 Music generation
Many-to-one Tx>1,Ty=1T_x>1,T_y=1Tx>1,Ty=1 Sentiment classification
Many-to-many Tx=TyT_x=T_yTx=Ty Name entity recognition
Many-to-many Tx≠TyTx \neq TyTx=Ty Machine translation
Recurrent neural networks, also known as RNNs, are a class of neural networks that allow previous outputs to be used as inputs while having hidden states. They are typically as follows:
For each timestep ttt, the activation a<t>a^{< t >}a<t> and the output y<t>y^{< t >}y<t> are expressed as follows:
a<t>=g1(Waaa<t−1>+Waxx<t>+ba)andy<t>=g2(Wyaa<t>+by)\boxed{a^{< t >}=g_1(W_{aa}a^{< t-1 >}+W_{ax}x^{< t >}+b_a)}\quad \\ \text{and} \\ \quad \boxed{y^{< t >}=g_2(W_{ya}a^{< t >}+b_y)}a<t>=g1(Waaa<t−1>+Waxx<t>+ba)andy<t>=g2(Wyaa<t>+by)
where Wax,Waa,Wya,ba,byW_{ax}, W_{aa}, W_{ya}, b_a, b_yWax,Waa,Wya,ba,by are coefficients that are shared temporally and g1,g2g_1, g_2g1,g2 activation functions.
the loss function L\mathcal{L}L of all time steps is defined based on the loss at every time step as follows: L(y^,y)=∑t=1TyL(y^<t>,y<t>)\boxed{\mathcal{L}(\widehat{y},y)=\sum_{t=1}^{T_y}\mathcal{L}(\widehat{y}^{< t >},y^{< t >})}L(y,y)=t=1∑TyL(y<t>,y<t>)
One challenge of this loss is that it is difficult to capture long term dependencies because of multiplicative gradients that can be exponentially decreasing/increasing with respect to the number of layers
Backpropagation is done at each point in time in the update process for an RNN. At timestep TTT, the derivative of the loss LLL with respect to weight matrix WWW is expressed as follows:
∂L(T)∂W=∑t=1T∂L(T)∂W∣(t)\boxed{\frac{\partial \mathcal{L}^{(T)}}{\partial W}=\sum_{t=1}^T\left.\frac{\partial\mathcal{L}^{(T)}}{\partial W}\right|_{(t)}}∂W∂L(T)=t=1∑T∂W∂L(T)∣∣∣∣(t)
An RNN that has only output-to-hidden layer recurrence is what's known as a one-to-many RNN. Such RNNs may take in an input at only the beginning time-step, and for the rest o the network use only its own outputs as sources for subsequent hidden layers. Such networks are extremely useful for generating sequences from a very small or short sequence. However, compared to hidden-to-hidden recurrence, they may forget data from time-steps further in the past (i.e., more gradient instability). As time goes on, more of the data the network learns from may consist of the network's own outputs (hence more limited in applications).
Teacher forcing is a strategy for training recurrent neural networks that uses model output from a prior time step as an input. Teacher forcing works by using the actual or expected output from the training dataset at the current time step y(t)y(t)y(t) as input in the next time step X(t+1)X(t+1)X(t+1), rather than the output generated by the network.
When we have both hidden-to-hidden and output-to-hidden feedbacks in recurrent neural networks, we can use both back propagation through time (BPTT) and teacher forcing learning methods.The main advantage of teacher forcing is to parallelize training of different time steps, where BPTT cannot exploit such parallelization.
Unfortunately, too strict teacher forcing can result in problems in generation as small prediction error compound in the conditioning context. This can lead to poor prediction performance as the RNN's conditioning context (the sequence of previously generated samples) diverge from sequences seen during training
The disadvantage of strict teacher forcing arises if the network is going to be later used in a closed-loop mode, with the network outputs (or samples from the output distribution) fed back as input. In this case, the fed-back inputs that the network sees during training could be quite dfferent from the kind of inputs that it will see at test time.
In short, strict teacher forcing can result in poor performance on test or validation sequences.
To train an RNN, people usually use backpropagation through time (BPTT), which means that you choose a number of time steps NN, and unroll your network so that it becomes a feedforward network made of NN duplicates of the original network, while each of them represents the original network in another time step.
unrolling an RNN
So BPTT is just unrolling your RNN, and then using backpropagation to calculate the gradient (as one would do to train a normal feedforward network).
Because our feedforward network was created by unrolling, it is NNN times as deep as the original RNN. Thus the unrolled network is often very deep.
In deep feedforward neural networks, backpropagation has "the unstable gradient problem", as Michael Nielsen explains in the chapter Why are deep neural networks hard to train? (in his book Neural Networks and Deep Learning):
[…] the gradient in early layers is the product of terms from all the later layers. When there are many layers, that's an intrinsically unstable situation. The only way all layers can learn at close to the same speed is if all those products of terms come close to balancing out.
I.e. the earlier the layer, the longer the product becomes, and the more unstable the gradient becomes. (For a more rigorous explanation, see this answer.)
Another issue is that the product that gives the gradient contains many instances of the same term. I.e., The product that gives the gradient includes the weights of every later layer.
So in a normal feedforward neural network, this product for the dddth-to-last layer might look like:
w1⋅α1⋅w2⋅α2⋅ ⋯ ⋅wd⋅αdw_1\cdot\alpha_{1}\cdot w_2\cdot\alpha_{2}\cdot\ \cdots\ \cdot w_d\cdot\alpha_{d}w1⋅α1⋅w2⋅α2⋅ ⋯ ⋅wd⋅αd
Nielsen explains that (with regard to absolute value) this product tends to be either very big or very small (for a large ddd).
But in an unrolled RNN, this product would look like:
w⋅α1⋅w⋅α2⋅ ⋯ ⋅w⋅αdw\cdot\alpha_{1}\cdot w\cdot\alpha_{2}\cdot\ \cdots\ \cdot w\cdot\alpha_{d}w⋅α1⋅w⋅α2⋅ ⋯ ⋅w⋅αd
as the unrolled network is composed of duplicates of the same network.
Whether we are dealing with numbers or matrices, the appearance of the same term ddd times means that the product is much more unstable (as the chances are much smaller that "all those products of terms come close to balancing out").
And so the product (with regard to absolute value) tends to be either exponentially small or exponentially big (for a large ddd).
In other words, the fact that the unrolled RNN is composed of duplicates of the same network makes the unrolled network's "unstable gradient problem" more severe than in a normal deep feedforward network.
In non-recurrent feedforward networks, it is possible to produce unstable gradient problems with a sufficiently large network (and one without regularization). That being said, this is rarely the case, and the number of layers gradients have to be passed through in a very deep MLP is typically far less than the number of time-steps for which gradients in an RNN need to be computed.
The main reasons for the vanishing/exploding gradient problem are the following traits of BPTT in RNNs:
An unrolled RNN tends to be a very VERY deep network.
In an unrolled RNN the gradient in an early layer is a product that (also) contains many instances of the same term.
All RNNs have feedback loops in the recurrent layer. This lets them maintain information in 'memory' over time. But, it can be difficult to train standard RNNs to solve problems that require learning long-term temporal dependencies. This is because the gradient of the loss function decays exponentially with time (called the vanishing gradient problem).
LSTM networks are a type of RNN that uses special units in addition to standard units. LSTM units include a 'memory cell' that can maintain information in memory for long periods of time. A set of gates is used to control when information enters the memory, when it's output, and when it's forgotten. This architecture lets them learn longer-term dependencies. GRUs are similar to LSTMs, but use a simplified structure. They also use a set of gates to control the flow of information, but they don't use separate memory cells, and they use fewer gates.
Structural Differences
While both GRUs and LSTMs contain gates, the main difference between these two structures lies in the number of gates and their specific roles. The role of the Update gate in the GRU is very similar to the Input and Forget gates in the LSTM. However, the control of new memory content added to the network differs between these two.
In the LSTM, while the Forget gate determines which part of the previous cell state to retain, the Input gate determines the amount of new memory to be added. These two gates are independent of each other, meaning that the amount of new information added through the Input gate is completely independent of the information retained through the Forget gate.
As for the GRU, the Update gate is responsible for determining which information from the previous memory to retain and is also responsible for controlling the new memory to be added. This means that the retention of previous memory and addition of new information to the memory in the GRU is NOT independent.
Another key difference between the structures is the lack of the cell state in the GRU, as mentioned earlier. While the LSTM stores its longer-term dependencies in the cell state and short-term memory in the hidden state, the GRU stores both in a single hidden state. However, in terms of effectiveness in retaining long-term information, both architectures have been proven to achieve this goal effectively.
Speed Differences
GRUs are faster to train as compared to LSTMs due to the fewer number of weights and parameters to update during training. This can be attributed to the fewer number of gates in the GRU cell (two gates) as compared to the LSTM's three gates.
In the code walkthrough further down in this article, we'll be directly comparing the speed of training an LSTM against a GRU on the exact same task.
The accuracy of a model, whether it is measured by the margin of error or proportion of correct classifications, is usually the main factor when deciding which type of model to use for a task. Both GRUs and LSTMs are variants of RNNS and can be plugged in interchangeably to achieve similar results.
Gradient clipping is a technique used to cope with the exploding gradient problem sometimes encountered when performing backpropagation. By capping the maximum value for the gradient, this phenomenon is controlled in practice.
Adam and RMSProp make multiplicative adjustments to gradients, but this is not the same as Gradient clipping. Gradient clipping involves setting predetermined cutoffs beyond which the gradients cannot go.
Traditional RNNs and Similarity to Bayes Rule In essence, Bayesian means probabilistic. The specific term exists because there are two approaches to probability (bayesian and frequentist). Bayesians think of it as a measure of belief, so that probability is subjective and refers to the future. Specificaly, Bayesians use Bayes' Rule to update a probability based on a predictor prior probability, likelihood, and class prior probability using the sum rule (p(X)=∑Yp(X,Y)p(X)= \sum_Y p(X, Y)p(X)=∑Yp(X,Y)) and product rule (p(X,Y)=p(Y∣X)p(X)p(X, Y) = p(Y | X)p(X)p(X,Y)=p(Y∣X)p(X)) of probability. Changing probabilities with new updates through time is similar to what RNNs do.
For example, if we want to predict a given category ccc of data XXX (contains data points x1,....,xnx_1, ...., x_nx1,....,xn)that will be chosen from a sample next (for example, ccc could be one of a variety of words from a dictionary), after already having done nnn samples (or having passed through nnn words already), we could use Bayes rule as follows:
P(c∣x))Posterior Probability=P(x∣c)likelihoodP(c)Class Prior ProbabilityP(x)predictor prior probability\underset{\text{Posterior Probability}}{P(c | x))} = \frac{\overset{\text{likelihood}}{P(x|c)} \overset{\text{Class Prior Probability}}{P(c)}}{\underset{\text{predictor prior probability}}{P(x)}}Posterior ProbabilityP(c∣x))=predictor prior probabilityP(x)P(x∣c)likelihoodP(c)Class Prior Probability
p(c∣X)=p(x1∣c)×p(x2∣c)×...×p(xn∣c)×p(c)p(c | X) = p(x_1 | c) \times p(x_2 | c) \times ... \times p(x_n | c) \times p(c)p(c∣X)=p(x1∣c)×p(x2∣c)×...×p(xn∣c)×p(c)
In short, the way classical RNNs update (and by extension LSTMs and GRUs) is very similar to how Bayes' rule would be applied to time-series data.
Bayesian RNNs
It is also possible to update RNNs through variational Bayes. This is discussed further in a paper out of DeepMind: "Bayesian Recurrent Neural Networks"
No, you cannot use Batch Normalization on a recurrent neural network, as the statistics are computed per batch, this does not consider the recurrent part of the network. Weights are shared in an RNN, and the activation response for each "recurrent loop" might have completely different statistical properties
Other techniques similar to Batch Normalization that take these limitations into account have been developed, for example Layer Normalization. There are also reparametrizations of the LSTM layer that allow Batch Normalization to be used, for example as described in Recurrent Batch Normalization by Coijmaans et al. 2016.
Some DL frameworks make it seem like you can use batch normalization in RNNs, but these typically involve reparametrized LSTMs and not vanilla recurrent networks (which need modifications or a different form of batch normalization)
In machine learning, Autoencoding is reducing the dimensions of the dataset while learning how to ignore noise. An Autoencoder is an unsupervised artificial neural network used for learning. Specifically, it learns how to compress data then reconstructing it back to a representation close to the original form.
Autoencoders were traditionally used for pre-training of Artificial Neural Networks. According to the history provided in Schmidhuber, "Deep learning in neural networks: an overview," Neural Networks (2015), auto-encoders were proposed as a method for unsupervised pre-training in Ballard, "Modular learning in neural networks," Proceedings AAAI (1987). It's not clear if that's the first time auto-encoders were used, however; it's just the first time that they were used for the purpose of pre-training ANNs.
Generative models typicaly fall into either unconditional models (sampling xxx from a probability distribution p(x)p(x)p(x)) or conditional models (sampling xxx from a probability distribution p(x∣y)p(x | y)p(x∣y)). In both cases, being able to generate complex outputs from a learnable function (or learnable latent variables) is extremely useful. Autoencoders are useful for generating these lower-dimensional representations from which diverse-yet-realistic outputs can be obtained.
Unlike general feedforward networks, autoencoders may also be trained using recirculation (Hinton and McClelland, 1988), a learning algorithm based on comparing the activations of the network on the original input to the activations on the reconstructed input. Recirculation is regarded as more biologically plausible than back-propagation but is rarely used for machine learning applications. Specific recirculation algorithms vary between some autoencoder types
Many autoencoders work well with loss functions like the mean squared error loss function. Adding regularization to the loss is useful for giving the autoencoder other properties besides the ability to directly copy its input to its output. Many other regression loss functions also work well with autoencoders.
Cross-entropy and other classification loss functions are generaly poor choices for most autoencoders.
In its simplest form, the autoencoder is a three layers net, i.e. a neural net with one hidden layer. The input and output are the same, and we learn how to reconstruct the input.
Such an architecture can be optimal (i.e., lowest training error for a given hidden layer size) if it has the following properties:
Tied Weights
Weight Orthognoality
Feature non-correlation
Unit norm of the weights being 1
PCA is restricted to a linear map, while auto encoders can have nonlinear enoder/decoders.
A single layer auto encoder with linear transfer function is nearly equivalent to PCA, where nearly means that the WWW found by an autoencoder and PCA won't be the same—but the subspace spanned by the respective WWW's will.
We can an autoencoder very powerful by increasing the size of the hidden layers. Increasing these hyperparameters will let the autoencoder learn more complex codings. But, we should be careful to not make it too powerful, otherwise the autoencoder will simply learn to copy its inputs to the output, without learning any meaningful representation. It will just mimic the identity function. The autoencoder will reconstruct the training data perfectly, but it will be overfitting without being able to generalize to new instances, which is not what we want. This is why we use a "sandwich" architecture with hidden layer sizes smaller than the inputs or outputs (also known as a "bottleneck" architecture).
An undercomplete autoencoder has no explicit regularization term - we simply train our model according to the reconstruction loss. Thus, our only way to ensure that the model isn't memorizing the input data is the ensure that we've sufficiently restricted the number of nodes in the hidden layer(s). As such, it is typically used in dimensionality reduction cases.
If too much of the input to the undercomplete autoencoder is random noise or irrelevant data, the autoencoder might not be able to learn any meaningful representations of the data.
In overcomplete autoencoders, where the dimension of the latent representation is greater than the input. In these cases, even a linear encoder and linear decoder can learn to copy the input to the output without learning anything useful about the data distribution.
We can regularize the autoencoder by using a sparsity constraint such that only a fraction of the nodes would have nonzero values, called active nodes. In particular, we add a penalty term to the loss function such that only a fraction of the nodes become active. This forces the autoencoder to represent each input as a combination of small number of nodes, and demands it to discover interesting structure in the data. This method works even if the code size is large, since only a small subset of the nodes will be active at any time.
The generative section of a generative autoencoder (i.e., the decoder) mimics the process by which the input data was created. Rather than reconstructing input data as closely as possible, the training process forces the generative autoencoder to produce outputs that fit within a distribution of the traning data features. As such, regularization through sparsity constraints is redundant.
A sparse autoencoder is simply an autoencoder whose training criterion involves a sparsity penalty. In most cases, we would construct our loss function by penalizing activations of hidden layers so that only a few nodes are encouraged to activate when a single sample is fed into the network. We can construct our sparsity penalty with techniques like L1L_1L1 regularization or KL-divergence.
The intuition behind this method is that fewer nodes activating while still keeping the autoencoder's performance during training would guarantee that the autoencoder is actually learning latent representations instead of redundant information in our input data.
Denoising autoencoders purposefully corrupt the data they are trained on by randomly setting some of the input values to zero. This makes them more robust against noise and very useful for feature selection and extraction.
With the overcomplete problem, an autoencoder risks learning the "identity function" or "null function". This means the output and input are identical. Randomly setting some of the inputs to 0 (usually around 50%, or sometimes as low as 30% depending on the amount of data) makes it much harder to learn an exact, uncompressed representation of all the input data.
Score matching (SM) is an alternative to the maximum likelihood principle suitable for unnormalized probability density models whose partition function is intractable. Score matching searches for parameters that are more robust to small-noise perturbations of the training data.
Pascal Vincent from the University of Montreal proved that raining an denoising autoencoder defined is equivalent to performing score matching (explicit or implicit) with a described energy function on Parzen density estimate qσq_\sigmaqσ. Such a training would typically use stochastic gradient descent, whereby samples from qσq_\sigmaqσ are obtained by corrupting samples from the input dataset. This can all be carried out with a variety of optimization objective formulations. In other words, this paper demonstrates the equivalence between SM & certain DAEs.
Restricted Boltzman Machines (RBMs) share a similar idea as autoencoders, but use a stochastic approach. Instead of deterministic (e.g. logistic or ReLU) it uses stochastic units with particular (usually binary of Gaussian) distribution. The learning procedure consists of several steps of Gibbs sampling (propagate: sample hiddens given visibles; reconstruct: sample visibles given hiddens; repeat) and adjusting the weights to minimize reconstruction error.
The intuition behind RBMs is that there are some visible random variables (e.g. film reviews from different users) and some hidden variables (like film genres or other internal features), and the task of training is to find out how these two sets of variables are actually connected to each other
Both models have their use cases, pros and cons, but probably the most important properties are:
Autoencoders are simplest ones. They are intuitively understandable, easy to implement and to reason about (e.g. it's much easier to find good meta-parameters for them than for RBMs).
RBMs are generative. That is, unlike autoencoders that only discriminate some data vectors in favour of others, RBMs can also generate new data with given joined distribution. They are also considered more feature-rich and flexible.
Even though input data for AI tasks such as image, text, audio reside in a high dimensional space (e.g. 28×2828 \times 2828×28 black and white image has 784784784 degrees of freedom yielding 27842^{784}2784 possible images), most uniformly sampled output (for instance sampling from those 27842^{784}2784 possible images) would not be naturally occurring images.
The basic idea of the manifold hypothesis is that there exists a lower dimensional manifold in which these naturally occurring images actually lie. So the model learning task becomes learning to output representations that map the naturally occurring images in the high dimensional input space to the low dimensional manifold. The idea is that the small variations of the naturally occurring images (e.g. rotations) etc are mapping to corresponding changes in the learned representation (see figure above) in the low dimensional manifold. PCA is an example of a manifold mapping algorithm where the manifold is linear. Autoencoders are inspired by the manifold hypothesis and learn lower dimensional representations of high dimensional data. Even though autoencoders are known to perform dimensionality reduction, the manifold view gives a deeper understanding of this mapping. Both denoising and contractive autoencoders, by resisting learning non-representative noise, are equiped to learn lower-dimensional latent representations that approximate such a manifold.
Contractive autoencoders are a type of regularized autoencoder. Contractive autoencoders penalize large derivatives in the encoding hhh of input data xxx, with a pentalty term λ\lambdaλ (∇x\nabla_x∇x is a scaling factor).
Ω(h,x)=λ∑i∥∇xhi∥2\Omega(h, x) = \lambda \sum_i \| \nabla_xh_i \|^2Ω(h,x)=λ∑i∥∇xhi∥2
Optimal values have a very small (specifically, 0) derivative. In gradient descent, we take large steps when our loss function's derivative is large, and small steps when our loss function's derivative is small. Contractive autoencoders are similar; they grow your loss function when your derivative is large, encouraging your model to take larger steps during gradient descent.
L(x,g(f(x)))+Ω(h,x)L(x, g(f(x))) + \Omega(h, x)L(x,g(f(x)))+Ω(h,x)
Penalizing large derivatives in the encoding (like the kinds that would appear in learning noise that deviates from the latent representation) reduces the likelihood of learning an "identity function" of the input data.
The name "contractive" comes from the fact that the CAE is encouraged to map a neighborhood of input points to a smaller neighborhood of output points.
Since the goal of CAEs is robustness of representation (i.e., feature extraction), having training data that's too uniform risks undoing the regularizing safeguards of the architecture. This can be averted with larger amounts of training data.
When training a contractive autoencoder, contractive loss usually needs to be defined separately. An example of a CAE implementation (in Keras) is as follows:
from keras.layers import Input, Dense
from keras.models import Model
import keras.backend as K
lam = 1e-4
inputs = Input(shape=(N,))
encoded = Dense(N_hidden, activation='sigmoid', name='encoded')(inputs)
outputs = Dense(N, activation='linear')(encoded)
model = Model(input=inputs, output=outputs)
def contractive_loss(y_pred, y_true):
mse = K.mean(K.square(y_true - y_pred), axis=1)
W = K.variable(value=model.get_layer('encoded').get_weights()[0]) # N x N_hidden
W = K.transpose(W) # N_hidden x N
h = model.get_layer('encoded').output
dh = h * (1 - h) # N_batch x N_hidden
# N_batch x N_hidden * N_hidden x 1 = N_batch x 1
contractive = lam * K.sum(dh**2 * K.sum(W**2, axis=1), axis=1)
return mse + contractive
model.compile(optimizer='adam', loss=contractive_loss)
model.fit(X, X, batch_size=N_batch, nb_epoch=5)
"Stacking" is to literally feed the output of one autoencoder to the input of the next autoencoder. As the name suggests, a stacked autoencoder is a bunch of smaller autoencoders stacked on top of each other. Stacked Denoising Autoencoders are a tool for unsupervised/semisupervised learning.
Any deep network is created by stacking layers. It's true that if there were no non-linearities in the layers you could collapse the entire network to a single layer, but there are non-linearities and you can't. "Stacking" isn't generally used to describe connecting simple layers, but that's what it is, and stacking autoencoders — or other blocks of layers — is just a way of making more complex networks.
There are a lot of comparisons to be made between autoencoders and PCA is essentially a linear transformation but Auto-encoders are capable of modelling complex non linear functions. PCA features are totally linearly uncorrelated with each other since features are projections onto the orthogonal basis, but autoencoded features might have correlations since they are just trained for accurate reconstruction.
A single layered autoencoder with a linear activation function is very similar to PCA. However, a deep autoencoder can easily beat the reconstruction quality of PCA (though it's important to make sure regularization is used so the reconstruction isn't just a complex identity function).
From Yann LeCunn's lab at NYU
One problem with traditional sparse coding is that inference is somewhat slow. Give an input vector, finding the corresponding code vector requires an L2/L1L_2/L_1L2/L1 optimization. Having to do this for every patch in an image would preclude the use of sparse coding for high-speed image recognition.
Predictive Sparse Decomposition (PSD) alleviates this problem by training a simple, feed-forward encoder module to predict an approximation to the optimal sparse code. The factor graph is shown below.
This could be seen as a kind of auto-encoder. The energy function has an additional term, which measure the discrepancy between the predicted code and actual code. The encoder can take several function forms. In the simplest instance, it is simply a linear transform, followed by a sigmoid non-linearity, and a trainable diagonal gain matrix.
The animation below shows the filters being learned by the PSD learning procedure, when trained on 12x12 pixel patches from natural images.
One popular application of autoencoding is Data denoising, which is doable on data ranging from image inputs to audio data.
Another application is Dimensionality reduction. Working with high dimensional data presents lots of challenges, one of which is visualization. Autoencoders are usually used as a preprocessing stage to visualization methods such as t-SNE.
Other examples of applications include data compression, feature extraction, image generation, and sequence-to-sequence prediction
Representation learning is learning representations of input data typically by transforming it or extracting features from it(by some means), that makes it easier to perform a task like classification or prediction. There are various ways of learning different representations. For instance,
in the case of probabilistic models, the goal is to learn a representation that captures the probability distribution of the underlying explanatory features for the observed input. Such a learnt representation can then be used for prediction.
in deep learning, the representations are formed by composition of multiple non-linear transformations of the input data with the goal of yielding abstract and useful representations for tasks like classification, prediction etc.
Focussing specifically on deep learning, representation learning is the consequence of the function a model learns where the learning is captured in the parameters of the model, as the function transforms input to output, during training. Representation learning here is referring to the nature/characteristics of the transformed input - not the model parameters/ function that is causal to it. The casual role is played both by the architecture of the model, and the learned parameters (e.g. does a parameter play a role in representing part or all of the input etc.) in mapping input to output.
Deep learning is but one of the many ways to learn representations and "depth" in deep learning just happens to be one of the many factors to learning a good representation, even though it is an important one.
In deep learning, "depth" refers to a hierarchical organization of explanatory features. As humans we describe the world using a hierarchy of concepts, with more abstract concepts layered on top of less abstract ones. Similarly, deep learning models learn functions that transform input to output using a composition of non-linear functions stacked in layers, where the output of layers form hierarchy of distributed representations with increasing levels of abstraction as input flows through them.
In addition to outputting progressive levels of abstract feature representations, deep learning architectures also enable feature reuse. Just as features are reused to represent different input regions in distributed representation, depth allows for feature reuse across layers by the multiple circuit paths in the computational graph from input to output through the nodes in the layers of the network.
They are both classification problems but with different requirements on how many training examples you get. Normally, you get many training examples (e.g. thousands or more) per category before you are tested on things that you haven't seen before.
In one-shot learning (first defined in "One-Shot learning of object categories"), you get only 1 or a few training examples in some categories. For example, in some classification tasks, there may be little-to-no data or the number of each category may be severely imbalanced (such as in medical diagnoses, where healthy examples may be far more common than a rare disease). For machine learning areas like semi-supervised classification, ,some of the data may be only partly labelled. The model will need to label new incoming data, and correctly use this newly labelled data to learn features of the category more. This whole process is dependent on how well the model can learn the category features in the beginning with few examples to draw on.
In zero-shot learning (first defined in "Zero-Shot Learning with Semantic Output Codes" ), you are not presented with every class label in training. So in some categories, you get 0 training examples. One real-world example of this is classification of fMRI data. Because human thoughts can take so many diverse forms, it's more than likely interpreting fMRI patterns would require making sense of previously unseen fMRI patterns.
What trade-offs does representation learning have to consider?
From Deep Learning (by Ian Goodfellow, Yoshua Bengio, and Aaron Courville):
Most representation learning problems face a trade-off between preserving as much information about the input as possible and attaining nice properties (such as independence).
Yoshua Bengio's 2014 paper goes into more detail:
One of the challenges of representation learning that distinguishes it from other machine learning tasks such as classification is the difficulty in establishing a clear objective, or target for training. In the case of classification, the objective is (at least conceptually) obvious, we want to minimize the number of misclassifications on the training dataset. In the case of representation learning, our objective is far-removed from the ultimate objective, which is typically learning a classifier or some other predictor. Our problem is reminiscent of the credit assignment problem encountered in reinforcement learning. We have proposed that a good representation is one that disentangles the underlying factors of variation, but how do we translate that into appropriate training criteria? Is it even necessary to do anything but maximize likelihood under a good model or can we introduce priors such as those enumerated above (possibly data-dependent ones) that help the representation better do this disentangling? This question remains clearly open…
From a practitioners perspective, one can have at least a qualitative sense if not a quantitative one of how good a model's representations are by the different tasks it can be used for. For instance,
the distributed representation of words (word embeddings) output by a simple model like word2vec with just two matrices as it basic architecture, has shown the power of non-local representation learning. Word embeddings have become the de facto representation of words for downstream NLP tasks
The recent BERT model mentioned earlier is another example of how rich distributed representations output by the model can be used for a variety of NLP tasks with very little task specific data and hardly any task specific architecture, to obtain state-of-art results on NLP tasks.
While BERT's use case is one of reusing representations learnt from reconstruction of XXX from a masked version of it, in P(YX,task)P(\frac{Y}{X}, \text{task})P(XY,task), GPT-2 is another recent model that learns representations from a language modeling task and these representations are reused for tasks without any labeled data for tasks that are typically supervised. This is done by cleverly crafting the supervised task as a language modeling task (predicting the next word given current sentence). While the performance of the model is not state-of-art yet on these tasks, it underscores the power and versatility of learnt representations, particularly distributed representations, which is largely responsible for the current successes in NLP.
See Jason Brownlee's blog for more in-depth details
Traditionally, training deep neural networks with many layers was challenging. As the number of hidden layers is increased, the amount of error information propagated back to earlier layers is dramatically reduced. This means that weights in hidden layers close to the output layer are updated normally, whereas weights in hidden layers close to the input layer are updated minimally or not at all. Generally, this problem prevented the training of very deep neural networks and was referred to as the vanishing gradient problem.
An important milestone in the resurgence of neural networking that initially allowed the development of deeper neural network models was the technique of greedy layer-wise pretraining, often simply referred to as "pretraining."
The technique is referred to as "greedy" because the piecewise or layer-wise approach to solving the harder problem of training a deep network. As an optimization process, dividing the training process into a succession of layer-wise training processes is seen as a greedy shortcut that likely leads to an aggregate of locally optimal solutions, a shortcut to a good enough global solution.
Pretraining involves successively adding a new hidden layer to a model and refitting, allowing the newly added model to learn the inputs from the existing hidden layer, often while keeping the weights for the existing hidden layers fixed. This gives the technique the name "layer-wise" as the model is trained one layer at a time.
Broadly, supervised pretraining involves successively adding hidden layers to a model trained on a supervised learning task. Unsupervised pretraining involves using the greedy layer-wise process to build up an unsupervised autoencoder model, to which a supervised output layer is later added.
What were/are the purposes of the above technique?
We can expect the unsupervised pretraining to be more effective when the initial representation is poor. The use of word embeddings is a great example, where learned word embeddings naturally encode similarity between words. Other factors also matter. For example, unsupervised pretraining is likely to be most useful when the function to be learned is extremely complicated.
In short, the technique above reduces the time and search space needed to find the optimal weights of a deep learning model
Unsupervised pretraining combines two ideas:
The choice of initial parameters of a deep neural network can have a significant regularizing effect;
learning about the input distribution can help with learning about the mapping from inputs to outputs.
But both ideas are not fully understood.
From the representation view, the idea is that some features that are useful for unsupervised task are also useful for supervised tasks. But this is not understood at a mathematical or theoretical level.
Unsupervised pretraining may be appropriate when you have a significantly larger number of unlabeled examples that can be used to initialize a model prior to using a much smaller number of examples to fine tune the model weights for a supervised task.
Pretraining can be used to iteratively deepen a supervised model or an unsupervised model that can be repurposed as a supervised model. Pretraining may be useful for problems with small amounts labeled data and large amounts of unlabeled data.
From the regularizing view, it's possible that pretraining initializes the model in a location that would otherwise be in accessible.
Thinking of this process as a regularizer, i.e. to avoid overfitting, we can expect unsupervised pretraining to be most helpful when there is few labeled data and large number of unlabeled data.
Unsupervised pretraining has a very obvious disadvantage of operating with two separate phases. It has also way more hyperparameters, whose effect may be measured after training but are often difficult to predict ahead.
Another disadvantage of having two separate phases is that each phase has its own hyper parameters. The performance of the second phase usually cannot be predicted during the first phase, so there is a long delay between proposing the hyperparameters and updating them.
It's always difficult to tell what aspects of the pretrained parameters are retained during the supervised training stage, making this kind of intractable. There are two ways to overcome this:
train supervised and unsupervised learning simultaneously.
freeze the parameters for the feature extractors and using supervised learning only to add a classifier on top of them.
we can train unsupervised and supervised learning simultaneously. Through this we can reduce the tuning to a single hyperparameter, usually a coefficient attached to the weight of the unsupervised cost.
We can also use validation set error in the supervised phase to select the hyperparameters of the pretraining phase. In practice, some hyperparameters, like the number of iterations, are more conveniently set during the pretraining phase using early stopping.
In computer science, a deterministic algorithm is an algorithm which, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states.
This is why when we use algorithms like neural networks that are not necessarily deterministic, we make them deterministic by setting a seed value.
An example of a Las Vegas Algorithm would be a Randomized Quicksort, in which the pivot is chosen randomly, and divides the elements into three partitions: elements less than pivot, elements equal to pivot, and elements greater than pivot. The randomized QuickSort requires a lot of resources (and can have veried runtime depending on which pivot element is randomly chosen first) but it will always generate the sorted array as an output.
# Input A is an array of n elements
def randomized_quicksort(A):
Partition A into elements < x, x, and >x # as shown in the figure above.
Execute Quicksort on A[1 to i-1] and A[i+1 to n].
Combine the responses in order to obtain a sorted array.
if n = 1:
return A # A is sorted.
i = random.randrange(1, n) # Will take a random number in the range 1~n
X = A[i] # The pivot element
Las Vegas algorithms arise frequently in search problems. For example, one looking for some information online might search related websites for the desired information. The time complexity thus ranges from getting "lucky" and finding the content immediately, to being "unlucky" and spending large amounts of time. Once the right website is found, then there is no possibility of error.
Approximation algorithms are efficient algorithms that find approximate solutions to optimization problems (in particular NP-hard problems) with provable guarantees on the distance of the returned solution to the optimal one. Many of these algorithms are randomized, but can be seeded with a predetermined seed number for a pseudorandom number generator. Hence, despite the "random sampling" , these are deterministic algorithms that will always produce the same output from the same input.
Monte Carlo Algorithms are randomized algorithms whose output may be incorrect with a certain (typically small) probability. If there is a procedure for verifying whether the answer given by a Monte Carlo algorithm is correct, and the probability of a correct answer is bounded above zero, then with probability one running the algorithm repeatedly while testing the answers will eventually give a correct answer.
A common example of a Monte Carlo Algorithm would be estimating the value of π\piπ from points randomly selected (x,y)(x, y)(x,y) points in a circle contained within a square with sides of length 1. We know that area of the square is 1 unit sq while that of circle is π∗(12)2=π4\pi \ast (\frac{1}{2})^{2} = \frac{\pi}{4}π∗(21)2=4π. Now for a very large number of generated points,
area of the circlearea of the square=no. of points generated inside the circletotal no. of points generated or no. of points generated inside the square\frac{\textrm{area of the circle}}{\textrm{area of the square}} = \frac{\textrm{no. of points generated inside the circle}}{\textrm{total no. of points generated or no. of points generated inside the square}}area of the squarearea of the circle=total no. of points generated or no. of points generated inside the squareno. of points generated inside the circle
that is,
π=4∗no. of points generated inside the circleno. of points generated inside the square\pi = 4 \ast \frac{\textrm{no. of points generated inside the circle}}{\textrm{no. of points generated inside the square}}π=4∗no. of points generated inside the squareno. of points generated inside the circle
The beauty of this algorithm is that we don't need any graphics or simulation to display the generated points. We simply generate random (x, y) pairs and then check if x2+y2⩽1x^{2} + y^{2} \leqslant 1x2+y2⩽1. If yes, we increment the number of points that appears inside the circle. In randomized and simulation algorithms like Monte Carlo, the more the number of iterations, the more accurate the result is. This is why care was take to specify this example as "estimating the value of Pi" and not "calculating the value of Pi". The code looks like the following:
INTERVAL= 1000
circle_points, square_points = 0, 0
# Total Random numbers generated= possible x
# values* possible y values
for i in range(INTERVAL**2):
# Randomly generated x and y values from a
# uniform distribution
# Rannge of x and y values is -1 to 1
rand_x= random.uniform(-1, 1)
rand_y= random.uniform(-1, 1)
# Distance between (x, y) from the origin
origin_dist= rand_x**2 + rand_y**2
# Checking if (x, y) lies inside the circle
if origin_dist<= 1:
circle_points+= 1
square_points+= 1
# Estimating value of pi,
# pi= 4*(no. of points generated inside the
# circle)/ (no. of points generated inside the square)
pi = 4* circle_points/ square_points
## print(rand_x, rand_y, circle_points, square_points, "-", pi)
## print("\n")
print("Final Estimation of Pi=", pi)
Here's another visual example with calculating Euler's constant. Being an irrational number the calculation can never reach e exactly, but it does get closer and closer to the true value.
Las Vegas algorithms can be contrasted with Monte Carlo algorithms, in which the resources used are bounded but the answer may be incorrect with a certain (typically small) probability. By an application of Markov's inequality, a Las Vegas algorithm can be converted into a Monte Carlo algorithm by running it for set time and generating a random answer when it fails to terminate. We can also use the same method to can set the bound on the probability that the Las Vegas algorithm would go over the fixed limit.
Las Vegas Algorithm probabilistic certain
Monte Carlo Algorithm certain probabilistic
If a deterministic way to test for correctness is available, then it is possible to turn a Monte Carlo algorithm into a Las Vegas algorithm. However, it is hard to convert Monte Carlo algorithm to Las Vegas algorithm without a way to test the algorithm. On the other hand, changing Las Vegas algorithm to Monte Carlo algorithm is easy. This can be done by running a Las Vegas algorithm for a specific period of time given by confidence parameter. If the algorithm finds the solution within the time, then it is success and if not then output can simply be "sorry".
What are adversarial examples?
Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they're like optical illusions for machines.
Discuss state-of-the-art adversarial attack techniques.
The space of adversarial attacks is much larger than the space of provably robust defences. Some of the more successful attack types include the following (taken from this review paper):
L-BFGS Attack
Fast Gradient Sign Method (FGSM)
One-step Target Class Method (OTCM)
Basic Iterative Method (BIM) and Iterative Least-Likely Class Method (ILLC)
Jacobian-based Saliency Map Attack (JSMA)
DeepFool
CPPN EA Fool
C&W's Attack
Zeroth Order Optimization (ZOO)
Universal Perturbation
One Pixel Attack
Feature Adversary
Hot/Cold method
Natural GAN
Model-based Ensembling Attack
Ground-Truth Attack
GPU Memory leaks
Discuss state-of-the-art defense techniques for adversarial models.
Traditional techniques for making machine learning models more robust, such as weight decay and dropout, generally do not provide a practical defense against adversarial examples. So far, only two methods have provided a significant defense.
Adversarial training: This is a brute force solution where we simply generate a lot of adversarial examples and explicitly train the model not to be fooled by each of them. An open-source implementation of adversarial training is available in the cleverhans library and its use illustrated in the following tutorial.
Defensive distillation: This is a strategy where we train the model to output probabilities of different classes, rather than hard decisions about which class to output. The probabilities are supplied by an earlier model, trained on the same task using hard class labels. This creates a model whose surface is smoothed in the directions an adversary will typically try to exploit, making it difficult for them to discover adversarial input tweaks that lead to incorrect categorization. (Distillation was originally introduced in Distilling the Knowledge in a Neural Network as a technique for model compression, where a small model is trained to imitate a large one, in order to obtain computational savings.)
Yet even these specialized algorithms can easily be broken by giving more computational firepower to the attacker.
Why is it hard to defend against adversarial examples?
Adversarial examples are hard to defend against because it is difficult to construct a theoretical model of the adversarial example crafting process. Adversarial examples are solutions to an optimization problem that is non-linear and non-convex for many ML models, including neural networks. Because we don't have good theoretical tools for describing the solutions to these complicated optimization problems, it is very hard to make any kind of theoretical argument that a defense will rule out a set of adversarial examples.
Adversarial examples are also hard to defend against because they require machine learning models to produce good outputs for every possible input. Most of the time, machine learning models work very well but only work on a very small amount of all the many possible inputs they might encounter.
Most strategies that have been tested so far fail because they is not adaptive: they may block one kind of attack, but they leave another vulnerability open to an attacker who knows about the defense being used. Designing a defense that can protect against a powerful, adaptive attacker is an important research area.
In the ML community, it's important to give credit where credit is due, and not just slap our name on someone else's hard work cough. Here is a list of the references and resources I used for various equations, concepts, explanations, examples, and inspiration for visualizations:
Kuhn, Max. "Comparing Different Species of Cross-Validation." Applied Predictive Modeling, Springer, 3 Dec. 2014.
"KL-Divergence" CS412 Fall 2008. Introduction to Data Warehousing and Data Mining, Department of Computer Science, University of Illinois, 2008.
Erhan, Dumitru, et al. "Why does unsupervised pre-training help deep learning?." Journal of Machine Learning Research 11.Feb (2010): 625-660.
Ramamoorthy, Suriyadeepan. "Unfolding RNNs." Scientia Est Potentia, 2017.
Weller, Adrian. "Directed and Undirected Graphical Models." MLSALT4, Cambridge Machine Learning, 2016.
Dawkins, Paul. "Section 2-5 : Probability." Paul's Online Notes, 2018.
Kristiadi, Agustinus. "Deriving Contractive Autoencoder and Implementing It in Keras." Deriving Contractive Autoencoder and Implementing It in Keras - Agustinus Kristiadi's Blog, 2016.
Agarwal, Shivani, and Lyle Ungar. "Point Estimation." CIS520 Machine Learning | Lectures / Point Estimation, University of Pennsylvania, 10 Sept. 2017.
Papernot, Nicolas, et al. "Distillation as a defense to adversarial perturbations against deep neural networks." 2016 IEEE Symposium on Security and Privacy (SP). IEEE, 2016.
Johnson, Melvin, et al. "Google's multilingual neural machine translation system: Enabling zero-shot translation." Transactions of the Association for Computational Linguistics 5 (2017): 339-351.
Fortunato, Meire, Charles Blundell, and Oriol Vinyals. "Bayesian recurrent neural networks." arXiv preprint arXiv:1704.02798 (2017).
Husain, Hisham, Richard Nock, and Robert C. Williamson. "Adversarial Networks and Autoencoders: The Primal-Dual Relationship and Generalization Bounds." arXiv preprint arXiv:1902.00985 (2019).
Yuan, Xiaoyong, et al. "Adversarial examples: Attacks and defenses for deep learning." IEEE transactions on neural networks and learning systems 30.9 (2019): 2805-2824.
Stephen, Oni. "RNN From First Principles(Bayes Can Do What Rnn Can Do)." Medium, Becoming Human: Artificial Intelligence Magazine, 25 Jan. 2019.
Srihari, Sargur. "Unfolding Computational Graphs." CSE676 - Deep Learning, Department of Computer Science and Engineering, University at Buffalo.
Foster, Ian, and Denis B. Gannon. "Manifold Learning and Deep Autoencoders in Science." Cloud Computing For Science and Engineering, 9 Sept. 2018.
LeCunn, Yann. "Sparse Coding for Feature Learning." "CBLL, Research Projects, Computational and Biological Learning Lab, Courant Institute, NYU", New York University.
Evil, Understanding of self-information, CS StackExchange URL (version: 2017-08-25)
"Index." CS 228 - Probabilistic Graphical Models, Stanford University.
"Contractive Autoencoder." DeepAI, 17 May 2019.
"Gradient Clipping." DeepAI, 17 May 2019.
"Approximation Algorithm." Wikipedia, Wikimedia Foundation, 24 Apr. 2019.
"Autoencoder." Wikipedia, Wikimedia Foundation, 17 Apr. 2019.
"Monte Carlo Algorithm." Wikipedia, Wikimedia Foundation, 28 Mar. 2019.
"Categorical Distribution." Wikipedia, Wikimedia Foundation, 8 July 2019.
"Differential Entropy." Wikipedia, Wikimedia Foundation, 2 Apr. 2019.
"Empirical Risk Minimization." Wikipedia, Wikimedia Foundation, 14 Nov. 2019.
"Graphical Model." Wikipedia, Wikimedia Foundation, 17 Apr. 2019.
"Las Vegas Algorithm." Wikipedia, Wikimedia Foundation, 30 Mar. 2019.
"Logistic Function." Wikipedia, Wikimedia Foundation, 30 Apr. 2019.
"Bernoulli vs Binomial vs Multinoulli vs Multinomial Distributions." Geeky Is Awesome, Wordpress, 1 Dec. 2016.
Hassan, Hatem AbdelMowgoud. "WTH Is an Autoencoder?" Hatem Hassan Blog, 3 Oct. 2019.
Grover, Prince. "5 Regression Loss Functions All Machine Learners Should Know." Medium, Heartbeat, 5 Feb. 2019.
Mantri, Nidhi. "Applications of Autoencoders." OpenGenus IQ: Learn Computer Science, 20 July 2019.
Hui, Jonathan. "Deep Learning - Probability & Distribution." Jonathan Hui Blog, Blogspot, 5 Jan. 2017.
Melvile, JL. "Deriving Embedding Gradients." Sneer Documentation, JL Melvile.
Karpathy, Andrej. "The Unreasonable Effectiveness of Recurrent Neural Networks." Andrej Karpathy Blog, 2015.
Brownlee, Jason. "A Gentle Introduction to Backpropagation Through Time." Machine Learning Mastery, 14 Aug. 2019.
Brownlee, Jason. "How to Use Greedy Layer-Wise Pretraining in Deep Learning Neural Networks." Machine Learning Mastery, 10 Sept. 2019.
Brownlee, Jason. "What Is Teacher Forcing for Recurrent Neural Networks?" Machine Learning Mastery, 14 Aug. 2019.
Brownlee, Jason. "A Gentle Introduction to Estimation Statistics for Machine Learning." Machine Learning Mastery, 8 Aug. 2019.
Brownlee, Jason. "A Gentle Introduction to Exploding Gradients in Neural Networks." Machine Learning Mastery, 14 Aug. 2019.
Graham Kemp, Are random variable and deterministic constant independent?, Math StackExchange URL (version: 2016-09-08)
Nate Eldredge, Independence and conditional independence between random variables, Math StackExchange URL (version: 2018-03-13)
Huang, Andre. "Representation Learning (1) - Greedy Layer-Wise Unsupervised Pretraining." Medium, Medium, 20 Feb. 2018.
Sharma, Neeraj. "Understanding Probabilistic Graphical Models Intuitively." Medium, Medium, 15 Nov. 2015.
Zhou, Syoya. "What Happens in Sparse Autencoder." Medium, Medium, 7 Dec. 2018.
"Loss Functions." Loss Functions - ML Glossary Documentation.
Yan, Nancy. "Nancy's Notes." Nancy's Notes, Nancy Yan, 9 Aug. 2019.
Goodfellow, Ian. "Attacking Machine Learning with Adversarial Examples." OpenAI, OpenAI, 7 Mar. 2019.
Bengio, Yoshua, et al. "Greedy layer-wise training of deep networks." Advances in neural information processing systems. 2007.
Polson, Nicholas G., and Vadim Sokolov. "Deep learning: a Bayesian perspective." Bayesian Analysis 12.4 (2017): 1275-1304.
Wu, Yonghui, et al. "Google's neural machine translation system: Bridging the gap between human and machine translation." arXiv preprint arXiv:1609.08144 (2016).
Serengil, Sefik. "Softplus as a Neural Networks Activation Function." Sefik Ilkin Serengil, 2 Feb. 2019.
Amidi, Afshine, and Shervine Amidi. "Recurrent Neural Networks Cheatsheet Star." CS 230 - Recurrent Neural Networks Cheatsheet, Stanford University.
ffriend, What is the difference between convolutional neural networks, restricted Boltzmann machines, and auto-encoders?, Stats StackExchange URL (version: 2014-09-29)
DaemonMaker, What're the differences between PCA and autoencoder?, Stats StackExchange URL (version: 2014-10-16)
Oren Milman, Why do RNNs have a tendency to suffer from vanishing/exploding gradient?, Stats StackExchange URL (version: 2018-10-08)
user139688, Difference between feedback RNN and LSTM/GRU, Stats StackExchange URL (version: 2019-04-10)
Sycorax says Reinstate Monica, What is the origin of the autoencoder neural networks?, Stats StackExchange URL (version: 2018-08-03)
nlml, Loss function for autoencoders, Stats StackExchange URL (version: 2017-08-04)
loco, Is teacher forcing more accurate than using actual model output or just faster?, StackExchange URL (version: 2018-11-01)
Hossein, What is the advantage of using BPTT along with teacher forcing?, Stats StackExchange URL (version: 2017-02-01)
Nuclear Wang, 0-1 Loss Function explanation, Stats StackExchange URL (version: 2017-06-07)
Chris Taylor, Bayesian vs frequentist Interpretations of Probability, Stats StackExchange URL (version: 2015-06-19)
Ankit Goyal, Does the "number of unrollings" of an RNN always have to match the length of the input sequence?, Stats StackExchange URL (version: 2018-01-03)
Xi'an, A Gaussian Mixture Model Is a Universal Approximator of Densities, Stats StackExchange URL (version: 2019-03-01)
Don Walpola, Why is a 0-1 loss function intractable?, Stats StackExchange URL (version: 2018-09-05)
Xi'an, Why is the normal distribution a default choice for a prior over a set of real numbers?, Stats StackExchange URL (version: 2019-01-06)
Wayne, Stacked shallow autoencoders vs. deep autoencoders, Stats StackExchange URL (version: 2019-02-23)
Chan, Fong Chun. "Joint, Marginal, and Conditional Probabilities." Fong Chun Chan's Blog, 2016.
Dertat, Arden. "Applied Deep Learning - Part 3: Autoencoders." Medium, Towards Data Science, 8 Oct. 2017.
Muaz, Urwa. "Autoencoders vs PCA: When to Use Which ?" Medium, Towards Data Science, 25 July 2019.
Ranjan, Chitta. "Build the Right Autoencoder - Tune and Optimize Using PCA Principles. Part I." Medium, Towards Data Science, 23 July 2019.
Hubens, Nathan. "Deep inside: Autoencoders." Medium, Towards Data Science, 10 Apr. 2018.
Monn, Dominic. "Denoising Autoencoders Explained." Medium, Towards Data Science, 18 July 2017.
Phi, Michael. "Illustrated Guide to Recurrent Neural Networks." Medium, Towards Data Science, 1 May 2019.
Cohen, Ori. "PCA vs Autoencoders." Medium, Towards Data Science, 13 May 2018.
Labs, Mate. "Secret Sauce behind the Beauty of Deep Learning: Beginners Guide to Activation Functions." Medium, Towards Data Science, 8 Feb. 2019.
Rodriguez, Jesus. "The Secret Layer Behind Every Successful Deep Learning Model: Representation Learning and Knowledge…" Medium, Towards Data Science, 29 Aug. 2018.
Liang, Yingyu. "Deep Learning Basics Lecture 8: Autoencoder & DBM." COS 495, Princeton University, 2016.
Vincent, Pascal. "A connection between score matching and denoising autoencoders." Neural computation 23.7 (2011): 1661-1674.
Murphy, Kevin P. Machine learning: a probabilistic perspective. MIT press, 2012.
Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.
"Design Patterns for Deep Learning Architectures." Overview [Deep Learning Patterns].
"Estimating the Value of Pi Using Monte Carlo." GeeksforGeeks, 24 Apr. 2019.
Jordan, Jeremy. "Introduction to Autoencoders." Jeremy Jordan, Jeremy Jordan, 19 Mar. 2018.
Baldi, Pierre, and Peter Sadowski. "Learning in the machine: Recirculation is random backpropagation." Neural Networks 108 (2018): 479-494.
Srinivasan, Krishnan. "Guide to Autoencoders." Yale Data Science, Yale University, 29 Oct. 2016.
Cited as:
@article{mcateer2019mldli,
title = "Deep Learning Concepts every practicioner should know",
author = "McAteer, Matthew",
journal = "matthewmcateer.me",
year = "2019",
url = "https://matthewmcateer.me/blog/ml-research-interview-dl-concepts/"
If you notice mistakes and errors in this post, don't hesitate to contact me at [contact at matthewmcateer dot me] and I will be very happy to correct them right away! Alternatily, you can follow me on Twitter and reach out to me there.
See you in the next post 😄
I write about AI, Biotech, and a bunch of other topics. Subscribe to get new posts by email!
Send me only ML posts
Excess COVID-19 Deaths
Have 200K+ Americans really died from COVID?
The Math Behind ML (the important stuff)
Important mathematical prerequisites for getting into Machine Learning, Deep Learning, or any of the other space
Matthew McAteer @MatthewMcAteer0
Biologist-turned-ML-Engineer in San Francisco. I blog about machine learning, biotech, distributed systems, dogs, and more.
At least this isn't a full-screen popup
That'd be more annoying. Anyways, subscribe to my newsletter to get new posts by email! I write about AI, Biotech, and a bunch of other topics. | CommonCrawl |
Calculate determinant with induction
I need to prove the following, with induction to every $1 \leq n$: $$D(a_1,...,a_n) = \left| \begin{array}{ccc} a_1+x& a_2 & a_3 & \cdots & a_n \\ a_1& a_2+x & a_3 & \cdots & a_n \\ a_1& a_2 & a_3+x & \cdots & a_n \\ \vdots & \vdots & \vdots & & \vdots \\ a_1& a_2 & a_3 & \cdots & a_n + x \end{array} \right| = x^n + (a_1 + \cdots + a_n)x^{n-1}$$
I played with it a bit and couldn't find a way to prove it.
This is what I did: I assumed that it is correct for $n$, and tried to solve it for $n+1$
$$D(a_1, \ldots , a_n, a_{n+1}) = \left| \begin{array}{ccc} a_1+x& a_2 & a_3 & \cdots & a_{n+1} \\ a_1& a_2+x & a_3 & \cdots & a_{n+1} \\ a_1& a_2 & a_3+x & \cdots & a_{n+1} \\ \vdots & \vdots & \vdots & & \vdots \\ a_1& a_2 & a_3 & \cdots & a_{n+1} + x \end{array} \right| $$
and I did the following operation on the determinant ($R_{n+1} \to R_{n+1} - R_1$) and got:
$$ \left| \begin{array}{ccc} a_1+x& a_2 & a_3 & \cdots & a_{n+1} \\ a_1& a_2+x & a_3 & \cdots & a_{n+1} \\ a_1& a_2 & a_3+x & \cdots & a_{n+1} \\ \vdots & \vdots & \vdots & & \vdots \\ -x& 0 & \cdots & 0 & x \end{array} \right| $$
And I wasn't sure on how to proceed from here, or even if I'm on the right path.
linear-algebra determinant
Michael Hardy
Dan RevahDan Revah
$\begingroup$ Try doing cofactor expansion along the bottom row now $\endgroup$ – Jon Warneke Nov 28 '15 at 17:54
$\begingroup$ @JonWarneke of course I've tried, just couldn't get to the solution. $\endgroup$ – Dan Revah Nov 28 '15 at 17:56
Developing with respect to the last row, after performing those elementary row operations (that don't change the determinant), you get $$ D(a_1,\dots,a_n,a_{n+1})=\\ xD(a_1,\dots,a_n)+(-1)^{(n+1)+1}(-x)\det\begin{bmatrix} a_2 & a_3 & \dots & a_n & a_{n+1} \\ a_2+x & a_3 & \dots & a_n & a_{n+1} \\ a_2 & a_3+x & \dots & a_n & a_{n+1} \\ \vdots & \vdots & \ddots & \vdots & \vdots\\ a_2 & a_3 & \dots & a_n+x & a_{n+1} \end{bmatrix} $$ Doing $n-1$ row swaps, the determinant we need is \begin{multline} \det\begin{bmatrix} a_2+x & a_3 & \dots & a_n & a_{n+1} \\ a_2 & a_3+x & \dots & a_n & a_{n+1} \\ \vdots & \vdots & \ddots & \vdots & \vdots\\ a_2 & a_3 & \dots & a_n+x & a_{n+1} \\ a_2 & a_3 & \dots & a_n & a_{n+1} \end{bmatrix}=\\ \det\begin{bmatrix} a_2+x & a_3 & \dots & a_n & a_{n+1} \\ a_2 & a_3+x & \dots & a_n & a_{n+1} \\ \vdots & \vdots & \ddots & \vdots & \vdots\\ a_2 & a_3 & \dots & a_n+x & a_{n+1} \\ a_2 & a_3 & \dots & a_n & a_{n+1}+x \end{bmatrix}-\\ \det\begin{bmatrix} a_2+x & a_3 & \dots & a_n & 0 \\ a_2 & a_3+x & \dots & a_n & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots\\ a_2 & a_3 & \dots & a_n+x & 0 \\ a_2 & a_3 & \dots & a_n & -x \end{bmatrix}=\\[6px] D(a_2,\dots,a_{n+1})-xD(a_2,\dots,a_n) \end{multline} Therefore $$ D(a_1,\dots,a_n,a_{n+1})= xD(a_1,\dots,a_n)+ x(D(a_2,\dots,a_{n+1})-xD(a_2,\dots,a_n)) $$ By the induction hypothesis, \begin{multline} xD(a_1,\dots,a_n)+ x(D(a_2,\dots,a_{n+1})-xD(a_2,\dots,a_n))=\\ x(x^n+(a_1+\dots+a_n)x^{n-1})+\\ \qquad x(x^n+(a_2+\dots+a_{n+1})x^{n-1}-x^n-(a_2+\dots+a_n)x^{n-1})=\\ x^{n+1}+(a_1+\dots+a_n+a_{n+1})x^n \end{multline}
Note that you have to check the induction basis for $n=1$ and $n=2$, which is easy.
190k1414 gold badges9090 silver badges215215 bronze badges
First, developing with respect to the last column, your $D_{n+1}$ is of the form $Aa_{n+1}+B$. If you put $a_{n+1}=0$, you get $B=xD_n$. Now divide $D_{n+1}$ by $a_{n+1}$ (divide only the entries of the last column) and let $a_{n+1} \to +\infty$. The last column in this new determinant $T$ has now only $1$ as entries. Now in $T$ subtract to the first column $a_1$ times the last column, and do the same for the other columns. Then you find easily that this determinant $T$ is $x^n$, hence $A=x^n$ and $D_{n+1}=x^na_{n+1}+xD_n$, and we are done.
KelennerKelenner
$\begingroup$ Thank you for your answer! what do you mean by $a_{n+1} \to +\infty$? $\endgroup$ – Dan Revah Nov 28 '15 at 18:08
$\begingroup$ I have supposed that the entries are real ; here $a_{n+1}$ is considered as a real variable, that go to $+\infty$. $\endgroup$ – Kelenner Nov 28 '15 at 18:11
$\begingroup$ sorry but I still don't understand what it means 'plus infinity'. I understood most of your answer, you said to divide the last column by $a_{n+1}$ and after doing it you get '1' to be in all of the column except from the last column in the last row, which is $x/a_{n+1}$ and you did that opearation $a_{n+1} \to +\infty$ ? what is it actually saying? $\endgroup$ – Dan Revah Nov 28 '15 at 18:26
$\begingroup$ I do not understand your problem. You have certainly seen for example in $\mathbb{R}$ we have $ \frac{1}{y} \to 0$ if $y\to +\infty$ ? $\endgroup$ – Kelenner Nov 28 '15 at 18:36
$\begingroup$ Yes, but the limit of $(\frac{x}{a_{n+1}})$ if $a_{n+1}\to \infty$ is $=0$. So after having let $a_{n+1}\to \infty$, the new determinant $T$ has a last column with only $1$ as entries. $\endgroup$ – Kelenner Nov 28 '15 at 18:48
Not the answer you're looking for? Browse other questions tagged linear-algebra determinant or ask your own question.
Determinant of a specific circulant matrix, $A_n$
Calculate determinant of Vandermonde using specified steps.
Determinant of circulant matrix
Find the determinant of $n\times n$ matrix
Matrix's determinant
Is there any easiest way to find the determinant?
Determinants and $2\times 2$ minors
determinant computing
Vandermonde-like determinant with first row $1,-a_1,-(a_1)^2 ,\dots$
Determinant with rows $a_1$ to $a_n$ with $-x$ on the diagonal | CommonCrawl |
Network coding for reliable video distribution in multi-hop device-to-device communications
Lei Wang ORCID: orcid.org/0000-0002-4482-89561,
Yu Liu2,
Jia Xu2,
Jun Yin1,
Lijie Xu1 &
Yuwang Yang3
It is becoming more and more popular to share videos among multiple users. However, sharing video in traditional cellular networks will incur high expenses. Device-to-device (D2D) communication is one of the crucial technologies in the fifth-generation network, and it enables the devices to transmit data directly without the relay of base stations. This paper proposes a network-coding-based video distribution scheme for the D2D communication environment. The proposed scheme applies the network coding technology in the H.264 video transmission, which can protect crucial information of the video. This scheme enables the receivers to decode the original video with a high probability, especially in the networks with interferences. Both the simulation results and the actual experimental results show that using network coding technology in video transmission can improve the quality of the received video. Compared with the traditional scheme, the successful decoding rate of the proposed scheme is increased by \(46\%\) in our experimental settings.
Currently, sharing videos among users is becoming a popular application with the development of the fourth-generation network [1, 2]. More and more people prefer watching videos on their mobile devices. In the traditional cellular networks, even if the sender and the receiver are very close to each other, connections between them are established via the relay of the base station, which will cause transmission delay and increase the burden on the base station. Device-to-device(D2D) communication is a crucial supporting technique in the fifth-generation network [3, 4], and it enables the devices to directly communicate with other nearby devices without the relay of the base station, which could significantly increase the bandwidth efficiency of the entire network. D2D communication has attracted much attention [5, 6], and the standardization organization 3rd-Generation Partnership Project(3GPP) has included D2D communication technology in the development framework of the fifth-generation mobile communication system [7].
However, D2D communication is susceptible to interferences from base stations and other devices working in the same frequency band, which will reduce transmission reliability, especially during the video transmission process [8]. Therefore, it has become an essential goal in the D2D communication environment to provide high-quality and stable video streams to mobile users [9]. As a high-resolution video is usually larger in size, it is necessary to compress the video to improve the efficiency of transmission and storage. H.264 is a typical video compression standard with a high data compression ratio. In H.264 video streams, video frames are of different importance, which means that the loss of crucial information will significantly reduce the video quality, especially in the environment with limited bandwidth and poor signal quality. To minimize the negative effect of interferences caused by the instability of the network environment, a robust and stable video transmission scheme in the D2D communication environment is critical to be designed [10]. In recent years, related researches showed that network coding has great potential in improving transmission reliability [11, 12] and throughput [13, 14] in D2D communication.
Ahlswede et al. initially proposed network coding [15], and they proved that network coding could increase reliability and bandwidth efficiency. The principle of network coding is to re-encode the data at the intermediate nodes of the network. Through the re-encoding operation at intermediate nodes, the overall network could obtain some additional performance gain [16,17,18]. Figures 1 and 2 show the operations at intermediate nodes in the traditional store-and-forward scheme and the network-coding-based scheme, respectively. In these figures, node A needs to send a packet a to both node B and node C. Similarly, node B needs to send a packet b to both node A and node C. In the traditional communication scheme, the intermediate node C needs to store and forward the packets a and b in sequence, so it requires four transmissions in total. In the network coding scheme, node C generates a new packet \((a \oplus b)\) by performing the bitwise logical XOR operation after receiving packets a and b. Then, the new packet \(a \oplus b\) is sent to both A and B in one transmission through the wireless channel. Therefore, three times of transmission are enough. In this example, the network coding scheme requires \(25\%\) fewer transmissions compared to the traditional scheme. Network coding includes linear network coding [19] and nonlinear network coding. Linear network coding includes deterministic linear network coding (DLNC) and random linear network coding (RLNC). DLNC requires the knowledge of global topology, which is challenging to implement in a wireless network, so it is seldom used in practical applications. RLNC is a significant breakthrough in the studies of network coding [20]. In the scheme based on RLNC, the nodes in the network are not required to have the knowledge of global topology at runtime, which implies that RLNC is more suitable for networks in which the topology changes frequently. Therefore, using RLNC is feasible in multi-hop D2D communication networks [21, 22].
Traditional scheme
Network coding scheme
In the traditional wired networks, re-encoding operations at intermediate nodes (such as routers) are not supported, which becomes an obstacle to carry out the advantage of the network coding technique fully. In a multi-hop D2D communication environment, mobile devices can be used as intermediate nodes to perform complicated re-encoding operations and provide direct service to other devices. Therefore, the obstacle in traditional networks no longer exists in D2D communication networks. D2D communication provides an ideal application scenario for network coding technology and takes full advantage of network coding [23]. The combination of the two techniques can improve the overall network performance, and the users can obtain more stable video streams when network coding is applied in video services [6, 24].
The rest of this paper is organized as follows: In Sect. 2, some closely related studies are introduced. In Sect. 3, a network-coding-based video distribution scheme is proposed. In Sect. 4, some results of mathematical analysis are provided for both the proposed scheme and existing schemes. In Sect. 5, the performance of the proposed scheme is evaluated in both the simulated network and the actual experimental testbed. Finally, the conclusion is drawn in Sect. 6.
At present, researchers have carried out many studies on video transmission in different network environments [25,26,27,28,29]. Nguyen et al. [25] proposed a network coding framework for efficient video streaming transmission in peer-to-peer (P2P) networks. Their framework introduced multiple servers as peers for video transmission. The technology of layered network coding is applied to scalable video streams to deal with bandwidth fluctuations on the Internet. The simulation results showed that network coding technology could save significant bandwidth overhead compared with the traditional schemes. Although D2D communication is essentially a kind of P2P communication, it has distinctive characteristics. Therefore, we cannot directly use the technology designed for P2P networks in our work, and we need to design optimal schemes in multi-hop D2D networks.
To meet consumers' demand for high-quality video in wireless networks in crowded spaces and reduce the transmission overhead, Ferreira et al. [26] proposed a real-time streaming media solution based on wireless multicast. It utilizes partial feedback real-time network coding to generate a repaired package that is maximally useful for all receivers based on feedback messages. The scheme can achieve a balance between the timeliness of the packets and the coding overhead. In their scheme, all the data in the video stream has equal coding importance. While in an H.264 video stream, the frames are of different importance. If this property is taken into account during the design of the encoding scheme, the users will be able to obtain better service.
Rhaiem et al. [27] proposed a transmission scheme for the hierarchical transmission of data packets in H.264, which can improve the quality of video streams in mobile ad hoc networks. An Extended Multicast Scalable Video Transmission using Classification Scheduling Algorithms and Network Coding (EMSCNC) over MANET is proposed based on Multicast Scalable Video Transmission (MSVT) [30]. In EMSCNC, the source nodes group the packets and then perform the encoding operation. The intermediate nodes decode the encoded packets generated by the source nodes and then re-encode them according to the hierarchical design of H.264 before forwarding. In this paper, the scheme is proposed for D2D communications to reduce the negative impact caused by interferences, so we addressed the network-coding-based video transmission from a probabilistic analysis perspective. Moreover, we evaluated the feasibility and performance of the proposed scheme in an actual experimental testbed.
Wang et al. [28, 29] studied the application of network coding in wireless sensor networks and WiFi networks. In the literature [28], the authors applied network coding to improve transmission reliability in wireless sensor networks. Although this paper is about video transmission in the D2D communication environment, it is still applicable to enhance the reliability of the transmission process by using network coding. The authors designed and implemented a reliable network-coding-based video conferencing system (NCVCS) [29] to improve the user experience. In NCVCS, an encoding server is introduced as the intermediate node to perform the re-encoding operation, which can improve the utilization rate of network bandwidth. Moreover, NCVCS adopts a unified coding scheme during data frame transmission without providing additional protection to keyframes. In this paper, mobile devices are used as intermediate nodes, and we provide a network coding scheme based on the priority of different frames. Moreover, NCVCS cannot work without the access point, while all the devices in the testbed in this research could directly communicate with each other without the relay of the access point. Although both NCVCS and the proposed scheme in this paper focus on the implementation of network-coding-based transmission, the network architectures of these two schemes are different.
Our contributions can be summarized as follows: We proposed a network-coding-based video distribution scheme in the D2D communication environment, which can protect critical information; we established a probability-based mathematical analysis model for transmission in the D2D environment; we implemented the proposed scheme and evaluated the performance with actual experiments.
Network model
In the traditional unicast environment, when users working in the same cell need to obtain video information from other devices, they need to establish multiple connections with all other devices. When the problem occurs in the D2D environment that introduces intermediate nodes, the users need to establish a D2D connection with a particular device. As shown in Fig. 3, the mobile devices \(S_1\), \(S_2\), and \(S_3\) act as the source nodes, and the mobile device \(S_4\) acts as an intermediate node. In our network model, \(S_1\), \(S_2\), and \(S_3\) send video streams to \(S_4\), respectively. After receiving the video from other devices, \(S_4\) performs the re-encoding operation. The re-encoding operations protect the critical information in the video streams, and then, the generated re-encoded data are sent back to the network. Each device can decode the video stream as long as sufficient encoded data are received.
In the conventional data transmission, the intermediate node only forwards the packets after receiving them from the source node. The network coding technique allows intermediate nodes to perform additional coding operations on the received packets before forwarding. Encoding, re-encoding, and decoding operations of network coding are linear operations performed over a Galois field (GF) with a size of \(2^q\) where q is a natural number. According to previous studies [31, 32], the Galois field GF(256) can provide a good balance between computational efficiency and resource overhead. Therefore, all the coding operations in this paper are based on GF(256).
Equation (1) shows a typical linear coding operation [28, 29]. The coding operation is conducted to obtain a linearly independent combination of original data blocks, as shown in Eq. (1).
$$\begin{aligned} \left( \begin{array}{c} y_1 \\ y_2\\ \vdots \\ y_n \end{array} \right) = \left( \begin{array}{cccc} c_{11} &{}c_{12}&{}\cdots &{} c_{1k}\\ c_{21}&{}c_{22}&{}\cdots &{}c_{2k} \\ \vdots &{}\vdots &{}\ddots &{}\vdots \\ c_{n1}&{}c_{n2}&{}\cdots &{} c_{nk}\\ \end{array} \right) \left( \begin{array}{c} b_1 \\ b_2 \\ \vdots \\ b_k \end{array} \right) \end{aligned}$$
In Eq. (1), k stands for the total number of original blocks, n stands for the number of encoded blocks, \(c_{ij}\) refers to a coefficient randomly selected from GF(256), and \(b_i\) refers to the \(i^{th}\) original block. The symbol \(y_i(i\in [1,n])\) stands for an encoded block.
In most RLNC-based schemes [33, 34], the Gauss–Jordan elimination method [35] with excellent decoding efficiency is adopted. The Gauss–Jordan elimination method is a classical decoding method for network coding schemes. Compare to other decoding methods such as matrix inversion method, the Gauss–Jordan elimination method has higher decoding efficiency. In this sense, it could reduce the decoding delay. This method has the following advantages: When the packet is linearly dependent of the existing packets, it does not contain meaningful information. In this case, an all-zero row will appear after matrix transformation during the decoding operation, and then, the linearly dependent packet will be removed. Besides, the receiving node can start decoding after receiving part of the data packet, instead of waiting for k linearly independent encoded packets to arrive, which improves the decoding efficiency and reduces the waiting time of the receiving devices.
Reliability analysis
In the video codec system H.264/AVC(Advanced Video Coding), video frames have different types, such as I-frame, P-frame, and B-frame. The I-frame is intra-coded, which has crucial information about other frames [36]. In a video sequence, the first data frame is always an I-frame, followed by a series of P-frames or B-frames [37]. P-frame and B-frame are generated based on temporal and spatial correlations. However, this structure also makes the video sequence more susceptible to error propagation caused by inter-frame dependency, and corruption of the previous frames may result in incorrect propagation of consecutive frames in the Group of Pictures (GOP) [38]. If there is an error in the I-frame, only a small amount of related video information can be recovered in the GOP, while a broad range of frames is lost. Moreover, since the data length of the I-frame is significantly larger than that of other frames, I-frame is more likely to be lost during transmission. Therefore, it is necessary to protect the I-frame during the transmission of the video stream.
Assume that there are multiple GOPs in a video, and the \(x^{th}\) GOP is represented by G(x); an I-frame belonging to G(x) is represented by I(x); P(x) represents P-frame of the \(x^{th}\) GOP; \(G_s(x)\) refers to the successful delivery rate (SDR) of all frames in \(x^{th}\) group; \(I_s(x)\) and \(P_s(x)\) are the SDRs of I-frame and P-frame, respectively.
Some studies had analyzed the decoding rate of RLNC based on SDR [39, 40]. Therefore, we use the SDR as an indicator to analyze the performance of the proposed scheme. In the transmission process, \(I_l\) refers to the size of the I-frame, \(P_l\) is the size of the P-frame, and the bit error ratio (BER) is \(p_b\). The successful transmission probabilities of I-frame and P-frame are \(I_t\) and \(P_t\), respectively. Although the transmission process of the I-frame and P-frame is independent of each other, their decoding processes are correlated. The SDR of I-frame at the decoding device equals the probability of successful transmission of the I-frame, so \(I_s\) equals \(I_t\). The decoding operation of the P-frame requires the information of the I-frame, so the SDR of P-frame at the decoding device can be expressed by Eq. (2).
$$\begin{aligned} P_s=I_s \times P_t =(1-p_b)^{I_l}\times (1-p_b)^{P_l}=(1-p_b)^{\sum _{i=1}^n I_i(x)+P_i(x)} \end{aligned}$$
In the network-coding-based scheme, \(I_l^{'}\) stands for the size of the I-frame after the re-encoding operation, and \(P_l^{'}\) is the size of the P-frame after the re-encoding operation. The successful transmission probabilities of encoded I-frame and encoded P-frame are \(I_t^{'}\) and \(P_t^{'}\), respectively. The encoded I-frame is divided into n packets, the size of each packet is \(\frac{I_l^{'}}{n}\), and the successful receiving rate of each packet is p.
$$\begin{aligned} p=(1-p_b)^{\frac{I_l^{'}}{n}} \end{aligned}$$
The intermediate node will perform RLNC with \(n_1\) different data packets before transmission. The receiving devices need to receive \(k_1\) linearly independent data packets (\(k_1<n_1\)) to decode and obtain the original data packet. The probability of successfully decoding the I-frame at the receiving devices after encoding is calculated as follows.
$$\begin{aligned} I_s^{'}= & {} \sum _{k_1}^{n_1}\left( \begin{array}{c} n_1 \\ k_1\\ \end{array} \right) p^{k_1}(1-p)^{n_1-k_1} \nonumber \\= & {} \sum _{k_1}^{n_1} \left( \begin{array}{c} n_1 \\ k_1\\ \end{array} \right) (1-p_b)^{\frac{k_1I_l^{'}}{n_1} } \left( 1-(1-p_b)^{\frac{k_1I_l^{'}}{n_1}}\right) ^{n_1-k_1} \end{aligned}$$
When the packet coding is performed on a P-frame, the method based on RLNC is also adopted. P-frame is divided into \(n_2\) packets, the size of each packet is \(\frac{P_l^{'}}{n}\), and the rate of successful receiving each packet is \(p'\). The initial data can be successfully decoded as long as \(k_2\) packets are received. After encoding, the successful receiving rate \(P_t^{'}\) of P-frame transmission is shown in Eq. (5).
$$\begin{aligned} P_t^{'}= & {} \sum _{k_2}^{n_2}\left( \begin{array}{c} n_2 \\ k_2\\ \end{array} \right) {p'}^{k_2}(1-p')^{n_2-k_2}\nonumber \\= & {} \sum _{k_2}^{n_2}\left( \begin{array}{c} n_2 \\ k_2\\ \end{array}\right) \left( 1-p_b\right) ^{\frac{k_2P_l^{'}}{n_2}}\left( 1-\left( 1-p_b\right) ^{\frac{k_2P_l^{'}}{n_2}}\right) ^{n_2-k_2} \end{aligned}$$
The successful decoding rate \(P_s^{'}\) of the encoded P-frame is obtained with Eq. (6).
$$\begin{aligned} P_s^{'} = I_s^{'}\times P_t^{'} \end{aligned}$$
The constraints are set as follows to provide priority protection to I-frame during the encoding process:
$$\begin{aligned} n_2-k_2\le n_1-k_1, n_2\le n_1 \end{aligned}$$
The probability of successfully decoding the GOP at the decoding node after the network coding operation is finally obtained, which is shown in equation.
$$\begin{aligned} G_s^{'} =I_s^{'}\times P_s^{'} \end{aligned}$$
Algorithms in the proposed scheme
In the process of network transmission, to deal with video frame loss and transmission errors, video frames need to be encoded redundantly. Part of the original data can be recovered by processing the redundant data. However, data redundancy is limited. Repeated transmissions will reduce the utilization efficiency of the transmission bandwidth. Therefore, how to obtain optimal transmission with limited redundancy is a problem that we need to address.
Since the length of each frame in the video stream is different, it is necessary to design an appropriate transmission method to maximize the utilization of transmission resources. We designed a dynamic adaptive algorithm to transmit video and improve data transmission efficiency.
In this algorithm, we have chosen an appropriate value as the packet size for one transmission. Therefore, the number of slices changes dynamically with the change of the size of the video frames. We add the slice information and encoding coefficients to the header of each encoded frame, which will be used to assist the decoding operation at receiving devices. RLNC is essentially a linear coding scheme. Therefore, the encoding and decoding operations are to obtain the product of two matrices. The time complexity of multiplying two matrices is \(O(n^3)\), which is a polynomial-time complexity and is feasible to implement in practical applications.
The advantage of network coding comes from the re-encoding operation, which will increase independence among data packets during transmission. However, the re-encoding operation will affect the recovery of video frames at the receiving device. In the conventional replication-based transmission scheme, after the device receives the frame information, it can immediately recover the original video frame.
In the proposed scheme, the packets generated by RLNC are linearly mixed. In the theory of linear network coding, the original data are encoded into \(n(n \ge k)\) linearly independent packets in which any k out of n encoded packets are sufficient to recover the original data. Therefore, only when at least k linearly independent packets are successfully received, can the receivers decode the original data. Therefore, waiting time is necessary for the decoding operation. In all the linearly coding schemes, the waiting time cannot be avoided. In our scheme, during the transmission of a frame of video, the device can fully recover the original data through the decoding operation after receiving k linearly independent packets and does not have to wait for the subsequent data packets of the current video frame. Besides, to ensure the real-time performance of the video, it is not necessary to keep waiting for the old video frame when the new frame arrives.
When a mobile device works as an intermediate node, it overtakes the re-encoding operation, which causes extra energy overhead of batteries. Therefore, the intermediate node will run out of energy first, and then, it will be eliminated from the network. To increase the network lifetime for all the mobile devices, we need to design a strategy to periodically select the devices as intermediate nodes so that the energy overhead could be balanced. The strategy needs to take account of at least two factors, namely location and remaining power. Moreover, the strategy needs to assign weights for these factors.
In this algorithm, each node \(D_i\) broadcasts its status packet \(PS_i\) every ten minutes, which contains its remaining power and location coordinates. When any other node \(D_j\) receives \(PS_i\), it extracts the location and remaining power of node \(D_i\) and then calculates the distance from node \(D_i\) to itself. The priority value (PV) is obtained with equation.
$$\begin{aligned} PV = w_d \times (100-DM \times 20)+w_b \times E \end{aligned}$$
In our scheme, the distance and remaining power are assigned different weights, namely \(w_d\) and \(w_b\). The distance metric DM is divided into five levels, and the remaining power is quantified as E.
In this section, we verify the reliability of the proposed solution through mathematical computation. The influence of video frame length and packet loss rate on the SDR of I frame, P frame, and GOP in the scheme is analyzed, and we compared the results of the proposed scheme with the replication-based transmission scheme [29] and the transmission scheme based on instantly decodable network coding (IDNC) [41, 42].
From Fig. 4, the decoding rates of all schemes are high when the packet loss rate is low. The SDR gradually decreases as the packet loss rate increases, which is consistent with the analysis of Eq. (4). Compared with IDNC-based and replication-based solutions, the RLNC-based scheme's descent rate is more stable. Moreover, in an extreme network environment with a high packet loss rate, the RLNC-based transmission scheme is more stable.
The impact of frame length on the decoding rate of the I-frame is shown in Fig. 5. As the length of keyframes increases, the SDR of I-frames decreases gradually, which is the reason why I-frames are more likely to be lost. According to Eq. (4), when the video frame to be transmitted is large, the RLNC-based scheme can provide a higher frame recovery rate than other schemes.
The impact of the packet loss rate on the SDR of P-frame is shown in Fig. 6. In the figure, the packet loss rate is inversely proportional to SDR. According to our previous analysis of Eqs. (5) and (6), the information contained in keyframes has a significant impact on the decoding of P-frame. Figure 7 shows the impact of keyframe length on the SDR of P-frame. As the length of video frames increases, it is more likely to be lost in the transmission process, and the decoding rate of P frames will be affected accordingly. Figures 8 and 9 show the impact of packet loss rate and keyframe length on the SDR of the whole GOP, respectively, according to Eq. (8).
The impact of loss rate on the SDR of I-frame
The impact of length on the SDR of I-frame
Comparison of the SDR of P-frame in different schemes
The impact of keyframe on the SDR of P-frame
Comparison of the SDR of GOP in different schemes
The impact of keyframe on the SDR of GOP
The experimental scene
In this section, we implement the proposed scheme in the simulation environment. In the simulation, there are 20 nodes. Moreover, to evaluate the feasibility of the proposed scheme, we implemented it in a real-world testbed consisting of 6 mobile devices.
Setting of simulation
The simulation tool we used is OMNeT++5.4. In our simulation scenario, nodes were randomly deployed in a \(1000\,\mathrm{m}\times 800\,\mathrm{m}\) area. The transmission radius of nodes was 800 m. The loss rates of different links were independent of each other. The packet loss rate can be calculated by the BER. We assume that the BER is \(p_b^{'}\), and the loss rate of a packet \(p_k\) with transmission data size \(l_s\) is
$$\begin{aligned} p_k=1-(1-p_b^{'})^{l_s}. \end{aligned}$$
Each simulation experiment lasted for 120 seconds. Unless otherwise stated, we used the parameters shown in Table 1.
Table 1 Parameters
In the simulation, we used the default physical layer and transport layer protocols provided by OMNeT++, and we modified the application layer protocol. We designed a new application module to realize the encoding and decoding functions of RLNC. First, the device sends its data to the device used as an intermediate node. After all data from the participating devices are received, the intermediate device encodes the received data and then broadcasts it back to other devices. Figure 10 shows the experimental scene.
Illustrative results
In the simulation environment, we simulated the data transmission process and analyzed the impact of BER, the number of devices participating in the transmission, and the data redundancy on the SDR. We also compared the performance of our scheme with that of the traditional replication-based scheme and IDNC-based scheme. The experimental results were obtained by averaging ten experimental results in order to reduce the influence of unforeseen factors.
Figure 11 shows the relation between BER and SDR. In this experiment, the generation size k was set at 4, and n was set at 6. According to Fig. 11, as BER gradually increases, SDR of all three schemes decreases. However, the SDR of the RLNC-based scheme is still higher than that of the other two schemes, because some packets are allowed to be lost after the re-encoding operation. RLNC is not sensitive to changes in the packet loss rate, and it has higher stability than the other two schemes. Therefore, it is more suitable for transmission in the networks with extreme channel conditions.
The relation between the number of devices involved in transmission and SDR is shown in Fig. 12. In this simulation, k was set at 4, n was set at 6, and p was set at 0.1. According to Fig. 12, in the RLNC-based scheme, after the transmission, the probability of successfully recovering the original data at the receiving devices is the highest, while that of the traditional replication scheme is the lowest. When there are few devices, the results fluctuate greatly. However, as the number of devices increases, the trend of three curves is gradually stabilized. The decoding rate of the RLNC-based scheme is above 0.95, the successful decoding rate of IDNC is above 0.85, and the traditional transmission scheme only has a decoding rate of around 0.65. The experiment results are consistent with the previous theoretical analysis. Therefore, the performance of the scheme based on RLNC is significantly better than that of the other two schemes.
We also analyzed the impact of the relation between n and k on the performance of RLNC. The experiment was carried out in a network with 41 devices, and the packet loss rate was set at 0.15. The experiments were carried out with the simulation time of 25s, 50s, 75s, 100s, and 125s, respectively. Then, the experiment results were obtained through averaging the experiment results in different simulation times. The redundancy rate was set at \(50\%\). From Fig. 13, as k increases, only the RLNC-based coding scheme maintains a high decoding rate, while the decoding rate of the other two schemes gradually declines, which is because RLNC increases the independence among packets, and it can successfully decode as long as an adequate number of packets are received. In the other two schemes, the probability of receiving duplicated packets or linearly dependent packets is higher than that in the RLNC scheme.
Testbed settings
The model of the mobile device we used for testing is MI 4LTE. The operating system is Android 6.0.1. The frequency of the CPU processor is 2.45 GHz, and the running memory is 2GB. The media access control (MAC) layer protocol we used is IEEE 802.11g. Because the 5G network has not been widely deployed, and there is not any commercial device supporting D2D communications, we used the technology of WIFI direct to implement device-to-device direct connections. The links between different devices are independent of each other. Figures 14 and 15 show the real experiment scenes.
The relation between packet loss rate and SDR of the device
The relation between the number of devices and the SDR of devices
The relation between the redundancy rate and the SDR of devices
The experimental scenario A
The experimental scenario B
The relation between transmission distance and frame recovery rate
The relation between redundant packets and frame recovery rate
PSNR in different schemes
Coding latency
Performance of intermediate node selection strategies
Analysis of experimental results
We evaluate the performance of our scheme first and compare it with the performance of traditional schemes based on replication and IDNC. Then, we use the device's successful recovery rate of the video frames as an evaluation criterion. We also evaluate the coding latency of the proposed scheme on different mobile devices. According to Fig. 16, as the transmission distance increases, the SDR of the video frames decreases in all three schemes. The video frame recovery rate in the RLNC-based scheme is higher than that in the other two schemes, which is consistent with our theoretical analysis and simulation results.
The relation between the number of redundant packets and the recovery rate of frames is shown in Fig. 17. Compared with the other two schemes, the SDR of the RLNC-based scheme is higher. With the increase in redundant data packets, the decoding efficiency is gradually improved.
The peak signal-to-noise ratio (PSNR) is often used to measure the quality of pictures in the video. To evaluate the impact of the proposed scheme on video quality, we randomly selected an H.264 video stream for transmission and compared the video frames obtained after transmission with the original video stream. At the same time, we also conducted experiments with the replication-based and IDNC-based schemes. Figure 18 shows that the video obtained with the RLNC-based scheme has the best quality. In Fig. 18, when the value of PSNR is zero, it means that the corresponding frame is missing; when the value of PSNR is 100, it means the device has completely recovered the corresponding frames.
We know that advantages always go with disadvantages. It is very difficult to have a scheme that can increase performance in one area without sacrificing another. Therefore, it is inevitable to cause coding delay after using network coding. Figures 16 and 17 show that we could obtain better performance after using network coding. Then, we need to discuss whether the coding delay makes a negative impact on user experience. According to the previous section, the time complexity of decoding operation in the network-coding-based scheme is \(O(n^3)\), which is obtained from a theoretical perspective. In this section, we study the computational overhead from a practical perspective. We analyzed the impact of generation size k on coding latency and evaluated the coding efficiency of the proposed scheme on different mobile devices, which is shown in Fig. 19. In Fig. 19, each value is obtained by averaging ten experimental results. The size of the video we used was 1MB. From Fig. 19, with the increase in the value of k, the throughput of the encoding in the device gradually decreases. The complexity of the coding increases as the coding dimension becomes higher. According to Fig. 19, the hardware performance of mobile devices determines the coding latency. For example, the bandwidth overhead required to receive a video (1-minute duration, 1920-by-1080 resolution) is about 10MB. According to Fig. 19, the decoding rate after using network coding is far greater than the required receiving rate. Therefore, the coding latency of network coding does not have an apparent adverse effect on the user experience.
In Algorithm 3, we designed a selection strategy for intermediate nodes. The strategy needs to be evaluated in a network with many nodes. However, in our testbed, there are only six nodes. Therefore, we evaluated the performance of the proposed strategy in the simulated network, which is shown in Fig. 10. Figure 20 shows the relation between running time and the number of working nodes. After using the strategy without intermediate node selection, there is only one intermediate node, and it is assigned manually. After 40 minutes, the intermediate node runs out of energy. Because there is no subsequent node working as an intermediate node, all the nodes in the network stop transmission. In the plain node selection strategy, there will be a new intermediate node after the previous intermediate node exhausts its energy. After 40 minutes, the intermediate nodes gradually leave the network. After using the proposed strategy, the energy overhead of intermediate nodes is balanced so that the video distribution could last for 55 minutes.
In order to improve the quality of video transmission in a multi-hop D2D communication environment, we propose a network-coding-based video distribution scheme. This scheme can provide additional protection to the critical information of video, which can improve the reliability during transmission. According to the experimental results, our scheme has higher stability and better video quality than the other two traditional schemes. Moreover, through the practical evaluation in our testbed, we observed that the coding delay introduced by network coding does not make a negative impact on the user experience during video playback at the receiving nodes.
Availability of data and material
D2D:
Device-to-device
3GPP:
3rd-generation partnership project
RLNC:
Random linear network coding
P2P:
EMSCNC:
Extended Multicast Scalable Video Transmission using Classification Scheduling Algorithms and Network Coding
MSVT:
Multicast scalable video transmission
NCVCS:
Network coding-based video conferencing system
GF:
Galois field
GOP:
Group of pictures
AVC:
Advanced video coding
SDR:
Successful decoding rate
IDNC:
Instantly decodable network coding
BER:
PSNR:
Peak signal-to-noise ratio
Mega byte
L. Zhou, R.Q. Hu, Y. Qian, H. Chen, Energy-spectrum efficiency tradeoff for video streaming over mobile ad hoc networks. IEEE J. Sel. Areas Commun. 31(5), 981–991 (2013)
C. Concolato, J.F. Le, F. Denoual, F. Mazé, E. Nassor, N. Ouedraogo, J. Taquet, Adaptive streaming of HEVC tiled videos using MPEG-DASH. IEEE Trans. Circuits Syst. Video Technol. 28(8), 1981–1992 (2018)
X. Shen, Device-to-device communication in 5g cellular networks. IEEE Netw. 29(2), 2–3 (2015)
N. Lee, X. Lin, J.G. Andrews, R.W. Heath, Power control for D2d underlaid cellular networks: modeling, algorithms, and analysis. IEEE J. Sel. Areas Commun. 33(1), 1–13 (2015)
S.F. Hasan, 5g communication technology. In: Hasan SF (ed) Emerging trends in communication networks. Springer briefs in electrical and computer engineering, pp. 59–69. Springer, Cham (2014)
N. Vo, T.Q. Duong, H.D. Tuan, A. Kortun, Optimal video streaming in dense 5g networks with D2d communications. IEEE Access 6, 209–223 (2018)
K. Doppler, M. Rinne, C. Wijting, C.B. Ribeiro, K. Hugl, Device-to-device communication as an underlay to LTE-advanced networks. IEEE Commun. Mag. 47(12), (2009)
Y. Yan, B. Zhang, C. Li, Network coding aided collaborative real-time scalable video transmission in D2d communications. IEEE Trans Veh Technol 67(7), 6203–6217 (2018)
P. Ostovari, J. Wu, Robust wireless transmission of scalable coded videos using two-dimensional network coding. Comput. Netw. 95, 115–126 (2016)
L. Zhou, Mobile device-to-device video distribution: theory and application. ACM Trans. Multimedia Comput. Commun. Appl. 12(3), 38–13823 (2016)
P. Pahlevani, M. Hundebøll, M.V. Pedersen, D. Lucani, H. Charaf, F.H. PFitzek, H. Bagheri, M. Katz, Novel concepts for device-to-device communication using network coding. IEEE Commun. Mag. 52(4), 32–39 (2014)
J. Yin, Y. Yang, L. Wang, X. Yan, A reliable data transmission scheme based on compressed sensing and network coding for multi-hop-relay wireless sensor networks. Comput. Electr. Eng. 56, 366–384 (2016)
E. Datsika, A. Antonopoulos, N. Zorba, C. Verikoukis, Cross-network performance analysis of network coding aided cooperative outband D2d communications. IEEE Trans. Wireless Commun. 16(5), 3176–3188 (2017)
Y. Wu, W. Liu, S. Wang, W. Guo, X. Chu, Network coding in device-to-device (D2d) communications underlaying cellular networks. In: 2015 IEEE international conference on communications (ICC), pp. 2072–2077 (2015)
R. Ahlswede, C. Ning, S.R. Li, R.W. Yeung, Network information flow. IEEE Trans. Inf. Theory 46(4), 1204–1216 (2000)
MathSciNet Article Google Scholar
Y. Yang, C. Zhong, Y. Sun, J. Yang, Network coding based reliable disjoint and braided multipath routing for sensor networks. J. Netw. Comput. Appl. 33(4), 422–432 (2010)
M. Ploumidis, N. Pappas, V.A. Siris, A. Traganitis, On the performance of network coding and forwarding schemes with different degrees of redundancy for wireless mesh networks. Comput. Commun. 72, 49–62 (2015)
M. Kim, K. Park, W.W. Ro, Benefits of using parallelized non-progressive network coding. J. Netw. Comput. Appl. 36(1), 293–305 (2013)
S.R. Li, C. Ning, R.W. Yeung, On theory of linear network coding. In: Proceedings. International Symposium on Information Theory, 2005. ISIT 2005., pp. 273–277 (2005)
T. Ho, M. Médard, R. Koetter, D.R. Karger, M. Effros, J. Shi, B. Leong, A random linear network coding approach to multicast. IEEE Trans. Inf. Theory 52, 4413–4430 (2006)
Z. Lin, Y. Wang, Y. Lin, L. Wu, Z. Chen, Analysis and optimization of rlnc-based cache placement in 5g d2d networks. IEEE Access 6, 65179–65188 (2018)
J. Huang, S. Huang, C. Xing, When d2d meets network coding: a tutorial case study. In: Proceedings of the International Conference on Research in Adaptive and Convergent Systems, pp. 146–151 (2017). ACM
J. Connelly, K. Zeger, Capacity and achievable rate regions for linear network coding over ring alphabets. IEEE Trans. Inf. Theory 65(1), 220–234 (2019)
M.S. Karim, S. Sorour, P. Sadeghi, Network coding for video distortion reduction in device-to-device communications. IEEE Trans. Veh. Technol. 66(6), 4898–4913 (2017)
K. Nguyen, T. Nguyen, S.C. Cheung, Video streaming with network coding. J. Signal Process. Syst. 59(3), 319–333 (2010)
D. Ferreira, R.A. Costa, J. Barros, Real-time network coding for live streaming in hyper-dense WiFi spaces. IEEE J. Sel. Areas Commun. 32(4), 773–781 (2014)
O.B. Rhaiem, F.C. Lamia, W. Ajib, Network coding-based approach for efficient video streaming over MANET. Comput. Netw. 103, 84–100 (2016)
L. Wang, Y. Yang, W. Zhao, Network coding-based multipath routing for energy efficiency in wireless sensor networks. EURASIP J. Wireless Commun. Netw. 2012(1), 115 (2012)
L. Wang, Z. Yang, L. Xu, Y. Yang, NCVCS: network-coding-based video conference system for mobile devices in multicast networks. Ad Hoc Netw. 45, 13–21 (2016)
O.B. Rhaiem, L.C. Fourati, W. Ajib, QoS improvement for video streaming over MANET using network-coding. In: 2015 IEEE 82nd vehicular technology conference (VTC2015-Fall), pp. 1–5 (2015)
P. Kitsos, G. Theodoridis, O. Koufopavlou, An efficient reconfigurable multiplier architecture for Galois field GF(2m). Microelectron. J. 34(10), 975–980 (2003)
T. Lehnigk-Emden, N. Wehn, Complexity evaluation of non-binary Galois field LDPC code decoders. In: 2010 6th International Symposium on Turbo Codes Iterative Information Processing, pp. 53–57 (2010)
S. Park, D.H. Cho, Random linear network coding based on non-orthogonal multiple access in wireless networks. IEEE Commun. Lett. 19(7), 1273–1276 (2015)
P. Vingelmann, M. Pedersen, F. Fitzek, J. Heide, Multimedia distribution using network coding on the iphone platform. In: Proceedings of the 2010 ACM Multimedia Workshop on Mobile Cloud Media Computing, pp. 3–6 (2010). ACM
P.S. Stanimirović, M.D. Petković, Gauss-Jordan elimination method for computing outer inverses. Appl. Math. Comput. 219(9), 4667–4679 (2013)
MathSciNet MATH Google Scholar
N. Bahri, N. Belhadj, M.A.B. Ayed, N. Masmoudi, T. Grandpierre, M. Akil, Real-time H264/AVC high definition video encoder on a multicore DSP TMS320c6678. In: International Conference on Computer Vision and Image Analysis Applications, pp. 1–6 (2015)
D.C. Nguyen, T.S. Nguyen, C.C. Chang, H.S. Hsueh, F.R. Hsu, High embedding capacity data hiding algorithm for H.264/AVC video sequences without intraframe distortion drift (2018)
I.U. Khan, M.A. Ansari, S.S. Hasan, K. Khan, Evaluation and analysis of rate control methods for H. 264/AVC and MPEG-4 video codec. Int. J. Electr. Comput. Eng. (2088-8708) 8(2) (2018)
C.F. Chiasserini, E. Viterbo, C. Casetti, Decoding probability in random linear network coding with packet losses. IEEE Commun. Lett. 17(11), 1–4 (2013)
R. Khan, G.K. Kurt, I. Altunba, Decoding failure probability of random network coding systems in fading channels. In: 2015 23nd Signal Processing and Communications Applications Conference (SIU), pp. 2050–2053 (2015)
S. Katti, H. Rahul, W. Hu, D. Katabi, M. Médard, J. Crowcroft, XORs in the air: practical wireless network coding. IEEE/ACM Trans. Netw. 16(3), 497–510 (2008)
A. Douik, S. Sorour, T.Y. Al-Naffouri, M.S. Alouini, Instantly decodable network coding: from centralized to device-to-device communications. IEEE Commun. Surv. Tutorials 19(2), 1201–1224 (2017)
The authors gratefully acknowledge the helpful comments and suggestions of the reviewers, which have improved the presentation.
This work was supported by China Postdoctoral Science Foundation (2019M651921).
Jiangsu Key Laboratory of Big Data Security and Intelligent Processing of NUPT, 9, Wenyuan Road, Nanjing, China
Lei Wang, Jun Yin & Lijie Xu
School of Computer, Nanjing University of Posts and Telecommunications, 9, Wenyuan Road, Nanjing, 210023, China
Yu Liu & Jia Xu
School of Computer Science and Engineering, Nanjing University of Science and Technology, 200, Xiaolingwei Road, Nanjing, 210094, China
Yuwang Yang
Yu Liu
Jia Xu
Jun Yin
Lijie Xu
The authors have contributed jointly to the manuscript. All authors have read and approved the final manuscript.
Correspondence to Lei Wang.
Wang, L., Liu, Y., Xu, J. et al. Network coding for reliable video distribution in multi-hop device-to-device communications. J Wireless Com Network 2020, 253 (2020). https://doi.org/10.1186/s13638-020-01869-0
Network coding | CommonCrawl |
At equilibrium are the rates of the forward and backward reactions equal to zero?
At equilibrium, the concentrations do not change with time. So, is it true that the rates of the chemical reactions are zero at equilibrium? Wikipedia says that they are not zero. Why is this?
equilibrium kinetics
jerepierre
Krishna KannaKrishna Kanna
$\begingroup$ Who say these two r zero?? $\endgroup$
– Vidyanshu Mishra
In equilibrium, the forward and backward rates are equal to each other. The net is zero, but the individual rates are not zero.
Consider something as simple as water — we know that the water molecule can dissociate in to an $\ce{OH-}$ and $\ce{H+}$, and the pH is a direct measure of the amount of $\ce{H+}$. Once equilibrium is achieved, it isn't like $\ce{H2O}$ stops splitting, or $\ce{H+}$ and $\ce{OH-}$ get back together — it is just at a macro level nothing changes.
Jon CusterJon Custer
The magic word here is "dynamic equilibrium". Let's look into that.
Let's say we got a simple reaction like: $$\ce{A <--> B}$$
Now how fast does A react to B? For this case we assume first order in both directions, which means the reaction rate would be:
$$r_1= [\ce{A}] \times k_1$$
and of course for the reaction back it's
$$r_2=[\ce{B}]\times k_2$$
So in both cases the reaction rate is dependend on the concentration of the reactant, that's very important.
Now let's say we start off with only A, in this case $r_1$ will be large and $r_2$ will be 0, since there's no B there. Now A gets less and B is getting more, so $r_1$ is going down and $r_2$ is going up. At some point $r_1$ will be equal to $r_2$, but this means there will be as much of A produced from B, than A will react to B. And the same for B. So over all the reaction is still going in both directions, but it's in a dynamic equilibrium.
Here's an example which describes this pretty well:
Let's imagine you got a backyard and right on the boarder to the next one is an apple tree and a lot of old apples are laying on the ground. You don't like those in your garden, so you go around, pick them up and throw them into your neighbours garden. He doesn't like that and starts throwing them back. Now he is old and is moving much slower than you are, so what is happening?
At first there are a lot of apples on your side and you might need to walk some meters but then you can pick one up and throw them. So you will throw a lot of apples very fast. On the other hand there aren't many apples on the other side and the old man is slow, so he needs much longer to grab one. So he isn't throwing them back to you very rapidely. But here's the problem: the more apples you throw the less apples there are, and you need to move longer distances to grab one. So the rate in which you are throwing apples decreases. On the other side there are more and more apples, so the old man doesn't need to move far to grab one, so his rate in throwing apples increases.
After some time there will be a point where for every apple you throw to your neighbour he will throw one back. And if you count the total number of apples on each side it will pretty much stays the same, even if there are apples flying in both directions the whole time.
DSVADSVA
Correction to above statement which assumes that there must be equal concentrations of products and reactants in an equilibrium reaction.
In equilibrium there is no net change of products or reactants. The forward and backward rates are only equal to eachother if there are equal amounts of products and reactants which is not always the case.
A reaction can be 99% reactants, 1% products and be in equilibrium as long as those concentrations are not changing, and in that case the forward and reverse reaction rates are not equal.
Consider for example the disassociation of a weak acid: HA <> H+ + A- since it is a weak acid you already know the equilibrium will lie to the left with most of the acid not disassociated. The forward rate is smaller than the reverse rate. If the [HA], and [H+][A-] are not changing over time, the reaction has reached equilibrium. There is always HA disassociating, and H+ and A- combining, but the overall concentrations do not change.
Efram GoldbergEfram Goldberg
$\begingroup$ I don't see anywhere that it says that the concentrations are equal. $\endgroup$
– hBy2Py
So, is it true that the rates of the chemical reactions are zero at equilibrium?
Here you are making a mistake. Just take a simple look at the definition of equilibrium you will find that:
In a chemical reaction, chemical equilibrium is the state in which both reactants and products are present in concentrations which have no further tendency to change with time. Usually, this state results when the forward reaction proceeds at the same rate as the reverse reaction.
What the last line says is the state of equilibrium results when the forward reaction proceeds at the same rate as the reverse reaction i.e rate if reactions (forward and backward) become equal not each zero.
Vidyanshu MishraVidyanshu Mishra
Not the answer you're looking for? Browse other questions tagged equilibrium kinetics or ask your own question.
What are good examples of questions that make college students think about chemistry concepts?
How can we justify setting the affinity $\sum_i \mu_i \nu_i$ equal to zero at chemical equilibrium?
Chemical reactions as exponential functions
Why, when chemical equilibrium is defined as when two opposing reactions are equal in rate, can the products or reactants be favoured?
Is liquid water in equilibrium with water vapor at room temperature?
Intuition for why catalyst affects both forward and reverse reactions equally?
Same Activation Energy for Forward and Backward Reactions
What is the difference between chemical equilibrium and dynamic equilibrium?
Dynamic equilibrium in unsaturated solutions | CommonCrawl |
Visual support for complex repair operations in a global aerospace company
Joel-Edgar, S., Shi, L., Emanuel, L., Jones, S., Watts, L., Newnes, L., Payne, S., Hicks, B. & Culley, S., 2017, Visual Analytics for Management: Translational Science and Applications in Practice. Bendoly, E. & Clark, S. (eds.). Abingdon: Routledge, p. 100-112 8
Vitruvius and English Architecture up to 'Vitruvius Britannicus'
Hart, V., 2020, Companion to the Reception of Vitruvius. Ingrid, R. & Bell, S. (eds.). Brill, (Brill's Companions to Classical Reception).
Goode, J., 2012, The Oxford Companion to Comparative Politics. Krieger, J. (ed.). Oxford University Press, p. 292-296
Vladimir Sorokin and the Norm
Gillespie, D., 2000, Reconstructing the Canon: Russian Writing in the 1980s. McMillin, A. (ed.). Amsterdam: Harwood Academic Publishers, p. 299-309 11 p.
Vladimir Sorokin and the return of history
Gillespie, D., 6 Apr 2016, Facets of Russian Irrationalism between Art and Life: Mystery inside Enigma. Tabachnikova, O. (ed.). Brill, p. 519-530 12 p. (Studies in Slavic Literature and Poetics).
Vocational knowledge – Regions and recontextualisation capability
Hordern, J., 1 Jan 2017, Vocational Education and Training in Times of Economic Crisis. Springer Nature, p. 425-438 14 p. (Technical and Vocational Education and Training; vol. 24).
Voices and Silences of the Dead in Western Modernity
Walter, T., 2019, Articulate Necrographies. Panagiotopoulos, A. & Santo, D. E. (eds.). New York: Berghahn Books, p. 17-39
Vortex motion for the Landau-Lifshitz-Gilbert equation with applied magnetic field
Kurzke, M., Melcher, C. & Moser, R., 2013, Singular Phenomena and Scaling in Mathematical Models. Griebel, M. (ed.). Heidelberg: Springer, p. 113-131
Voting Together: Why the Household Matters
Cutts, D., 2014, Sex Lies and the Ballot Box. Biteback
Vulnerability of the brain to neuropsychiatric disorders resulting from abnormal thyroid hormone or vitamin D homeostasis
Bailey, S. J. & McCaffery, P. J., 2010, Brain Protection in Schizophrenia, Mood and Cognitive Disorders. Ritsner, M. (ed.). Netherlands: Springer, p. 105-133 29 p.
Vulnerable road user safety: Social interaction on the road?
Walker, I., 2005, Driver Behaviour and Training Vol II. Dorn, L. (ed.). Aldershot: Ashgate
Vygotsky and inclusion
Daniels, H., Sep 2008, Psychology for Inclusive Education: New Directions in Thoery and Practice. Hick, P., Kershner, R. & Farrell, P. (eds.). London: Routledge, p. 24-37 14 p.
Vygotsky and Psychology
Daniels, H., 15 Jul 2010, The Wiley-Blackwell Handbook of Childhood Cognitive Development. Goswami, U. (ed.). 2nd ed. Chichester: Wiley-Blackwell, p. 673-697 25 p.
Waiting: The shrouded backbone of ethnographic research
Agarwal, P., 23 Jul 2021, The Politics and Ethics of Representation in Qualitative Research: Addressing Moments of Discomfort. C. M. C. (ed.). UK: Routledge
Walking as Trans(disciplinary)mattering: A Speculative Musing on Acts of Feminist Indiscipline
Taylor, C., 1 Jul 2020, Transdisciplinary Feminist Research: Innovations in Theory, Method and Practice. Taylor, C. A., Hughes, C. & Ulmer, J. B. (eds.). Oxon: Routledge, p. 4-15 11 p. (Routledge Research in Gender and Society; vol. 87).
War and patriotism: Russian war films and the lessons for today
Gillespie, D., 15 Dec 2015, The Long Aftermath: Cultural Legacies of Europe at War, 1936-2016. Branganca, M. & Tame, P. (eds.). Berghahn Books, p. 344-357 14 p.
War on terror or a search for meaning?
Durodie, B., Sep 2013, Looking Back, Looking Forward: Perspectives on Terrorism and Responses to it: Strategic Multi-Layer Assessment White Paper. Arlington, U. S. A.: Strategic Multilayer Assessment Program, Office of the Secretary of Defense , p. 21-30
Warum haben marktwirtschaftliche Instrumente im internationalen Umweltschutz kaum eine Chance?
Finus, M. & Endres, A., 2002, Instrumente des Umweltschutzes imWirkungsverbund. Rengeling, H-W. & Hof, H. (eds.). Baden-Baden: Nomos, p. 309-325 17 p.
War without Death: America's Ingenious Plan to Defeat Enemies without Bloodshed
Troyer, J., 2012, Emotion, Identity and Death . Davies, D. & Park, C. W. (eds.). Ashgate
Wary co-existence: Baltic-Russian relations in the post-enlargement era
Lasas, A. & Galbreath, D. J., 2013, National Perspectives on Russia. David, M., Gower, J. & Haukkala, H. (eds.). Routledge, p. 149-169 (Routledge Advances in European Politics).
Washback effect in teaching English as an international language
Mckinley, J. & Thompson, G., Feb 2018, TESOL Encyclopedia of English Language Teaching. Liontas, J. I., DelliCarpini, M. & Abrar-ul-Hassan, S. (eds.). 1st ed. Wiley
Wasserfrauen in ökofeministischer Perspektive bei Ingeborg Bachmann und Karen Duve: Mahnende Stimmen über unsere Beziehung zur Natur
Goodbody, A. H., 2008, Wasser Kultur Ökologie. Beiträge zum Wandel im Umgang mit dem Wasser und zu seiner literarischen Imagination. Goodbody, A. & Wanning, B. (eds.). Göttingen: v&r unipress
Waste minimization in industry
Crittenden, B. D., 2003, Encyclopedia of Life Support Systems (EOLSS). p. 16 1 p.
Wastewater Analysis for Community-Wide Drugs Use Assessment
Ort C, Bijlsma L, Castiglioni S, Covaci A, de Voogt P, Emke E, Hernández F, Reid M, van Nuijs ALN, Thomas KV, Kasprzyk-Hordern B & Kasprzyk-Hordern, B., 13 Jun 2018, (E-pub ahead of print) Handbook of Experimental Pharmacology. Berlin, Heidelberg: Springer
Wastewater-Based Epidemiological Engineering - Modeling Illicit Drug Biomarker Fate in Sewer Systems as a Means to Back-Calculate Urban Chemical Consumption Rates: Modelling illicit drug biomarker fate in sewer systems as a means to back-calculate urban chemical consumption rates
Plosz, B. G. & Ramin, P., 1 Jan 2019, Wastewater-Based Epidemiology: Estimation of Community Consumption of Drugs and Diets. Subedi, B., Burgard, D. A. & Loganathan, B. G. (eds.). American Chemical Society, p. 99-115 17 p. (ACS Symposium Series; vol. 1319).
Water Policy and Regulations: A UK Perspective
Adeyeye, K., 30 Dec 2013, Water Efficiency in Buildings: Theory and Practice. Adeyeye, K. (ed.). Wiley-Blackwell, p. 5-23 19 p.
Water Quality issues in Developing Countries
Markandya, A., 2006, Economic Development and Environmental Sustainability. Lopez, R. & Toman, M. A. (eds.). Columbia University Press
Water transport through nanoporous materials: Porous silicon and single walled carbon nanotubes
Shearer, C., Velleman, L., Acosta, F., Ellis, A., Voelcker, N., Mattia, D. & Shapter, J., Feb 2010, ICONN 2010 - Proceedings of the 2010 International Conference on Nanoscience and Nanotechnology. Piscataway, NJ: IEEE, p. 196-199 4 p. 6045248
Wavelength Conversion by Terahertz Electro-Optic Modulation in Asymmetric Coupled Quantum Wells
Zhang, J. Z. & Allsopp, D., 2008, 2008 Conference on Lasers and Electro-Optics & Quantum Electronics and Laser Science Conference. New York: IEEE, p. 2780-2781 2 p.
Wavering between radical and moderate: The discourse of the Vlaams Belang in Flanders (Belgium).
Coffe, H. & Dewulf, J., 2014, Doublespeak: The rhetoric of the far-right since 1945. Ibidem Verlag, p. 147-165 19 p.
Wave run-up and response spectrum for wave scattering from a cylinder
Zang, J., Liu, S., Eatock Taylor, R. & Taylor, P. H., 2008, Proceedings of the 18th International Offshore and Polar Engineering Conference, 2008. International Society of Offshore and Polar Engineers, p. 69-74 6 p.
Ways of seeing data: towards a critical literacy for data visualizations as research objects and research devices
Gray, J., Bounegru, L., Milan, S. & Ciuccarelli, P., Jan 2017, Innovative Methods in Media and Communication Research. Kubitschko, S. & Kaun, A. (eds.). London, U. K.: Palgrave Macmillan, p. 227-252
Weak Convergence in$$L:\infty (X, \mathcal L, \lambda )$$
Toland, J., 3 Jan 2020, (E-pub ahead of print) The Dual of L∞(X,L,λ), Finitely Additive Measures and Weak Convergence. Cham, Switzerland: Springer Science and Business Media B.V., p. 67-75 9 p. (SpringerBriefs in Mathematics).
Wealth as an indicator or socio-economic development: Islamic views
Naveed, A., Zaman, A. & Rehman, A. U., 2019, Towards a Maqāṣid al-Sharīʿah Index of Socio-Economic Development: Theory and application. Syed Ali, S. (ed.). Palgrave Macmillan
Web 2.0 Practices for Peer Assessment Processes: Exploring the Synergies and Tensions
Jones, G., 2010, Proceedings of the 9th European Conference on E-Learning. Reading: Academic Conferences, Vol. 1. p. 274-283 10 p.
Web Accessibility: Practical Advice for the Library and Information Professional
Kelly, B., 2008, Web Accessibility: Practical Advice for the Library and Information Professional. Craven, J. (ed.).
Web Based Consultation for Cambridge University's Building Program
Ruffle, S. & Richens, P., 2001, Reinventing the Discourse - How Digital Tools Help Bridge and Transform Research, Education and Practice in Architecture: Proceedings of the Twenty First Annual Conference of the Association for Computer-Aided Design in Architecture. ACADIA, p. 366-371 6 p.
Web-based information system for cladding industry
Du, Q., Ledbetter, S. & Yang, R., 2012, Frontiers of Manufacturing and Design Science II. Trans Tech Publications, Vol. 121-126. p. 4265-4268 4 p. (Applied Mechanics and Materials).
Webs Of Influence: Corporate Impacts On Governance
Miller, D. & Harkins, C., 2014, Reframing addiction: policies, processes and pressures: Barcelona: The ALICE RAP project. Anderson, P., Bühringer, G. & Colom, J. (eds.).
Weighted Poincaré inequalities and applications in domain decomposition
Pechstein, C. & Scheichl, R., 2010, Domain Decomposition Methods in Science and Engineering XIX. Heidelberg: Springer, Vol. 78. p. 197-204 8 p.
Welcome Message from Co-organizer of the Commemorative Academic Conference for the 60th Anniversary of the 1955 Asian-African Conference in Bandung
Wong, P. N., 2015, The Proceeding of the Commemorative Academic Conference for the 60th Anniversary of the 1955 Asian-African Conference in Bandung, Indonesia, 4-6 June 2015.. Bandung, Indonesia: Universitas Pendidikan Indonesia, p. iii 1 p.
Welfare Reform and the Employment of Lone Parents
Harkness, S. & Gregg, P., 2003, The Labour Market Under New Labour: The State of Working Britain. Dickens, R., Gregg, P. & Wadsworth, J. (eds.). Palgrave Macmillan, p. 18 1 p.
Welfare regimes in development contexts: a global and regional analysis
Gough, I., 2004, Insecurity and Welfare Regimes in Asia, Africa and Latin America: Social Policy in Development Contexts. Gough, I., Wood, G., Barrientos, A., Bevan, P., Davis, P. & Room, G. (eds.). Cambridge University Press, p. 15 1 p.
Welfare Regimes in East Asia
Gough, I., 2003, New social policy agendas for Europe and Asia: Challenges, experience, and lessons. Marshall, K. & Butzbach, O. (eds.). Washington, D.C: World Bank, p. 499-512 14 p.
Welfare Regimes in East Asia and Europe Compared
Gough, I., 2003, New social policy agendas for Europe and Asia: Challenges, experience, and lessons. Marshall, K. & Butzbach, O. (eds.). Washington, D.C: World Bank, p. 27-42 16 p.
Welfare States in Europe and the Third Sector
Room, G. & 6, P., 1994, Delivering welfare: repositioning non-profit and co-operative action in western European welfare states. 6, P. & Vidal, I. (eds.). Barcelona: CEIS, p. 39-67 29 p.
Welfare to Work in the United Kingdom
Evans, M. & Millar, J., 2006, International Perspectives on Welfare to Work Policy. Hoefer, R. & Midgley, J. (eds.). Taylor and Francis, p. 61-76 16 p.
Wellbeing, livelihoods and resources in social practice
White, S. & Ellison, M., 2007, Wellbeing in Developing Countries: New Approaches and Research Strategies. Gough, I. & McGregor, J. A. (eds.). Cambridge: Cambridge University Press
Well-being and consumption: towards a theoretical approach based on human need satisfaction
Guillen-Royo, M., 2007, Handbook On The Economics Of Happiness. Bruni, L. (ed.). Cheltenham: Edward Elgar Publishing Ltd, p. 151-169 19 p.
Wellbeing and Institutions
Alvarez, J. L. & Copestake, J., 2008, Wellbeing and Development in Peru: Local and Universal Views Confronted. Copestake, J. (ed.). Basingstoke: Palgrave Macmillan, p. 153-184 32 p. | CommonCrawl |
Energy carried by solar wind
What are the velocity, mass, and charge distribution of the solar wind.
Near the earth within the magnetosphere in the ecliptic
Near the earth but outside the magnetosphere in the ecliptic
Outside the ecliptic at 1 AU
I would like to understand the energy content of the solar wind, and how it compares with the solar radiation(solar constant is $1360\, Wm^{-2}$)?
the-sun solar-wind photons
Milind RMilind R
$\begingroup$ It might be tough to find a precise answer as estimates to the mass of coronal mass ejections (CMEs) vary and the energy would be proportional to the amount of ejected material. I found this table with kinetic energy of some CMEs. spacemath.gsfc.nasa.gov/weekly/4Page17.pdf By comparison, the Sun emits about 3.8 x 10^26 joules per second, so the largest CME listed there is about 1/4 second of solar energy. Other than saying it's quite tiny, I wouldn't want to guess a percentage though. $\endgroup$ – userLTK Jul 18 '16 at 0:06
$\begingroup$ @userLTK Thanks for the nice link! It is my understanding that the solar wind is a relatively constant stream of particles, and that CMEs are "mere" fluctuations. I am interested in the constant stream. Even a rough annual average would do. $\endgroup$ – Milind R Jul 18 '16 at 13:59
The solar wind speed has a large range of variation, between ~250–820 km/s [e.g., Chen et al., 2014; Gopalswamy, 2006; Jian et al., 2011, 2014; Kasper et al., 2012; Maksimovic et al., 1998; Marsch, 1983; McComas et al., 2013; Schwenn, 1983; Stverak et al., 2008, 2009] near the ecliptic plane. These values are not including interplanetary shocks, which can have speeds exceeding 2000 km/s.
The speed is generally higher at higher latitudes out of the ecliptic plane, tending to be over 650 km/s [e.g., McComas et al., 2008; 2013].
Number Density
The number density also has a large range of values, from ~2–90 $cm^{-3}$ [e.g., Chen et al., 2014; Gopalswamy, 2006; Jian et al., 2011, 2014; Kasper et al., 2012; Maksimovic et al., 1998; Marsch, 1983; McComas et al., 2013; Schwenn, 1983; Stverak et al., 2008, 2009]. Again, these do not include interplanetary shocks or coronal mass ejections (CMEs).
Charge State
The alpha particle to proton number density ratio varies between ~1-5%, depending on solar cycle and solar wind speed [e.g., Kasper et al., 2012; Schwadron et al., 2014].
We have also measured the ratio of $O^{7+}/O^{6+}$ and $C^{6+}/C^{5+}$, finding ~1-30% and ~20-200%, respectively [e.g., Schwadron et al., 2014].
The properties of the terrestrial magnetosphere vary so widely, you would need to narrow down this question. For instance, the charge states are completely different (e.g., we observe $O^{1+}$ but not $O^{7+}$) but the number densities range from ~$10^{-2}-10^{3} \ cm^{-3}$.
See responses above to first part.
We don't have any measurements near 1 AU that are at high latitudes. Some spacecraft have done out of ecliptic polar orbits with high apogees, but the heliocentric latitudes were still within ~$10^{\circ}$ of the ecliptic plane. The notes above discuss our only real measurements out of the ecliptic by the Ulysses spacecraft.
I would like to understand the energy content of the solar wind, and how it compares with the solar radiation(solar constant is 1360 W $m^{-2}$?
The solar wind ram(dynamic) pressure is typically only ~1 nPa or $10^{-9} \ J \ m^{-3}$. This is highly variable and can change in milliseconds (e.g., interplanetary shocks), but that would still only be $10^{-6} \ W \ m^{-3}$. If we make a hand-wavy argument that this drops to zero in ~3 $R_{E}$ (i.e., upper bound on thickness of magnetosheath), then the power per unit area can be up to ~20 W $m^{-2}$. However, I would not read too much into that number as the actual power dissipated per unit area is different for numerous reasons.
C.H.K. Chen et al., Geophys. Res. Lett. 41, pp. 8081, 2014.
N. Gopalswamy, Space Sci. Rev. 124, pp. 145, 2006.
L.K. Jian et al., Solar Phys. 274, pp. 321, 2011.
L.K. Jian et al., Astrophys. J. 786, pp. 123, 2014.
J. C. Kasper et al., Astrophys. J. 745, pp. 162, 2012.
M. Maksimovic et al., Geophys. Res. Lett. 25, pp. 1265, 1998.
E. Marsch, Fifth International Solar Wind Conference 228, pp. 355, 1983.
D.J. McComas et al., Geophys. Res. Lett. 35, pp. L18103, 2008.
D.J. McComas et al., Astrophys. J. 779, pp. 2, 2013.
N.A. Schwadron et al., J. Geophys. Res. 119, pp. 1486-1492, 2014.
R. Schwenn, Fifth International Solar Wind Conference 228, pp. 489, 1983.
v. Stverak et al., J. Geophys. Res. 113, pp. 3103, 2008.
honeste_viverehoneste_vivere
I'm not sure such a detailed answer to your question is available.
This book cites this paper as a source for the mass loss due to the solar wind:
$\dot{M} \sim 2.5 \times 10^{-14}\,M_\odot/yr$.
This book cites this paper as the source for the following data at 1 AU:
Kinetic energy density of the solar wind:
$\frac{1}{2}N_\mathrm{p}m_\mathrm{p}v^2 = 1.44 \pm 0.09 \times 10^{-8}\,$erg cm$^{-3}$.
Thermal energy density for protons, electrons and helium atoms:
$\frac{3}{2} N k T \approx 4.8 \pm 3.2 \times 10^{-10}\,$erg cm$^{-3}$.
Wind velocity:
$v_\mathrm{w} = 468 \pm 116\,$km s$^{-1}$.
The numbers in 2. and 3. indicate that the thermal energy contributes for about 3%, and can be ignored given the uncertainties involved. Using 1. and 4. we get a kinetic 'wind luminosity' of
$~~~ L_\mathrm{w} \approx \frac{1}{2} \dot{M} v_\mathrm{w}^2 \approx 1.7 \times 10^{27}\,$erg s$^{-1} \approx 4.5 \times 10^{-7} L_\odot.$
AstroFloydAstroFloyd
Not the answer you're looking for? Browse other questions tagged the-sun solar-wind photons or ask your own question.
Rate of Mass Loss from the Solar Wind
Composition and Proton Flux from the Solar Wind
Can cosmic rays alone produce noticeable aurorae in bodies far away from the Sun?
Solar Wind and Asteroid orbital behavior
The sun's SOLAR WIND VS MAGNETAR
Why does the Solar Wind consist of charged particles?
Do Electric Charges in the Van Allen Radiation belt move in Opposite Directions?
Why does the solar wind switch to flowing straight by the time it reaches Earth?
How much mass does the Sun lose as light, neutrinos, and solar wind? | CommonCrawl |
Materials Theory
Fracture as a material sink
K. Y. Volokh1
Materials Theory volume 1, Article number: 3 (2017) Cite this article
Cracks are created by massive breakage of molecular or atomic bonds. The latter, in its turn, leads to the highly localized loss of material, which is the reason why even closed cracks are visible by a naked eye. Thus, fracture can be interpreted as the local material sink. Mass conservation is violated locally in the area of material failure. We consider a theoretical formulation of the coupled mass and momenta balance equations for a description of fracture. Our focus is on brittle fracture and we propose a finite strain hyperelastic thermodynamic framework for the coupled mass-flow-elastic boundary value problem. The attractiveness of the proposed framework as compared to the traditional continuum damage theories is that no internal parameters (like damage variables, phase fields, etc.) are used while the regularization of the failure localization is provided by the physically sound law of mass balance.
Within the framework of continuum mechanics there are surface and bulk material failure models.
Surface failure models are known by name of cohesive zone models (CZMs). In the latter case, continuum is enriched with discontinuities along surfaces—cohesive zones—with additional traction-displacement-separation constitutive laws. These laws are built qualitatively as follows: traction increases up to a maximum and then goes down to zero via increasing separation (Barenblatt 1959; Camacho and Ortiz 1996; de Borst 2001; Gong et al. 2012; Moes et al. 1999; Needleman 1987; Park et al. 2009; Rice and Wang 1989; Tvergaard and Hutchinson 1992; Xu and Needleman 1994). If the location of the separation surface is known in advance (e.g., fracture along weak interfaces) then the use of CZM is natural. Otherwise, the insertion of cracks in the bulk in the form of the separation surfaces remains an open problem, which includes definition of the criteria for crack nucleation, orientation, branching and arrest. Besides, the CZM approach presumes the simultaneous use of two different constitutive models: one for the cohesive zone and another for the bulk, for the same real material. Certainly, a correspondence between these two constitutive theories is desirable yet not promptly accessible. The issues concerning the CZM approach have been discussed by Needleman (2014), the pioneer of the field.
Bulk failure models are known by name of Continuum Damage Mechanics (CDM). In the latter case, material failure or damage is described by constitutive laws including softening in the form of the falling stress-strain curves (Benzerga et al. 2016; Dorfmann and Ogden 2004; Gurson 1977; Kachanov 1958; Klein and Gao 1998; Lemaitre and Desmorat 2005; Menzel and Steinmann 2001; Simo 1987; Volokh 2004; 2007; Voyiadjis and Kattan 1992). Remarkably, damage nucleation, propagation, branching and arrest naturally come out of the constitutive laws. Unfortunately, numerical simulations based on the the bulk failure laws show the so-called pathological mesh sensitivity, which means that the finer meshes lead to the narrower damage localization areas. In the limit case, the energy dissipation in failure tends to zero with the diminishing size of the computational mesh. This physically unacceptable mesh sensitivity is caused by the lack of a characteristic length in the traditional formulation of continuum mechanics. To surmount the latter pitfall gradient- or integral- type nonlocal continuum, formulations are used where a characteristic length is incorporated to limit the size of the spatial failure localization (Borden et al. 2012; de Borst and van der Giessen 1998; Francfort and Marigo 1998; Hofacker and Miehe 2012; Lasry and Belytschko 1988; Peerlings et al. 1996; Pijaudier-Cabot and Bazant 1987; Silling 2000). The regularization strategy rooted in the nonlocal continua formulations is attractive because it is lucid mathematically.
Unluckily, the generalized nonlocal continua theories are based (often tacitly) on the physical assumption of long-range particle interactions while the actual particle interactions are short-range—on nanometer or angstrom scale. Therefore, the physical basis for the nonlocal models appears disputable. A more physically based treatment of the pathological mesh sensitivity of the bulk failure simulations should likely include multi-physics coupling. Such an attempt to couple mass flow (sink) and finite elastic deformation within the framework of brittle fracture is considered in the present work.
Cracks are often thought of as material discontinuities of zero thickness. Such idealized point of view is probably applicable to nano-structures with perfect crystal organization. In the latter case fracture appears as a result of a separation—unzipping—of two adjacent atomic or molecular layers—Fig. 1 (left).
Schematic drawing of cracks with zero or finite thickness
In the case of the bulk material with a sophisticated heterogeneous organization, the crack appears as a result of the development of multiple micro-cracks triggered by the massive breakage of molecular or atomic bonds—Fig. 1 (right). The bond breakage is not confined to two adjacent molecular layers, and the process involves thousands layers within an area or volume with the representative characteristic size l.
It is interesting to note that material failure does not require the breakage of all molecular or atomic bonds within a representative volume. Only fraction of these bonds should be broken for the material disintegration. For example, in the case of natural rubber, roughly speaking, every third bond should be broken within a representative volume to create a crack (Volokh 2013a).
The local bond failure leads to the highly localized loss of material. The latter, in our opinion, is the reason why even closed cracks are visible by a naked eye. Thus, material flows out of the system during the fracture process. The system becomes open from the thermodynamic standpoint. However, cracks usually have very small thickness and the amount of the lost material is negligible as compared to the whole bulk. The latter observation allows ignoring the additional supply of momenta and energy in the formulation of the initial boundary value problem described in the next sections.
Following the approach of continuum mechanics, we replace the discrete molecular structure of materials by a continuously distributed set of material points which undergo mappings from the initial (reference), Ω0, to current, Ω, configuration: x↦y(x). The deformation in the vicinity of the material points is described by the deformation gradient F=Grady(x).
In what follows we use the Lagrangean description with respect to the initial or reference configuration and define the local mass balance in the form
$$ \frac{d\rho}{dt}=\text{Div}\mathbf{s}+\xi, $$
where ρ is the referential (Lagrangean) mass density; s is the referential mass flux; ξ is the referential mass source (sink); and Divs=∂ s i /∂ x i in Cartesian coordinates.
We further assume that failure and, consequently, mass flow are highly localized and the momenta and energy balance equations can be written in the standard form without adding momenta and energy due to the mass alterations.
In view of the assumption above, we write momenta and energy balance equations in the following forms accordingly
$$ \frac{d(\rho\mathbf{v})}{dt}=\text{Div}\mathbf{P}+\rho\mathbf{b},\quad\mathbf{P}\mathbf{F}^{\mathrm{T}}=\mathbf{F}\mathbf{P}^{\mathrm{T}}, $$
$$ \frac{d(\rho e)}{dt}=\mathbf{P}:\dot{\mathbf{F}}+\rho r-\text{Div}\mathbf{q}, $$
where \(\mathbf {v}=\dot {\mathbf {y}}\) is the velocity of a material point; b is the body force per unit mass; P is the first Piola-Kirchhoff stress and (DivP) i =∂ P ij /∂ x j ; e is the specific internal energy per unit mass; r is the specific heat source per unit mass; and q is the referential heat flux.
Entropy inequality reads
$$ \frac{d(\rho\eta)}{dt}\geq\frac{1}{T}(\rho r-\text{Div}\mathbf{q})+\frac{1}{T^{2}}\mathbf{q}\cdot\text{Grad}T, $$
where T is the absolute temperature.
Substitution of (ρ r−Divq) from (3) to (4) yields
$$ \rho\dot{\eta}+\dot{\rho}\eta\geq\frac{1}{T}(\rho\dot{e}+\dot{\rho}e-\mathbf{P}:\dot{\mathbf{F}})+\frac{1}{T^{2}}\mathbf{q}\cdot\text{Grad}T, $$
or, written in terms of the internal dissipation,
$$ D_{\text{int}}=\mathbf{P}:\dot{\mathbf{F}}-\rho(\dot{e}-T\dot{\eta})-\dot{\rho}(e-T\eta)-\frac{1}{T}\mathbf{q}\cdot\text{Grad}T\geq0. $$
We introduce the specific Helmholtz free energy per unit mass
$$ w=e-T\eta, $$
and, consequently, we have
$$ e=w+T\eta,\quad\dot{e}=\dot{w}+\dot{T}\eta+T\dot{\eta}. $$
Substituting (8) in (6) we get
$$ D_{\text{int}}=\mathbf{P}:\dot{\mathbf{F}}-\rho(\dot{w}+\dot{T}\eta)-\dot{\rho}w-\frac{1}{T}\mathbf{q}\cdot\text{Grad}T\geq0. $$
Then, we calculate the Helmholtz free energy increment
$$ \dot{w}=\frac{\partial w}{\partial\mathbf{F}}:\dot{\mathbf{F}}+\frac{\partial w}{\partial T}\dot{T}, $$
and substitute it in (9) as follows
$$ D_{\text{int}}=\left(\mathbf{P}-\rho\frac{\partial w}{\partial\mathbf{F}}\right):\dot{\mathbf{F}}-\rho\left(\frac{\partial w}{\partial T}+\eta\right)\dot{T}-\dot{\rho}w-\frac{1}{T}\mathbf{q}\cdot\text{Grad}T\geq0. $$
The Coleman-Noll procedure suggests the following choice of the constitutive laws
$$ \mathbf{P}=\rho\frac{\partial w}{\partial\mathbf{F}},\quad\eta=-\frac{\partial w}{\partial T}. $$
and, consequently, the dissipation inequality reduces to
$$ D_{\text{int}}=-\dot{\rho}w-\frac{1}{T}\mathbf{q}\cdot\text{Grad}T\geq0. $$
We further note that the process of the bond breakage is very fast as compared to the dynamic deformation process and the mass density changes in time as a step function. So, strictly speaking, the density rate should be presented by the Dirac delta in time. We will not consider the super fast transition to failure, which is of no interest on its own, and assume that the densities before and after failure are constants and, consequently,
$$\dot{\rho}=\text{Div}\mathbf{s}+\xi=0, $$
$$ \text{Div}\mathbf{s}+\xi=0. $$
Then, the dissipation inequality reduces to
$$ D_{\text{int}}=-\frac{1}{T}\mathbf{q}\cdot\text{Grad}T\geq0, $$
which is obeyed because the heat flows in the direction of the lower temperature.
It remains to settle the boundary and initial conditions.
Natural boundary conditions for zero mass flux represent the mass balance on the boundary ∂Ω0
$$ \mathbf{s}\cdot\mathbf{n}=0, $$
where n is the unit outward normal to the boundary in the reference configuration.
Natural boundary conditions for given traction \(\bar {\mathbf {t}}\) represent the linear momentum balance on the boundary ∂Ω0
$$ \mathbf{P}\mathbf{n}=\bar{\mathbf{t}}, $$
or, alternatively, the essential boundary conditions for placements can be prescribed on ∂Ω0
$$ \mathbf{y}=\bar{\mathbf{y}}. $$
Initial conditions in Ω0 complete the formulation of the coupled mass-flow-elastic initial boundary value problem
$$ \mathbf{y}(t=0)=\mathbf{y}_{0},\quad\mathbf{v}(t=0)=\mathbf{v}_{0}. $$
Remark The fact that we ignore the process of the transition to failure and use (14) instead of (1) might be difficult to comprehend at first glance. To ease the comprehension the reader might find it useful to consider the analogy between fracture and the buckling process in thin-walled structure. The pre-buckled and post-buckled states of a structure are usually analyzed by using a time-independent approach. The very process of the fast dynamic transition to the buckled state is of no interest and it is normally ignored in analysis by dropping the inertia terms from the momentum balance equation: \(\frac {d(\rho \mathbf {v})}{dt}=\text {Div}\mathbf {P}+\rho \mathbf {b}=\mathbf {0}\) or DivP+ρ b=0. By analogy with the buckling analysis we are only interested in the pre-cracked and post-cracked states while the transition (bond rupture) process can be ignored. The latter is the reason why the mass balance can be written in the simplified form: \(\frac {d\rho }{dt}=\text {Div}\mathbf {s}+\xi =0\) or Divs+ξ=0. It is also important to emphasize that the proposed simplification does not affect the natural boundary condition (16). This boundary condition is the expression of the mass balance on the boundary, which is obtained by using the standard Cauchy tetrahedron argument.
Constitutive equations
Constitutive law for the mass source is the very heart of the successful formulation of the theory and the reader is welcome to make a proposal.
We choose, for example, the following constitutive law, whose motivation is clarified below,
$$ \xi(\rho,\rho_{0},w,\phi)=\beta(\rho_{0}H(\zeta)\exp[-(w/\phi)^{m}]-\rho), $$
where ρ 0=ρ(t=0) is a constant initial density; β>0 is a material constant; ϕ is the specific energy limiter per unit mass, which is calibrated in macroscopic experiments; m is a dimensionless material parameter, which controls the sharpness of the transition to material failure on the stress-strain curve; and H(ζ) is a unit step function, i.e. H(ζ)=0 if ζ<0 and H(ζ)=1 otherwise.
The switch parameter ζ, which is necessary to prevent from material healing, will be explained below.
Constitutive law for the Lagrangean mass flux can be written by analogy with the Fourier law for heat conduction
$$ \mathbf{s}=\kappa H(\zeta)\exp[-(w/\phi)^{m}]J(\mathbf{F}^{\mathrm{T}}\mathbf{F})^{-1}\text{Grad}\rho, $$
where κ>0 is a mass conductivity for the isotropic case, which might depend on the deformation process.
The exponential factor is necessary in (21) to suppress diffusion in the failed material.
Substitution of (21) and (20) in (14) yields
$$ \text{Div} \left(l^{2}H(\zeta)\exp[-(w/\phi)^{m}]J(\mathbf{F}^{\mathrm{T}}\mathbf{F})^{-1}\text{Grad}\frac{\rho}{\rho_{0}}\right) +H(\zeta)\exp[-(w/\phi)^{m}]-\frac{\rho}{\rho_{0}}=0, $$
$$ {l=\sqrt{\kappa/\beta}} $$
is the characteristic length, which might depend on the deformation process.
It is remarkable that we, actually, do not need to know κ and β separately and the knowledge of the characteristic length is enough. For example, the estimate of the characteristic length for rubber is l=0.2 mm (Volokh 2011) and for concrete it is l=2.6 cm (Volokh 2013b).
To justify the choice of the constitutive Eq. (20) for the mass source/sink we note that in the case of the homogeneous deformation and mass flow the first term on the left hand side of (22) vanishes and we obtain
$$ \rho=\rho_{0}H(\zeta)\exp[-(w/\phi)^{m}]. $$
Substituting this mass density in the hyperelastic constitutive law we have
$$ \mathbf{P}=\rho_{0}H(\zeta)\exp[-(w/\phi)^{m}]\frac{\partial w}{\partial\mathbf{F}}=H(\zeta)\exp[-(W/\varPhi)^{m}]\frac{\partial W}{\partial\mathbf{F}}, $$
$$ W=\rho_{0}w,\quad\varPhi=\rho_{0}\phi $$
are the Helmholtz free energy and energy limiter per unit referential volume accordingly.
Constitutive law (25) presents the hyperelasticity with the energy limiters - see Volokh (2007, 2013a, 2016) for the general background. Integrating (25) with respect to the deformation gradient, we introduce the following form of the strain energy function
$$ \varPsi(\mathbf{F},\zeta)=\varPsi_{\mathrm{f}}-H(\zeta)\varPsi_{\mathrm{e}}(\mathbf{F}), $$
$$ {\varPsi}_{\mathrm{e}}(\mathbf{F})=\frac{\varPhi}{m}{\varGamma}\left(\frac{1}{m},\frac{W(\mathbf{F})^{m}}{{\varPhi}^{m}}\right), \quad{\varPsi}_{\mathrm{f}}={\varPsi}_{\mathrm{e}}(\mathbf{1}). $$
Here Ψf and Ψe(F) designate the constant bulk failure energy and the elastic energy respectively; \(\varGamma (s,x)=\int _{x}^{\infty }t^{s-1}e^{-t}dt\) is the upper incomplete gamma function.
The switch parameter ζ∈(−∞,0] is defined by the evolution equation
$$ \dot{\zeta}=-H\left(\epsilon-\frac{{\varPsi}_{\mathrm{e}}}{{\varPsi}_{\mathrm{f}}}\right),\quad\zeta(t=0)=0, $$
where 0<ε≪1 is a dimensionless precision constant.
The physical interpretation of (27) is straightforward: material is hyperelastic for the strain energy below the failure limit - Ψf. When the failure limit is reached, then the strain energy becomes constant for the rest of the deformation process precluding the material healing. Parameter ζ≤0 is not an internal variable. It is a switch: ζ=0 for the reversible process; and ζ<0 for the irreversibly failed material and dissipated strain energy.
For illustration, we present the following specialization of the intact strain energy for a filled Natural Rubber (NR) (Volokh 2010)
$$ {W=\rho_{0}w=\sum\limits_{k=1}^{3}c_{k}(I_{1}-3)^{k},\quad J}=\det\mathbf{F}=1, $$
where c 1=0.298 MPa, c 2=0.014 MPa, c 3=0.00016 MPa and the failure parameters are m=10, and Φ =82.0 MPa.
The Cauchy stress, defined by σ=J −1 P F T, versus stretch curve for the uniaxial tension is shown in Fig. 2 for both cases with and without the energy limiter. Material failure takes place at the critical limit point in correspondence with tests conducted by Hamdi et al. (2006).
Uniaxial tension of natural rubber: Cauchy stress [MPa] versus stretch. Dashed line specifies the intact model; solid line specifies the model with energy limiter
For the implications and experimental comparisons of the elasticity with energy limiters, the reader is advised to look through Volokh (2013a; 2016), for example. We completely skip this part for the sake of brevity.
Thus, the proposed constitutive law for the mass source is motivated by the limit case of the coupled formulations in which the deformation is homogeneous.
Crack in a bulk material is not an ideal unzipping of two adjacent atomic layers. It is rather a massive breakage of atomic bonds diffused in a volume of characteristic size. The massive bond breakage is accompanied by the localized loss of material. Thus, material sinks in the vicinity of the crack. Evidently, the law of mass conservation should be replaced by the law of mass balance, accounting for the mass flow in the vicinity of the crack. The coupled mass-flow-elasticity problem should be set for analysis of crack propagation.
In the present work, we formulated the coupled problem based on the thermodynamic reasoning. We assumed that the mass loss related to the crack development was small as compared to the mass of the whole body. In addition, we assumed that the process of the bond breakage was very fast and the mass density jumped from the intact to failed material abruptly allowing to ignore the transient process of the failure development. These physically reasonable assumptions helped us to formulate a simple coupled initial boundary value problem. In the absence of failure localization into cracks the theory is essentially the hyperelasticity with the energy limiters. However, when the failure starts localizing into cracks the diffusive material sink activates via the mass balance equation and it provides the regularization of numerical simulations. The latter regularization is due to the mass diffusion—first term on the left hand side of (22).
The attractiveness of the proposed framework as compared to the traditional continuum damage theories is that no internal parameters (like damage variables and phase fields) are used while the regularization of the failure localization is provided by the physically sound law of mass balance.
A numerical integration procedure for the formulated coupled initial boundary value problem is required and it will be considered elsewhere.
Finally, it should be noted that in the present work we focused on brittle fracture of elastomers, concrete, etc. In the case of ductile fracture of polycrystalline metals, for example, the dislocation-triggered plasticity should be taken into account. The marriage of the mass sink approach and theories of ductile failure would be interesting.
GI Barenblatt, The formation of equilibrium cracks during brittle fracture.General ideas and hypotheses. Axially-symmetric cracks. J. Appl. Math. Mech. 23:, 622–636 (1959).
AA Benzerga, JB Leblond, A Needleman, V Tvergaard, Ductile failure modeling. Int. J. Fract. 201:, 29–80 (2016).
MJ Borden, CV Verhoosel, MA Scott, TJR Hughes, CM Landis, A phase-field description of dynamic brittle fracture. Comp. Meth. Appl. Mech. Eng. 217-220:, 77–95 (2012).
GT Camacho, M Ortiz, Computational modeling of impact damage in brittle materials. Int. J. Solids. Struct. 33:, 2899–2938 (1996).
R de Borst, Some recent issues in computational failure mechanics. Int. J. Numer. Meth. Eng. 52:, 63–95 (2001).
R de Borst, E van der Giessen, Material Instabilities in Solids (John Wiley and Sons, Chichester, 1998).
A Dorfmann, RW Ogden, A constitutive model for the Mullins effect with permanent set in particle-reinforced rubber. Int. J. Solids. Struct. 41:, 1855–1878 (2004).
GA Francfort, JJ Marigo, Revisiting brittle fracture as an energy minimization problem. J. Mech. Phys. Solids. 46:, 1319–1342 (1998).
H Gao, P Klein, Numerical simulation of crack growth in an isotropic solid with randomized internal cohesive bonds. J. Mech. Phys. Solids. 46:, 187–218 (1998).
B Gong, M Paggi, A Carpinteri, A cohesive crack model coupled with damage for interface fatigue problems. Int. J. Fract. 137:, 91–104 (2012).
AL Gurson, Continuum theory of ductile rupture by void nucleation and growth: part I-yield criteria and flow rules for porous ductile media. J. Eng. Mat. Tech. 99:, 2–151 (1977).
A Hamdi, Nait Abdelaziz M, Ait Hocine N, Heuillet P, Benseddiq N, A fracture criterion of rubber-like materials under plane stress conditions. Polym. Test. 25:, 994–1005 (2006).
M Hofacker, C Miehe, Continuum phase field modeling of dynamic fracture: variational principles and staggered FE implementation. Int. J. Fract. 178:, 113–129 (2012).
LM Kachanov, Time of the rupture process under creep conditions. Izv. Akad. Nauk. SSSR, Otdelenie Teckhnicheskikh Nauk. 8:, 26–31 (1958).
P Klein, H Gao, Crack nucleation and growth as strain localization in a virtual-bond continuum. Eng. Fract. Mech. 61:, 21–48 (1998).
D Lasry, T Belytschko, Localization limiters in transient problems. Int. J. Solids. Struct. 24:, 581–597 (1988).
J Lemaitre, R Desmorat, Engineering Damage Mechanics: Ductile, Creep, Fatigue and Brittle Failures (Springer, Berlin, 2005).
A Menzel, P Steinmann, A theoretical and computational framework for anisotropic continuum damage mechanics at large strains. Int. J. Solids. Struct. 38:, 9505–9523 (2001).
N Moes, J Dolbow, T Belytschko, A finite element method for crack without remeshing. Int. J. Num. Meth. Eng. 46:, 131–150 (1999).
A Needleman, A continuum model for void nucleation by inclusion debonding. J. Appl. Mech. 54:, 525–531 (1987).
A Needleman, Some issues in cohesive surface modeling. Procedia IUTAM. 10:, 221–246 (2014).
K Park, GH Paulino, JR Roesler, A unified potential-based cohesive model of mixed-mode fracture. J. Mech. Phys. Solids. 57:, 891–908 (2009).
RHJ Peerlings, R de Borst, WAM Brekelmans, JHP de Vree, Gradient enhanced damage for quasi-brittle materials. Int. J. Num. Meth. Eng. 39:, 3391–3403 (1996).
G Pijaudier-Cabot, ZP Bazant, Nonlocal damage theory. J. Eng. Mech. 113:, 1512–1533 (1987).
JR Rice, JS Wang, Embrittlement of interfaces by solute segregation. Mater Sci. Eng. A. 107:, 23–40 (1989).
SA Silling, Reformulation of elasticity theory for discontinuities and long-range forces. J. Mech. Phys. Solids. 48:, 175–209 (2000).
JC Simo, On a fully three-dimensional finite strain viscoelastic damage model: Formulation and computational aspects. Comp. Meth. Appl. Mech. Eng. 60:, 153–173 (1987).
V Tvergaard, JW Hutchinson, The relation between crack growth resistance and fracture process parameters in elastic-plastic solids. J. Mech. Phys. Solids. 40:, 1377–1397 (1992).
GZ Voyiadjis, PI Kattan, A plasticity-damage theory for large deformation of solids—I. Theoretical formulation. Int. J. Eng. Sci.30:, 1089–1108 (1992).
KY Volokh, Nonlinear elasticity for modeling fracture of isotropic brittle solids. J. Appl. Mech. 71:, 141–143 (2004).
KY Volokh, Hyperelasticity with softening for modeling materials failure. J. Mech. Phys. Solids. 55:, 2237–2264 (2007).
KY Volokh, On modeling failure of rubber-like materials. Mech. Res. Com. 37:, 684–689 (2010).
KY Volokh, Characteristic length of damage localization in rubber. Int. J. Fract. 168:, 113–116 (2011).
KY Volokh, Review of the energy limiters approach to modeling failure of rubber. Rubber. Chem. Technol. 86:, 470–487 (2013a).
KY Volokh, Characteristic length of damage localization in concrete. Mech. Res. Commun. 51:, 29–31 (2013b).
KY Volokh, Mechanics of Soft Materials (Springer, Singapore, 2016).
XP Xu, A Needleman, Numerical simulations of fast crack growth in brittle solids. J. Mech. Phys. Solids. 42:, 1397–1434 (1994).
The support from the Israel Science Foundation (ISF-198/15) is gratefully acknowledged.
The author declares that he has no competing interests.
Faculty of Civil and Environmental Engineering, Technion - I.I.T., Haifa, Israel
K. Y. Volokh
Correspondence to K. Y. Volokh.
Volokh, K.Y. Fracture as a material sink. Mater Theory 1, 3 (2017). https://doi.org/10.1186/s41313-017-0002-4
Sink Material
Breaking Mass
Bulk Failure | CommonCrawl |
Improved results for Klein-Gordon-Maxwell systems with general nonlinearity
DCDS Home
Dichotomy spectrum and almost topological conjugacy on nonautonomus unbounded difference systems
May 2018, 38(5): 2305-2332. doi: 10.3934/dcds.2018095
KdV-like solitary waves in two-dimensional FPU-lattices
Fanzhi Chen 1, and Michael Herrmann 2,,
University of Münster, Institute for Analysis and Numerics, Einsteinstr. 62, 48149 Münster, Germany
Technische Universität Braunschweig, Institute for Computational Mathematics, Universitätsplatz 2, 38106 Braunschweig, Germany
* Corresponding author: Michael Herrmann
Received March 2017 Revised January 2018 Published March 2018
Figure(15)
We prove the existence of solitary waves in the KdV limit of two-dimensional FPU-type lattices using asymptotic analysis of nonlinear and singularly perturbed integral equations. In particular, we generalize the existing results by Friesecke and Matthies since we allow for arbitrary propagation directions and non-unidirectional wave profiles.
Keywords: Two-dimensional FPU-lattices, KdV limit of lattice waves, asymptotic analysis of singularly perturbed integral equations.
Mathematics Subject Classification: Primary: 37K60; Secondary: 37K40, 74H10.
Citation: Fanzhi Chen, Michael Herrmann. KdV-like solitary waves in two-dimensional FPU-lattices. Discrete & Continuous Dynamical Systems - A, 2018, 38 (5) : 2305-2332. doi: 10.3934/dcds.2018095
F. Chen, Wandernde Wellen in FPU-Gittern, Master Thesis, Institute for Mathematics, Saarland University, Germany, 2013. Google Scholar
F. Chen, Traveling waves in two-dimensional FPU lattices, PhD Thesis, Institute for Applied Mathematics, University of Münster, Germany, 2017. Google Scholar
E. Fermi, J. Pasta and S. Ulam, Studis on nonlinear problems, Los Alamos Scientific Laboraty Report, 1940. Google Scholar
A.-M. Filip and S. Venakides, Existence and modulation of traveling waves in particle chains, Comm. Pure Appl. Math., 52 (1999), 693-735. doi: 10.1002/(SICI)1097-0312(199906)52:6<693::AID-CPA2>3.0.CO;2-9. Google Scholar
G. Friesecke and K. Matthies, Geometric solitary waves in a 2d mass spring lattice, Discrete Contin. Dyn. Syst. Ser. B, 3 (2003), 105-114. Google Scholar
G. Friesecke and A. Mikikits-Leitner, Cnoidal waves on Fermi-Pasta-Ulam lattices, J. Dynam. Differential Equations, 27 (2015), 627-652. doi: 10.1007/s10884-013-9343-0. Google Scholar
G. Friesecke and R. L. Pego, Solitary waves on FPU lattices. Ⅰ. Qualitative properties, renormalization and continuum limit, Nonlinearity, 12 (1999), 1601-1627. doi: 10.1088/0951-7715/12/6/311. Google Scholar
G. Friesecke and R. L. Pego, Solitary waves on FPU lattices. Ⅱ. Linear implies nonlinear stability, Nonlinearity, 15 (2002), 1343-1359. doi: 10.1088/0951-7715/15/4/317. Google Scholar
G. Friesecke and R. L. Pego, Solitary waves on Fermi-Pasta-Ulam lattices. Ⅲ. Howland-type Floquet theory, Nonlinearity, 17 (2004), 207-227. doi: 10.1088/0951-7715/17/1/013. Google Scholar
G. Friesecke and R. L. Pego, Solitary waves on Fermi-Pasta-Ulam lattices. Ⅳ. Proof of stability at low energy, Nonlinearity, 17 (2004), 229-251. doi: 10.1088/0951-7715/17/1/014. Google Scholar
G. Friesecke and J. A. D. Wattis, Existence theorem for solitary waves on lattices, Comm. Math. Phys., 161 (1994), 391-418. doi: 10.1007/BF02099784. Google Scholar
J. Gaison, S. Moskow, J. D. Wright and Q. Zhang, Approximation of polyatomic FPU lattices by KdV equations, Multiscale Model. Simul., 12 (2014), 953-995. doi: 10.1137/130941638. Google Scholar
M. Herrmann, Unimodal wavetrains and solitons in convex Fermi-Pasta-Ulam chains, Proc. Roy. Soc. Edinburgh Sect. A, 140 (2010), 753-785. doi: 10.1017/S0308210509000146. Google Scholar
M. Herrmann, K. Matthies, H. Schwetlick and J. Zimmer, Subsonic phase transition waves in bistable lattice models with small spinodal region, SIAM J. Math. Anal., 45 (2013), 2625-2645. doi: 10.1137/120877878. Google Scholar
M. Herrmann and A. Mikikits-Leitner, KdV waves in atomic chains with nonlocal interactions, Discrete Contin. Dyn. Syst., 36 (2016), 2047-2067. Google Scholar
M. Herrmann and J. D. M. Rademacher, Heteroclinic travelling waves in convex FPU-type chains, SIAM J. Math. Anal., 42 (2010), 1483-1504. doi: 10.1137/080743147. Google Scholar
A. Hoffman and C. E. Wayne, Counter-propagating two-soliton solutions in the Fermi-Pasta-Ulam lattice, Nonlinearity, 21 (2008), 2911-2947. doi: 10.1088/0951-7715/21/12/011. Google Scholar
A. Hoffman and C. E. Wayne, Asymptotic two-soliton solutions in the Fermi-Pasta-Ulam model, J. Dynam. Differential Equations, 21 (2009), 343-351. doi: 10.1007/s10884-009-9134-9. Google Scholar
A. Hoffman and C. E. Wayne, A simple proof of the stability of solitary waves in the FermiPasta-Ulam model near the KdV limit, in Infinite dimensional dynamical systems, vol. 64 of Fields Inst. Commun., Springer, New York, 2013,185–192. Google Scholar
A. Hoffman and J. D. Wright, Nanopteron solutions of diatomic Fermi-Pasta-Ulam-Tsingou lattices with small mass-ratio, Phys. D, 358 (2017), 33-59. doi: 10.1016/j.physd.2017.07.004. Google Scholar
G. Iooss, Travelling waves in the Fermi-Pasta-Ulam lattice, Nonlinearity, 13 (2000), 849-866. doi: 10.1088/0951-7715/13/3/319. Google Scholar
A. Pankov, Traveling Waves and Periodic Oscillations in Fermi-Pasta-Ulam Lattices, Imperial College Press, London, 2005. Google Scholar
G. Schneider and C. E. Wayne, Counter-propagating waves on fluid surfaces and the continuum limit for the Fermi-Pasta-Ulam model, in International Conference on Differential Equations, vol. 1, World Scientific, 2000,390-404. Google Scholar
H. Schwetlick and J. Zimmer, Kinetic relations for a lattice model of phase transitions, Arch. Rational Mech. Anal., 206 (2012), 707-724. doi: 10.1007/s00205-012-0566-8. Google Scholar
D. Smets and M. Willem, Solitary waves with prescribed speed on infinite lattices, J. Funct. Anal., 149 (1997), 266-275. doi: 10.1006/jfan.1996.3121. Google Scholar
L. Truskinovsky and A. Vainchtein, Kinetics of martensitic phase transitions: Lattice model, SIAM J. Appl. Math., 66 (2005), 533-553. doi: 10.1137/040616942. Google Scholar
A. Vainchtein, Y. Starosvetsky, J. Wright and R. Perline, Solitary waves in diatomic chains, Phys. Rev. E, 93 (2016), 042210. doi: 10.1103/PhysRevE.93.042210. Google Scholar
N. J. Zabusky and M. D. Kruskal, Interaction of 'solitons' in a collisionless plasma and the recurrence of initial states, Phys. Rev. Lett., 15 (1965), 240-243. doi: 10.1103/PhysRevLett.15.240. Google Scholar
Figure 1. Cartoon of the square lattice. The vertical and the horizontal springs are described by the potential $V_1$, while all diagonal springs correspond to $V_2$. Center panel: Triangle lattice with identical springs and single potential function $V$. Right panel: Cartoon of the diamond lattice, which can be regarded as a square lattice without horizontal springs. The lattices have different symmetry groups and produce different coupling terms in the advance-delay-differential equation for lattice waves, see (3)
Figure 2. Left panel: Numerical approximations of $W_{\epsilon, 1}(\xi)$ (black) and $W_{\epsilon, 2}(\xi)$ (gray) for the square lattice with angle $\alpha = \frac{\pi}{8}$ and positive $\epsilon$. Right panel: The plot $W_{\epsilon, 2} $ versus $W_{\epsilon, 1}$ reveals that the two components of $W_{\epsilon}$ are not proportional, which means that $W_{\epsilon}$ is not unidirectional and our problem cannot be reduced to a one-dimensional one as in [5]
Figure 3. Left panel. Scaled velocity profile $W_{\epsilon}$ as function of $\xi$. Right panel. Cartoon of the atomistic velocities in the corresponding KdV wave, where $\zeta = \kappa_1i+\kappa_2j-c_{\epsilon}t$ denotes the phase with respect to the original variables. The unscaled profile is obtained from the scaled one by stretching the argument by ${1}/{\epsilon}$ and pressing the amplitude by $\epsilon^2$
Figure 4. Left panel: Graph of the potential energy for the limit ODE (11). Right panel: The unique homoclinic solution in $\mathsf{L}_{\rm{even}}^2(\mathbb{R})$, which corresponds to the region between the two zeros of $E[W]: = \frac{d_2}{3}W^3-\frac{d_1}{2}W^2$
Figure 5. Left panel: The ${\mathop{\rm sinc}\nolimits} $ function. Right panel: Lower bound $\tfrac{1}{6}\min\{|z|, 2\}^2$ (dashed) and upper bound $\tfrac{1}{3}\min\{|z|, 2\}^2$ (dashed) for $S_1 = 1-{\mathop{\rm sinc}\nolimits} ^2$ (solid)
Figure 6. The auxiliary functions $\mu_1$ and $\mu_2$ from (52) in solid and dashed lines, respectively, for the square lattice with $\alpha = 0$
Figure 7. KdV-limit profiles for selected values of $\alpha$ in the square lattice, where the first and the second component of $W_0$ are represented by the solid and the dashed lines, respectively
Figure 8. Parameter test for the square lattice. $T(z)$ (solid) and $g(z) = 0.3\cdot (\min\{z, 2\})^2$ (dashed) for several values of $\alpha$. Assumption 7 requires $T(z)\geq g(z)$ for all $z\in \mathbb{R}$
Figure 10. The plots from Figure 7 for the diamond lattice. In the graph of $\lambda$ we find jumps at multiples of $\pi$, which is consistent with the fact that the lattice is symmetric with respect to the horizontal direction. For $\alpha = 0$ no KdV wave exists due to this singularity
Figure 11. The plots from Figure 8 for the diamond lattice
Figure 13. The plots from Figure 7 for the triangle lattice
Mia Jukić, Hermen Jan Hupkes. Dynamics of curved travelling fronts for the discrete Allen-Cahn equation on a two-dimensional lattice. Discrete & Continuous Dynamical Systems - A, 2020 doi: 10.3934/dcds.2020402
Yuxi Zheng. Absorption of characteristics by sonic curve of the two-dimensional Euler equations. Discrete & Continuous Dynamical Systems - A, 2009, 23 (1&2) : 605-616. doi: 10.3934/dcds.2009.23.605
Li-Bin Liu, Ying Liang, Jian Zhang, Xiaobing Bao. A robust adaptive grid method for singularly perturbed Burger-Huxley equations. Electronic Research Archive, 2020, 28 (4) : 1439-1457. doi: 10.3934/era.2020076
Caterina Balzotti, Simone Göttlich. A two-dimensional multi-class traffic flow model. Networks & Heterogeneous Media, 2020 doi: 10.3934/nhm.2020034
Zhenzhen Wang, Tianshou Zhou. Asymptotic behaviors and stochastic traveling waves in stochastic Fisher-KPP equations. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020323
Ahmad Z. Fino, Wenhui Chen. A global existence result for two-dimensional semilinear strongly damped wave equation with mixed nonlinearity in an exterior domain. Communications on Pure & Applied Analysis, 2020, 19 (12) : 5387-5411. doi: 10.3934/cpaa.2020243
Abdollah Borhanifar, Maria Alessandra Ragusa, Sohrab Valizadeh. High-order numerical method for two-dimensional Riesz space fractional advection-dispersion equation. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020355
Mingjun Zhou, Jingxue Yin. Continuous subsonic-sonic flows in a two-dimensional semi-infinitely long nozzle. Electronic Research Archive, , () : -. doi: 10.3934/era.2020122
Lu Xu, Chunlai Mu, Qiao Xin. Global boundedness of solutions to the two-dimensional forager-exploiter model with logistic source. Discrete & Continuous Dynamical Systems - A, 2020 doi: 10.3934/dcds.2020396
Elena Nozdrinova, Olga Pochinka. Solution of the 33rd Palis-Pugh problem for gradient-like diffeomorphisms of a two-dimensional sphere. Discrete & Continuous Dynamical Systems - A, 2021, 41 (3) : 1101-1131. doi: 10.3934/dcds.2020311
Tong Peng. Designing prorated lifetime warranty strategy for high-value and durable products under two-dimensional warranty. Journal of Industrial & Management Optimization, 2021, 17 (2) : 953-970. doi: 10.3934/jimo.2020006
Qianqian Hou, Tai-Chia Lin, Zhi-An Wang. On a singularly perturbed semi-linear problem with Robin boundary conditions. Discrete & Continuous Dynamical Systems - B, 2021, 26 (1) : 401-414. doi: 10.3934/dcdsb.2020083
Yue-Jun Peng, Shu Wang. Asymptotic expansions in two-fluid compressible Euler-Maxwell equations with small parameters. Discrete & Continuous Dynamical Systems - A, 2009, 23 (1&2) : 415-433. doi: 10.3934/dcds.2009.23.415
Wei Feng, Michael Freeze, Xin Lu. On competition models under allee effect: Asymptotic behavior and traveling waves. Communications on Pure & Applied Analysis, 2020, 19 (12) : 5609-5626. doi: 10.3934/cpaa.2020256
Gui-Qiang Chen, Beixiang Fang. Stability of transonic shock-fronts in three-dimensional conical steady potential flow past a perturbed cone. Discrete & Continuous Dynamical Systems - A, 2009, 23 (1&2) : 85-114. doi: 10.3934/dcds.2009.23.85
Cung The Anh, Dang Thi Phuong Thanh, Nguyen Duong Toan. Uniform attractors of 3D Navier-Stokes-Voigt equations with memory and singularly oscillating external forces. Evolution Equations & Control Theory, 2021, 10 (1) : 1-23. doi: 10.3934/eect.2020039
Adrian Viorel, Cristian D. Alecsa, Titus O. Pinţa. Asymptotic analysis of a structure-preserving integrator for damped Hamiltonian systems. Discrete & Continuous Dynamical Systems - A, 2020 doi: 10.3934/dcds.2020407
Zhiting Ma. Navier-Stokes limit of globally hyperbolic moment equations. Kinetic & Related Models, 2021, 14 (1) : 175-197. doi: 10.3934/krm.2021001
Mengting Fang, Yuanshi Wang, Mingshu Chen, Donald L. DeAngelis. Asymptotic population abundance of a two-patch system with asymmetric diffusion. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : 3411-3425. doi: 10.3934/dcds.2020031
Fanzhi Chen Michael Herrmann | CommonCrawl |
MathOverflow is a question and answer site for professional mathematicians. It only takes a minute to sign up.
Is there a name for this property Weil saw for integer polynomials?
Andre Weil noticed as a student in 1925 that the polynomial ring $\mathbb{Z}[x]$ comes close to being a PID, and he describes this as `` the embryo of my future thesis.''
He observed that, given $f(x),g(x)\in\mathbb{Z}[x]$, the Euclidean algorithm computes a sequence of polynomials where each is a linear combination of the preceding ones and which either:
1) Ends in a term which divides the preceding term and thus is something like a GCD for $f$ and $g$; or
2) Ends in an integer $d$ which does not divide the preceding term but bounds the common divisors of values pointwise in this way: For any integer $n$, an integer common divisor of $f(n),g(n)$ must divide $d$.
I have not seen this property discussed anywhere. Does it have a name?
There is related discussion in the question The resultant and the ideal generated by two polynomials in $\mathbb{Z}[x]$
The number $d$ is called the reduced resultant of $f,g$.
nt.number-theory ac.commutative-algebra
122 silver badges33 bronze badges
Colin McLartyColin McLarty
I do not know of a name. But Weil's observation follows from properties of the resultant, and generalizes to other rings $A[x]$ ($A$ a GCD domain).
The resultant $R(f,g)$ of two polynomials is defined either as a certain determinant or as a certain product over pairs of roots of $f,g$, see this Wikipedia page.
From the two definitions, we see that:
The resultant vanishes iff $f,g$ have common root (this is basically the definition of the resultant).
The resultant has the property that there exist $p,q \in \mathbb{Z}[x]$ ($\deg p < \deg g$, $\deg q < \deg f$) such that $$(*) p(x)f(x)+q(x)g(x)=R(f,g)$$ identically. In particular, $R(f,g) \in (f(x),g(x))$ - this ideal contains a non-zero integer!
If the resultant is 0, there is a genuine GCD.
Otherwise, we end up with a non-zero integer $R(f,g)$. Plugging $x=n$ in $(*)$ we see that indeed the GCD of $f(n),g(n)$ divides $R(f,g)$.
The above observations generalize to $A[x]$ when $A$ has the GCD property.
As explained in the wikipedia page, the Euclidean algorithm differs from the resultant calculation by a simple factor.
It is interesting to note that the resultant $R(f,g)$ is not the necessarily the generator of $(f(x),g(x)) \cap \mathbb{Z}$, which we will denote $c(f,g)$. Myerson has shown that $R(f,g)$ divides an (effective) power of $c(f,g)$. When $R(f,g)$ is squarefree and $f,g$ are monics it may be shown that $c(f,g)=R(f,g)$. This may be seen from the results of this recent paper of Frenkel and Pelikán, who study the possibly values of $\gcd(f(n),g(n))$ as $n$ ranges over $\mathbb{Z}$.
Ofir GorodetskyOfir Gorodetsky
$\begingroup$ The relation between $R(f,g)$ and $(f(x),g(x))\cap\mathbb{Z}$ was also discussed in this MO question. $\endgroup$
– Jarek Kuben
$\begingroup$ Does this use of the resultant generalize to multiple variables? Say $f(x,y,z),g(x,y,z)$. You can calculate the resultant in terms of $x$ by the determinant (treating $y$ and $z$ as parameters). But the ring of polynomials in $y,z$ is not a GCD domain so it is not clear to me what conclusions you can draw about common divisors of values $f(a,b,c),g(a,b,c)$. $\endgroup$
– Colin McLarty
Thanks for contributing an answer to MathOverflow!
Not the answer you're looking for? Browse other questions tagged nt.number-theory ac.commutative-algebra or ask your own question.
The resultant and the ideal generated by two polynomials in $\mathbb{Z}[x]$
What happens to factors of the resultant upon specialization?
Irreducible Polynomials from a Reccurence
How important is Weil's decomposition theorem today?
What is known about ideal and divisibility lattices of GCD domains and their generalizations?
Integer valued polynomials and polynomials with integer coefficients | CommonCrawl |
Can small asteroids in the asteroid belt be detected on the fly and how much of a threat do they represent for a human manned space mission there?
Supposedly there are between 1.1 to 1.9 millions asteroids larger than 1 km in the asteroid belt. I assume there are many more , a lot of smaller ones, which cant be detected from here.
According to Wikipedia Asteroid Belt
The first spacecraft to traverse the asteroid belt was Pioneer 10, which entered the region on 16 July 1972. At the time there was some concern that the debris in the belt would pose a hazard to the spacecraft, but it has since been safely traversed by 12 spacecraft without incident
If SpaceX succesfully lands a human mission on Mars in 2024, and if they actually start building a self sustained permanent base there by 2029, space exploration could speed up really fast and the closer (on the cold side of the solar system) solar system objects are in the asteroid belt, with a dwarf planet and some large asteroids which were confused with planets in the past due to their large sizes. And some of them get closer to Mars in distance than the distance of Earth-Mars. So one of the biggest problems could be traveling among all those asteroids and debris without getting your spaceship destroyed. That's what I would like to know,
Can small asteroids in the asteroid belt be detected on the fly by a spaceship and how much of a threat do they represent for a human manned space mission there, is there any technology developed for this?
asteroid debris asteroid-belt
PabloPablo
$\begingroup$ Read wikipedia the chapter about Exploration. "Due to the low density of materials within the belt, the odds of a probe running into an asteroid are now estimated at less than 1 in 1 billion." $\endgroup$ – Uwe Feb 22 at 16:09
The orbital speed of Mars is about 24 km/sec or ~ 2 million km/day, and so relative velocity to an asteroid could be fairly slow for one in a similar orbit, or easily 1 million km/day or more for one in a weird orbit.
Optical (Visual, thermal IR)
You could set up several optical (visible or thermal IR) survey telescopes around Mars, and try to vigilantly scan the most celestial sphere (except for directions near the Sun) for unexpected, very dim objects moving unpredictably, but that's quite a challenge, and there's still the big blind spot anywhere in the general direction of the Sun.
A better way is to start looking now! For potential NMO's (near-Mars-objects) you would start looking now, or at least several years before you started to need the information. And you would look using a space telescope with an orbit closer to the Sun than Mars for two reasons:
With a different period than Mars, the object will approach close to you within a few orbits. The synodic period $\frac{1}{T_S} = \left(\frac{1}{T_1} - \frac{1}{T_2} \right) $ is fairly short when the two periods are significantly different.
With the target objects farther from the Sun than you are, their close approaches happen when they are in the night sky, so you don't have the Sun to worry about.
Number 2. is important because it means you can shield your telescope from the Sun's heating effects as well as from the light. You want to keep an asteroid-hunter's telescope's cryogenic optics and sensors very cold.
It turns out that it's easier to search for these trouble-makers in the thermal infrared than it is in visible light. They are fairly black (albedo's often from 0.05 to 0.1) and reflect direct sunlight poorly. But that means that they absorb most of the Sun's light and effectively radiate it in thermal infrared. But the only way to have a telescope sensitive to thermal IR is to have the entire telescope (mirrors and optics and sensors) very cold so that the thermal radiation of the distant asteroid isn't swamped by thermal IR from the telescope itself.
Most stars (but not all objects) are much dimmer in thermal IR than they are in the visible, so that could potentially help when sorting out asteroids versus stars in preprocessing, compared to visual surveys.
Source: NEOCam: Finding Asteroids Before They Find Us (Orbit)
For more about WISE, NeoWISE, NEOCam, and B612, see:
Why has the Earth-Sun libration point L1 been chosen over L2 for NEOCam to detect new NEOs?
How will the "fleet of small asteroid hunters" of the B612 Foundation & York Space Systems work?
What does this paper say is wrong (quantitatively and procedurally) with WISE & NeoWISE asteroid data?
What is synthetic tracking, and why would a 35 cm Earth imager be 10-30x better than Pan-STARRS or LSST for interstellar asteroid discovery?
Calculate NEO object position with nasa Near Earth Object data
Radar, Lidar
You might consider searching using Radar (similarly to the way space debris is monitored) but you'd dismiss it quickly. The Sun paints objects between here and Mars with 0.5 to 1.3 kilowatts per square meter, and at a million kilometers you won't be able to match that with radar or even Lidar, at least not for the foreseeable future.
Another way this is often expressed is that the strength of signals using optical detection varies as $1/r^2$ whereas for radar (where you have a round-trip) it varies as $1/r^4$.
edited Feb 23 at 3:58
$\begingroup$ "The big blind spot anywhere in the general direction of the Sun" is mostly covered by asteroid survey missions on Earth/in Earth orbit. $\endgroup$ – Hobbes Feb 23 at 8:42
$\begingroup$ @Hobbes I think I've already said something like that: "For potential NMO's (near-Mars-objects) you would start looking now... And you would look using a space telescope with an orbit closer to the Sun than Mars for two reasons..." except I don't guarantee that it's "already covered" because the way those surveys are scheduled it may cause them to miss some NMOs. They might catch many/most, but I'm not certain. $\endgroup$ – uhoh Feb 23 at 8:57
$\begingroup$ If you look at this asteroid discovery video, you see that the asteroid surveys are not aimed at NEOs but find lots of asteroids between Mars and Jupiter. $\endgroup$ – Hobbes Feb 23 at 8:58
$\begingroup$ @Hobbes I have! ;-) Anyway I adjusted my comment and replaced "NEO surveys" with "those surveys". I'm not so familliar with the specifics, so if you think current surveys have got Mars fairly well "covered", that would be an excellent answer to add. This is as far as I can go. $\endgroup$ – uhoh Feb 23 at 9:04
Not the answer you're looking for? Browse other questions tagged asteroid debris asteroid-belt or ask your own question.
What is the largest naturally occurring body that could be hollowed and safely lived in?
What happens if an astronaut is hit by a tiny micrometeorite?
Would a 'space elevator'/sling on a rotating asteroid work?
How does one spacecraft best visit multiple asteroids?
How much of the asteroid belt is discovered?
How fast could the rotation of asteroids and planetoids be sped up before they disintegrate?
How many asteroids could a retrograde probe in the Asteroid Belt flyby?
Is delay-doppler radar imaging of NEO asteroids possible only if it spins fast enough?
Are there clusters of small asteroids in the Kuiper or asteroid belt which could threaten the Earth?
How much asteroid can we divert from the Earth? | CommonCrawl |
Yann ROLLIN
Professor at Nantes University
Working seminar
Track me
Preprints Published Thesis In progress Other
Y. Rollin, Polyhedral approximation by Lagrangian and isotropic tori, https://arxiv.org/abs/2012.05777
Abstract: We prove that every smoothly immersed 2-torus of $\mathbb{R}^4$ can be approximated, in the C0-sense, by immersed polyhedral Lagrangian tori. In the case of a smoothly immersed (resp. embedded) Lagrangian torus of $\mathbb{R}^4$, the surface can be approximated in the C1-sense by immersed (resp. embedded) polyhedral Lagrangian tori. Similar statements are proved for isotropic 2-tori of $\mathbb{R}^{2n}$.
F. Jauberteau and Y. Rollin, Polyhedral Lagrangian surfaces and moment map flow, Coming up soon.
F. Jauberteau and Y. Rollin, Piecewise linear symplectomorphism and hyperKähler moment map flow, In progress.
Most of my preprints are available on www.arXiv.org
F. Jauberteau, Y. Rollin and S. Tapie, Discrete geometry and isotropic surfaces, Mém. Soc. Math. Fr. (N.S.) (161):vii+101, 2019, http://arxiv.org/abs/1802.08712
Abstract: We consider smooth isotropic immersions from the 2-dimensional torus into R2n, for n≥2. When n=2 the image of such map is an immersed Lagrangian torus of R4. We prove that such isotropic immersions can be approximated by arbitrarily C0-close piecewise linear isotropic maps. If n≥3 the piecewise linear isotropic maps can be chosen so that they are piecewise linear isotropic immersions as well. The proofs are obtained using analogies with an infinite dimensional moment map geometry due to Donaldson. As a byproduct of these considerations, we introduce a numerical flow in finite dimension, whose limit provide, from an experimental perspective, many examples of piecewise linear Lagrangian tori in R4. The DMMF program, which is freely available, is based on the Euler method and shows the evolution equation of discrete surfaces in real time, as a movie.
E. Legendre and Y. Rollin, Hamiltonian stationary Lagrangian fibrations, 2016, to appear in Journal of Symplectic Geometry, https://hal.archives-ouvertes.fr/hal-01332545
Abstract: Hamiltonian stationary Lagrangian submanifolds (HSLAG) are a natural generalization of special Lagrangian manifolds (SLAG). The latter only make sense on Calabi-Yau manifolds whereas the former are defined for any almost Kähler manifold. Special Lagrangians, and, more specificaly, fibrations by special Lagrangians play an important role in the context of the geometric mirror symmetry conjecture. However, these objects are rather scarce in nature. On the contrary, we show that HSLAG submanifolds, or fibrations, arise quite often. Many examples of HSLAG fibrations are provided by toric Kähler geometry. In this paper, we obtain a large class of examples by deforming the toric metrics into non toric almost Kähler metrics, together with HSLAG submanifolds.
V. Apostolov and Y. Rollin, ALE scalar-flat Kähler metrics on non-compact weighted projective spaces, Math. Ann. 367(3-4):1685-1726, 2017, http://arxiv.org/abs/1510.02226.
Y. Rollin, K-stability and parabolic stability, Advances in Mathematics 285:1741-1766, 2015, http://arxiv.org/abs/1303.2332
Abstract: Parabolic structures with rational weights encode certain iterated blowups of geometrically ruled surfaces. In this paper, we show that the three notions of parabolic polystability, K-polystability and existence of constant scalar curvature Kähler metrics on the iterated blowup are equivalent, for certain polarizations close to the boundary of the Kähler cone.
O. Biquard and Y. Rollin, Smoothing singular extremal Kähler surfaces and minimal Lagrangians, Advances in Mathematics 285:980 - 1024, 2015, http://arxiv.org/abs/1211.6957
Abstract: We consider smoothings of a complex surface with singularities of class T and no nontrivial holomorphic vector field. Under an hypothesis of non degeneracy of the smoothing at each singular point, we prove that if the singular surface admits an extremal metric, then the smoothings also admit extremal metrics in nearby Kähler classes. In addition, we construct small Lagrangian stationary spheres which represent Lagrangian vanishing cycles for surfaces close to the singular one.
Y. Rollin and C. Tipler, Deformations of extremal toric manifolds, J. Geom. Anal. 24(4):1929-1958, 2014
Abstract: Let X be a compact toric extremal Kähler manifold. Using the work of Székelyhidi, we provide a combinatorial criterion on the fan describing X to ensure the existence of complex deformations of X that carry extremal metrics. As an example, we find new CSC metrics on 4-points blow-ups of $CP^1\times CP^1$.
Y. Rollin, S.R. Simanca and C. Tipler, Deformation of extremal metrics, complex manifolds and the relative Futaki invariant, Math. Z. 273(1-2):547-568, 2013, http://arxiv.org/abs/1107.0456
Abstract: Let $(\mathcal {X},\Omega)$ be a closed polarized complex manifold, $g$ be an extremal metric on $\mathcal X$ that represents the Kähler class $\Omega$, and $G$ be a compact connected subgroup of the isometry group $Isom(\mathcal{X},g)$. Assume that the Futaki invariant relative to $G$ is nondegenerate at $g$. Consider a smooth family $(\mathcal{M}\to B)$ of polarized complex deformations of $(\mathcal{X},\Omega)\simeq (\mathcal{M}_0,\Theta_0)$ provided with a holomorphic action of $G$. Then for every $t\in B$ sufficiently small, there exists an $h^{1,1}(\mathcal{X})$-dimensional family of extremal Kähler metrics on $\mathcal{M}_t$ whose Kähler classes are arbitrarily close to $\Theta_t$. We apply this deformation theory to analyze the Mukai-Umemura 3-fold and its complex deformations.
Y. Rollin and M. Singer, Construction of Kähler surfaces with constant scalar curvature, J. Eur. Math. Soc. (JEMS) 11(5):979-997, 2009, math.DG/0412405
Abstract: We present new constructions of Kaehler metrics with constant scalar curvature on complex surfaces, in particular on certain del Pezzo surfaces. Some higher dimensional examples are provided as well.
O. Biquard and Y. Rollin, Wormholes in ACH Einstein manifolds, Trans. Amer. Math. Soc. 361(4):2021-2046, 2009, math.DG/0609558
Abstract: We give a new construction of Einstein and Kaehler-Einstein manifolds which are asymptotically complex hyperbolic, inspired by the work of Mazzeo-Pacard in the real hyperbolic case. The idea is to develop a gluing theorem for 1-handle surgery at infinity, which generalizes the Klein construction for the complex hyperbolic metric.
Y. Rollin and M. Singer, Constant scalar curvature Kähler surfaces and parabolic polystability, J. Geom. Anal. 19(1):107-136, 2009, math.DG/0703212
Abstract: A complex ruled surface admits an iterated blow-up encoded by a parabolic structure with rational weights. Under a condition of parabolic stability, one can construct a Kaehler metric of constant scalar curvature on the blow-up according to math.DG/0412405. We present a generalization of this construction to the case of parabolically polystable ruled surfaces. Thus we can produce numerous examples of Kaehler surfaces of constant scalar curvature with circle or toric symmetry.
T. Mrowka and Y. Rollin, Legendrian knots and monopoles, Algebr. Geom. Topol. 6:1-69 (electronic), 2006, math.DG/0410559, (MR)
Abstract: We prove a generalization of Bennequin's inequality for Legendrian knots in a 3-dimensional contact manifold, under the assumption that it is the boundary of a 4-dimensional manifold M and the version of Seiberg-Witten invariants introduced by Kronheimer and Mrowka is non-vanishing. The proof requires an excision result for Seiberg-Witten moduli spaces; then, the Bennequin inequality becomes a special case of the adjunction inequality for surfaces lying inside M.
Y. Rollin and M. Singer, Non-minimal scalar-flat Kähler surfaces and parabolic stability, Invent. Math. 162(2):235-270, 2005, math.DG/0404423, (MR)
Abstract: A new construction is presented of scalar-flat Kaehler metrics on non-minimal ruled surfaces. The method is based on the resolution of singularities of orbifold ruled surfaces which are closely related to rank-2 parabolically stable holomorphic bundles. This rather general construction is shown also to give new examples of low genus: in particular, it is shown that $\mathbb{CP}^2$ blown up at 10 suitably chosen points, admits a scalar-flat Kaehler metric; this answers a question raised by Claude LeBrun in 1986 in connection with the classification of compact self-dual 4-manifolds.
Y. Rollin, Rigidité d'Einstein du plan hyperbolique complexe, J. Reine Angew. Math. 567:175-213, 2004, math.DG/0112099, (MR)
Abstract: We prove that every Einstein metric on the unit ball $B^4$ of $\mathbb{C}^2$, asymptotic to the Bergman metric, is equal to it up to a diffeomorphism. We need a solution of Seiberg--Witten equations in this infinite volume setting. Therefore, and more generally, if $M^4$ is a manifold with a CR-boundary at infinity, an adapted spinc-structure which has a non zero Kronheimer--Mrowka invariant and an asymptotically complex hyperbolic Einstein metric, we produce a solution of Seiberg--Witten equations with an strong exponential decay property.
Y. Rollin, Surfaces kählériennes de volume fini et équations de Seiberg-Witten, Bull. Soc. Math. France 130(3):409-456, 2002, math.DG/0106077, (MR)
Abstract: Let M=P(E) be a ruled surface. We introduce metrics of finite volume on M whose singularities are parametrized by a parabolic structure over E. Then, we generalise results of Burns--de Bartolomeis and LeBrun, by showing that the existence of a singular Kahler metric of finite volume and constant non positive scalar curvature on M is equivalent to the parabolic polystability of E; moreover these metrics all come from finite volume quotients of $\mathbb{H}^2 \times \mathbb{CP}^1$. In order to prove the theorem, we must produce a solution of Seiberg-Witten equations for a singular metric g. We use orbifold compactifications $\overline M$ on which we approximate g by a sequence of smooth metrics; the desired solution for g is obtained as the limit of a sequence of Seiberg-Witten solutions for these smooth metrics.
Y. Rollin, Einstein rigidity of the complex hyperbolic plane, C. R. Math. Acad. Sci. Paris 334(8):671-676, 2002, http://yann.rollin.free.fr/crastein.pdf, (MR).
Y. Rollin, Métriques kählériennes de volume fini, uniformisation des surfaces complexes réglées et équations de Seiberg-Witten, PhD thesis, École Polytechnique, jan 2001,
http://yann.rollin.free.fr/these_rollin.pdf.
Y. Rollin, Topologie de contact et géométries à courbure spéciale via la théorie de jauge, mémoire d'habilitation , 2007, http://yann.rollin.free.fr/hab.pdf.
Y. Rollin, A remark on the Herzlich volume of asymptotically complex hyperbolic Einstein manifolds, 2008, http://arxiv.org/abs/0802.2474.
Last update: Mon, 07 Oct 2019 11:48:21 +0200 | CommonCrawl |
Mapping regional forest management units: a road-based framework in Southeastern Coastal Plain and Piedmont
Di Yang ORCID: orcid.org/0000-0002-4010-61631,2 &
Chiung-Shiuan Fu2
Management practices are one of the most important factors affecting forest structure and function. Landowners in southern United States manage forests using appropriately sized areas, to meet management objectives that include economic return, sustainability, and esthetic enjoyment. Road networks spatially designate the socio-environmental elements for the forests, which represented and aggregated as forest management units. Road networks are widely used for managing forests by setting logging roads and firebreaks. We propose that common types of forest management are practiced in road-delineated units that can be determined by remote sensing satellite imagery coupled with crowd-sourced road network datasets. Satellite sensors do not always capture road-caused canopy openings, so it is difficult to delineate ecologically relevant units based only on satellite data. By integrating citizen-based road networks with the National Land Cover Database, we mapped road-delineated management units across the regional landscape and analyzed the size frequency distribution of management units. We found the road-delineated units smaller than 0.5 ha comprised 64% of the number of units, but only 0.98% of the total forest area. We also applied a statistical similarity test (Warren's Index) to access the equivalency of road-delineated units with forest disturbances by simulating a serious of neutral landscapes. The outputs showed that the whole southeastern U.S. has the probability of road-delineated unit of 0.44 and production forests overlapped significantly with disturbance areas with an average probability of 0.50.
The Southeastern United States (SEUS) forest comprises 32% of the total U.S. forestland (Oswalt et al. 2014), which combined with the productivity of the forest, places this region at the forefront of American forestry production (Fox et al. 2007). This heterogeneous landscape is composed of heavily managed forests, intensive agriculture, and multiple metropolitan areas. SEUS, although one of the most densely forested regions in the United States (Hanson 2010), is also heavily dissected by road networks (Coffin 2007). The diverse forest management patterns, reflecting long-term land-use legacies (Josephson 1989; Haynes 2002), contribute to the complex land mosaic of SEUS.
It is challenging to quantify the ecological and anthropogenic mechanisms that control the spatial structure of the forest landscape and its surrounding areas in these complex forest mosaics. Forest management is the predominant factor in forest ecology and structural patterns (Becknell et al. 2015), but little is known about how management practices are related to surrounding land-use at the regional scale. One thing that is known is that, in the SEUS, significant expansions of urban areas tend to convert forested land to urban uses and that croplands tend to transition to pine plantations (Davis et al. 2006; Haynes 2002; Wear and Greis 2002, 2012, 2013; Stanturf et al. 2003; Becknell et al. 2015). To understand the ecological and anthropogenic influences of differently managed forests on ecosystem processes, all landscapes should be understood at multiple scales, from the local scale (forest management unit) to regional scales, with the regional scale referring to broad forest mosaics that are formed from management patches (O'Neill et al. 1996). Understanding forest management spatial patterns require defining a map-based management unit, which is the subdivision regarding effects of land use on forest ecosystems. This research seeks to further the understanding of how spatial patterns of forest management affect land use, by posing the following questions: Do roads delineate forest management units? What is the spatial distribution of road-defined forest management units in the SEUS? Moreover, how are the distributions of management units affected by different forest management approaches? And how does forest management affect nearby land use, and how does nearby land use affect forest management?
Forest management is the main driving force of forest structure in SEUS (Becknell et al. 2015) and alters forest properties and processes, which affects forest ecosystem services (Kurz et al. 2008; Stephens et al. 2012; Oswalt et al. 2014). Forest management can be classified into four categories: production forestry, ecological forestry, passive management and preservation management (Becknell et al. 2015). Production management harvests forest products and sustains the bio-productivity of the system with the sole objective of producing wood, pulp, and other forest products. In SEUS, production management based on silviculture systems, which homogenize parts of the landscape, has predominated the SEUS (Siry 2002). However, forest conservation systems have evolved considerably over recent decades (Mitchener and Parker 2005; Franklin et al. 2007). Ecological management uses legacies of disturbance, including intermediate stand disturbance processes such as variable density thinning and fire, and variable and appropriate recovery times to manage forests that still produce economically valuable wood products while preserving many of the values of natural forests (Franklin et al. 2007). Passive management is defined as a practice with little or no active management. We argue that all forests are managed to some degree and that doing nothing is a form of management. Preservation forestry aims to minimize the ecological footprint of society with the objectives of protecting wildlife and maintaining ecosystem services. Furthermore, certain forest management practices can suppress wildfires (Waldrop et al. 1992), prevent insect /pathogen outbreaks (Netherer and Nopp-Mayr 2005; Faccoli and Bernardinelli 2014), change water yield and hydrologic regulation (Douglass 1983), produce wood products, provide places for hunting and recreation, and conserve habitat biodiversity.
Forestland structures, functions, and ecological processes are scale-dependent (Battaglia and Sands 1998; Niemelä 1999; Drever et al. 2006). Regionally, for the purpose of sustainable forest management, we need to develop criteria and indicators of management units. Forest management units in this study are zones or patches, which can be identified, mapped and managed according to the land-use objectives. Road networks link human activities (e.g. management practices) and surrounding physical environments (land cover). For production, preservation, and ecological forestry, in many cases, forest management units are harvest or burn units. Roads are built to create access for the managers and harvesters. For example, the preserved forest in the Ordway Swisher Biological Station (OSBS) in northcentral Florida is subdivided by road networks into management (burn) units, which is the smallest unit of land that is actively managed (Ordway-Swisher Biological Station 2015). In the Joseph W. Jones Ecological Research Center in Ichauway, Georgia, and OSBS, the internal road network provides access to the research site and serves as prescribed fire breaks (Ordway-Swisher Biological Station 2015). For passive management practices, there are currently no clear criteria defining the management units. However, in national forest systems (mostly with ecologically managed forests and multi-use production forests), existing roads and trails are used for controlling prescribed fire and wildfires (e.g. Apalachicola National Forest, Osceola National Forest, United States Department of Agriculture Forest Service 1999). For an example of a privately-owned forest, the Red Hills region in Georgia uses roads to delineate burn units ranging approximately from five to 30 ha (Robertson and Ostertag 2007).
Road networks facilitate movements of humans and connect natural resources with societies and economies. As conduits for human access to nature, the physical footprint of approximately 6.6 million km of roads in the United States FHWA (Federal Highway Administration 2013) has significant primary and secondary impacts on ecosystems and the distribution of species (Bennett 1991). Fifteen to 20 % of American land is subject to the ecological effects of road networks (Forman and Alexander 1998; Forman and Deblinger 2000). The most noticeable effects of road networks on forest structures are landscape structure changes, including reduced mean patch size, increased patch shape complexity, increased edge densities, and reduced unit connectivity. In one case, McGarigal et al. (2001) investigated the landscape structure changes of the San Juan Mountains from 1950 to 1993 and found that roads had a more significant ecological impact (e.g. core forest areas and patch sizes decrease) on landscape structure than logging activities.
In addition to management practices (e.g., harvesting, fertilizing), the construction of road networks divides the forested land into smaller patches, thereby increasing the potential intensity of the effects of management practices. Road networks in managed forests provide easy access for managers and harvesters to extract and regenerate resources (Demir and Hasdemir 2005). Roads may influence fire regimes by increasing fire ignition as a result of human activities (Franklin and Forman 1987). Moreover, road networks alter the spatial configuration of management patches by functioning as firebreaks, which form new patterns in landscapes (Franklin and Forman 1987; Nelson and Finn 1991; Eker and Coban 2010). By quantifying the spatial patterns of management units created by roads we may gain insight into the ecological effects of road networks on spatial forest structures within differently managed areas.
The impacts and ecological effects of roads on the landscape might be misestimated because methods measuring the road-effect zones and landscape scale effects are not yet well developed (Ries et al. 2004; Hou et al. 2013). Roads and streams may be challenging to identify, or invisible because they do not open the canopy so that many roads are not detectable on satellite imagery or even aerial photography. The reliability of large-scale road data is also challenged due to issues of accuracy, coverage and immediacy, all of which can underestimate the extent and ecological impacts of roads on forest structures (Riitters et al. 2004). We propose that common types of forest management are practiced in road-delineated units that are detectable by remote sensing satellite images coupled with crowd-sourced road network datasets. We also describe and study the patterns of forest management units in response to land ownership and different management practices.
We focus on the Southeastern U.S. Coastal Plain and Piedmont (SEUS) region (Fig. 1). The SEUS is located between Piedmont to the north and the Atlantic Ocean to the east and covers a significant portion of the southeastern United States. The SEUS is the home to the most densely production-forested region nationwide, which makes up 32% of total U. S forest cover (Oswalt et al. 2014). Based on EPA eco-region descriptions, land cover in the SEUS is a mosaic of cropland, pasture, woodland and forests (Bailey 2004). Major silvicultural forests in SEUS are pine forests, such as slash pine (Pinus elliottii Engelm.) and loblolly pine (Pinus taeda L.) forests. European settlement and the extensive harvesting in the early 1900s removed 98% of the original longleaf pine (Pinus palustris Mill.) forests, which was one of the most dominant ecosystems in SEUS (Outcalt 2000) and converted them to plantations of native slash pine. The SEUS forest system is a fire-dominated system with native trees adapted to short-period stand-clearing events. The primary forest management types are production and passive management due to the dominant ownership of private owners, logging companies, and investment institutions (Real Estate Investment Trusts (REITs) and Timber Investment Management Organizations (TIMOs)) (Zhang et al. 2012). This diversity of land cover types is spatially heterogenous, and patch sizes of the numerous vegetation classes vary across a wide range of scales (Fig. 1).
Study area of Southeastern United States Coastal Plain and Piedmont, with level III ecoregions (Bailey 2004)
Forest extent
In this study, forest extent is determined by a composite of the 2006 and 2011 USGS National Land Cover Database (NLCD), which was constructed from Landsat imagery at 30-m spatial resolution (Jin et al. 2013). We aggregated 21 NLCD classes into two classes: forest (deciduous, evergreen, mixed forest and woody wetland) and non-forest (including water). Only the pixels that contain 50% or more forest area in NLCD will be considered as forested pixels. We also extracted the SEUS urban areas by using the most recent 2015 US Census Bureau's TIGER cartographic boundary urban areas (TIGER 2015) dataset to remove urban areas from the analysis.
Forest management type
An integrated random forest classifier was built from the analysis of long-term phenological features derived from BFAST outputs and spectral entropy calculated from the Terra-MODIS enhanced vegetation index (EVI: MOD13Q data product), along with ancillary data such as land ownership, and disturbance history to classify different forest management types (Breiman 2001; Verbesselt et al. 2010; Zaccarelli et al. 2013). The forest management type map has a spatial grain of 250 m and is a composite of phenological patterns and changes in the patterns from February 2001 through December 2016 (Figure S1). The SEUS forest management type map has an overall accuracy of 89% for a 10-fold cross-validation. The forest management raster is available for each region as georeferenced GeoTIFF rasters with a 250-m resolution from PANGAEA (Marsik et al. 2017).
We selected OpenStreetMap as the primary road data and the USDA National Forest service trail and road maps (https://data.fs.usda.gov/geodata/edw/datasets.php?dsetCategory=transportation, accessed Dec 2016) as secondary in this study. OpenStreetMap is a collaborative, crowdsourced project that creates free, open, and accessible maps of road networks. OpenStreetMap is one of the most popular and well-supported Volunteered Geographic Information (VGI) datasets (Mooney et al. 2010). Community volunteers collect geographic information and submit it to the global OpenStreetMap database (Ciepluch et al. 2009). OpenStreetMap monitors road networks at near real-time and includes additional classes of roads such as private access roads and driveways in rural areas, small service roads or alleys in urban areas, and forest access roads. All those road features are critical for this research and no distinctions were drawn between the types of road, traffic volume, or other factors. OpenStreetMap shows up-to-date road networks information, which the other official road databases do not offer. The accuracy of the OpenStreetMap in our study region has been studied. For the state of Florida, Zielstra and Hochmair (2011) compared road networks dataset from different sources and concluded that OpenStreetMap was significantly better the other road databases. All OpenStreetMap data were downloaded from the website of Geofabrik (http://download.Geofabrik.de, Accessed Dec 2016). USDA National Forest Services Trails and Road Map provide the coverage of detailed transportation map in National Forests (Coghlan and Sowa 1997).
Road density in SEUS was measured as the total length of all roads (in kilometers) in a district divided by the total land coverage area of the district (km2) based on our developed road networks map (Figure S2).
Landscape fire and resource management planning tools (LANDFIRE)
Disturbance data from the Landscape Fire and Resource Management Planning Tools (LANDFIRE) disturbance database were used to evaluate the management unit map. LANDFIRE is a combination of Landsat images, fire program data, and cooperator-provided field data and other ancillary databases (e.g., PAD-US), and is a shared program between the wildland fire management programs of the U.S. Department of Agriculture Forest Services and U.S. Department of the Interior (Rollins 2009). LANDFIRE also describes land cover/use both spatially and temporally from 1999 to 2014 and provides the existing vegetation composition map based on dominant species or group of dominant species. Spatially, LANDFIRE is a Landsat-based (30 m) database, which matches the 30-m spatial resolution of this study (https://www.landfire.gov/disturbance.php). We chose the LANDFIRE project data because the spatial scale is small enough to detect subtle changes brought about by land management practices, and large enough to reflect the characteristic variability of essential ecological processes (such as wildfire) in the appropriate spatial context. The disturbance data from LANDFIRE will be used to evaluate management units delineated by the road network in the context of intensely managed forests in SEUS (Figure S3).
Forest ownership
Geospatial land-ownership data from federal and nongovernmental agencies were integrated for land ownership mapping (Figure S4). Forest ownership in SEUS is broadly categorized as publicly owned and privately owned according to the landowners. There are six sub-types of public ownership, which are federally protected, federal, state protected, state, military, and local. Also, there are four sub-types of private ownership: non-governmental organization, private, family, and corporate. The ownership classification implies different management objectives, as well as landowner skills, budgets and interests. Datasets from other federal and state government agencies were regrouped and classified into ten sub-types to create a comprehensive dataset that includes public land ownership and privately protected easements as well as specially designated areas and associated protection level (see Table S1). The final product is a 250-m spatial resolution raster data depicting the forest ownership types and resampled to 30-m in this study to match the spatial resolution of the NLCD database.
Mapping road-delineated units
In each forest management type, the fundamental element of management practice is the management unit. In this study, we define the individual forest clusters that delineated by road-networks as "management units" and the clusters that directly derived from forest extent map as "management patches." We hereby developed two comparative methods for landscape analysis by using two sets of input data, with and without incorporating OpenStreetMap. For the method without incorporating OpenStreetMap, management patches were mapped on the forest extent map resulting from the map described in the section of Forest Extent and the Region Group tool in ESRI ArcGIS 10.X (ESRI Inc.,) to identify clusters of forest pixels that formed unique and unconnected forests.
Road-delineated forests units were mapped on the forest extent map from the section of Forest Extent after superimposing detailed road networks with the Region Group tool to identify forest clusters as units. When superimposing road maps, all road networks were converted to one-pixel segments. After one-pixel wide road segments were derived, we converted all the forest pixels to 1 and the pixels that contained at least one road segment to non-forest pixels (30-m spatial resolution) to 0.
Geospatial assessment
In SEUS, regional forest management activities were represented as disturbances as described by LANDFIRE data, such as clear cuts, fires, and thinning. Figure 2 shows the example view of LANDFIRE cumulative disturbance with delineated forest extent, and it can be clearly observed (with the Warren's Index of 0.62) that the disturbed areas and delineated unit shared boundaries and show a large degree of equivalence. The spatial coincidence has been shown to facilitate the interpretation and integration of defining regional forest management units. In this study, we propose a test based on forest management unit and the geographic corresponding forest disturbances to compare the geographical similarity between the management unit and corresponding forest disturbances. The assessment of geographic image overlap is analogous to quantifying the niche similarity of two species in two dimensions. As two-dimensional rasters, both disturbance and forest extent data can be treated as homogenous and spatial-explicit datasets.
Spatial visualization of road-delineated forest extent (road delineated units - yellow colored polygon) matching with LANDFIRE disturbance data. Warren's Index for this sub-landscape is 0.62
Testing the overlap between pairs of road-delineated forest extent with disturbance regions was compared using the similarity statistics of Warren's Index (Warren et al. 2008). The values of Warren's Index range from 0 to 1. The value of 0 means forest management units have no spatial overlap with forest disturbance areas, and 1 means all forest management units are identical to disturbance areas. The statistics of Warren's Index assume probability distribution defined over geographic space, in which the pX, i (or pY, i) denotes the spatial probability distribution of X (road-delineated forests compartments), or Y (probability distribution of forest disturbances) to cell i. In nature, Warren's equivalency index carries no biological assumptions concerning the parameters as being from any probability distributions. Spatially, we applied it into assessing road-delineated forest management units versus the areas of corresponding disturbances.
Firstly, the Hellinger distance was calculated to measure and compare the probability distance (Van der Vaart 1998):
$$ H\left({p}_X,{p}_Y\right)=\sqrt{\sum \limits_i{\left(\sqrt{p_{X,i}}-\sqrt{p_{Y,i}}\right)}^2} $$
while the Warren's statistic is:
$$ I\left({p}_X,{p}_Y\right)=1-\frac{1}{2}H\left({p}_X,{p}_Y\right) $$
To measure and quantify the similarity of differently managed management units with the corresponding disturbance areas, all the units were classified into four groups (ecological, passive, preservation and production management). A 10 km × 10 km grid was utilized to compute Warren's Index over the SEUS. The inputs of this iterative Warren's Index spatial analysis are: 1) stacked disturbance area data from LANDFIRE and 2) corresponding road-delineated forest management units. These analyses were carried out using the nivheOverlap function of dismo package in R 3.3.X (Hijmans et al. 2015).
We further tested the hypothesis of road delineated forest management units by generating neutral landscape models from two perspective as:
1) Spatially. In this study, we used the Worldwide Reference System 2 (WRS-2) row 17 path 39 (17_39 hereafter), which is one scene of Landsat Thematic Mapper (TM). As shown in Fig. 3, the forests in 17_39 are heavily disturbed and fragmented, but it also shared the larges contiguous forests in SEUS (Okefenokee National Refuge). This heterogeneous landscape consists of a mixture of natural and plantation forests, urban centers, urban and rural residential areas, and commercial and small-scale agricultural operations. Instead of using LANDFIRE disturbance data, we simulated a neutral disturbance layer by mimicking the disturbance patch shape and area called "Random NLM" (Saura and Martinez-Millán 2000; Sciaini et al. 2018). 17_39 was divided into 325 individual landscapes with the spatial resolution of 30 m (10 km × 10 km in size), which means each landscape in 17_39 contains 333 × 333 pixels (10,000 ha). We applied the neutral landscape generation algorithm in each landscape in 17_39 for 325 times and recalculated the Warren's Index for each 10 km × 10 km landscape with the simulated disturbance layer with the road networks delineated forest extent map.
Case study region of Worldwide Reference System 2 (WRSII) row 17 path 39 (17_39). As the LANDFIRE EVT product shows, it encompasses a diversity of landcover and disturbance types. Much of the conifer and conifer-hardwood land cover found outside of the large riparian area of the Okefenokee Swamp (upper central) are heavily managed, privately owned plantations and mixed agriculture/timber land use
2) Iteratively. We applied a range-based attribute filter to the SEUS road density map (Figure S2). The road densities range from 0 to 91.48 km∙km− 2, so we used the interval of 5 to randomly select a serious spatial noncorrelated 10 km × 10 km grids with a total number of 19 landscapes in SEUS (Fig. 4). For each spot, we iteratively run the Random NLM algorithm for 500 times and calculated the Warren's Index with overlaying the simulated disturbance resulted from NLM with the road networks delineated forest extent map.
Spatial distribution of 19 points with 5 km∙km− 2 intervals
Spatial assessment
We calculated the Warren's Index with overlaying the road networks delineated forest extent with LANDFIRE disturbance map on a 10 km × 10 km grid in SEUS (Fig. 5). The total number of 10 km × 10 km grids is 11,072. Figure 5 shows the probability distribution of road network delineating management unit, with an average of 0.44 and the standard deviation is 0.28. We also computed the distribution and found that most forested areas have the Warren's Index from 0.3 to 0.5. When we extracted 17_39 (Fig. 6a), shown as Fig. 6b with the mean value of 0.54 and the standard deviation of 0.17.
The probability of road networks delineating management unit in SEUS. The spatial resolution of this map is 10 km, and the variable is the value of Warren's Index calculated for the overlap of road-defined areas and disturbance areas from the LANDFIRE database
The spatial distribution map of Warren's Index overlaying the road network delineated forested extent with neutral simulated disturbance map over 17_39 area (a). (b) is the one of 10 km × 10 km simulated landscapes, (c) is the distribution of the warrens' I with the average value of 0.14 and the standard deviation of 0.08. The sub-figure shows the randomly simulated patches in a plot of 10 km by 10 km
By replotting the spatial probability of road networks delineating management units in 17_39 (Fig. 6c), we recalculated the histogram of how the Warren's Index distributed over 17_39 region after replacing the LANDFIRE data with simulated disturbance layer, as shown in Fig. 6d. The mean value of the probability distribution is 0.14 with the standard deviation of 0.082. The results show that human-derived forest disturbances can result in a non-random association with forest extent. By comparing with the LANDFIRE disturbance derived Warrans' I map (Fig. 6a), road networks make a great contribution in identifying and shaping forest patterns in the case study area 17_39, wherein SEUS reflect the management practices on the ground, and so for the road-delineated forest compartments, we hereby call them forest management units.
Road density in SEUS ranges from 0 to 91 km∙km− 2. Table 1 lists the info of all 19 10 km × 10 km grids with a variation of road densities, the mean values of 500 simulated landscapes. We compared the original Warren's Index of all 19 landscapes with the mean values calculated from simulated landscapes, and found all the simulated landscape show significant difference on each landscape with the original Warren's Index. We also plot the histogram of the landscapes to see how the simulated neutral landscape distributed (Fig. 7). A linear relationship between the Warren's Index overlaying simulated disturbances and roads with the Warren's Index overlaying LANDFIRE disturbances with roads (Figure S5) with R2 of 0.4755. It shows the evidence that roads help to shape forest patterns by statistical test and approve the hypnosis that road delineates forest management patterns.
Table 1 Spatial assessment covering the different road densities (km/km2)
Distributions of 19 simulated landscapes of SEUS (Numbers are corresponding with Fig. 4)
Forest management unit map
A total area of 5.24 × 107 ha managed forest was measured in the SEUS from the forest extent map based on NLCD. When we compared the NLCD derived forest/non-forest maps between 2006 and 2011, a strong tendency of deforestation was found on our study area (1.7 × 106 ha). The total length of the roads in SEUS is 2.26 million km, which was calculated based on the number of rasterized road network multiply the spatial resolution (30 m). The road densities in SEUS range from zero to 49.29 km∙km− 2 (Figure S2). We mapped the forest management units at regional scales, and the spatial distribution of those management units represented the diversity and heterogeneity of road-delineated forests clusters.
As the unit size increases, the number of units decreases in an approximately logarithmic manner (Fig. 8). Overall, 177,400 units occupied the SEUS with the mean size of 29.5 ha. We found that 35.94% of the forest management units represent more than 99% of the whole forest and ranged in area from 0.5 to 172,886 ha (the Okefenokee National Wildlife Refuge). The remaining 64.06% of small forest compartments, which are smaller than 0.5 ha, covers only 0.98% of the forested area.
The spatial distribution of SEUS forest management units based on unit size
The SEUS forests is dominated by management unit size ranging from 100 to 10,000 ha. The forest management unit-size class map was reclassified based on the forest management unit sizes (Fig. 9). From Fig. 9, it can be clearly seen that riparian forests stand out as large, unbroken linear features (most of the orange colored units), which cover average unit sizes from 10,000 to 100,000 ha. A characteristic feature for the southeastern forest is that relatively small and large management units locate close to each other, surrounded with small-sized units (Figs. 9 and 10). Figure 10 shows five close-ups/examples from multiple locations and landscapes. In our study area, the largest management unit is an area of the Okefenokee National Wildlife Refuge (Georgia and Florida) with the size of 172,886 ha (Figs. 9 and 10d), followed by the Atchafalaya River basin in Louisiana at 102,367 ha (Fig. 9 and 10c). Riparian forests are often undisturbed in SEUS without much road access because their soils are too wet for harvesting machinery and the trees are not as valuable. There is also a cluster of large forest management units (yellow: 1000–10,000 ha) along the Piedmont ecoregion, and the middle part of the southeastern plains. The Atchafalaya River basin in southern Louisiana is full of canals and river channels but no roads. There are many smaller units in Alabama and Mississippi and central Louisiana due to the relatively high road density in this region, so the landscape is broken up according to the road density map. Another example is Fig. 10e, Great Dismal Swamp National Wildlife Refuge, is the largest intact forest across southeastern Virginia and northeastern North Carolina and was established for protecting and managing the swamp's ecosystem (USFWS 2006).
Logarithmic size-based frequency distribution of SEUS management unit. The blue dotted line indicates the mean value of the management units within the SEUS
Examples of forest management unit distributions
To illustrate the contributions of road networks to our regional SEUS management unit map, a representative comparison set was done with and without incorporating road networks data. In Fig. 11, we overlay the histogram of the management unit (incorporating road networks) with the histogram of the management patches (without incorporating road networks. The red histogram shows the size-based frequency distribution of patches without incorporating road networks and the purple histogram illustrates the size frequency of forest management units. When refining the management units with road networks, there are 17 times more patches (management patches) compared with the map without incorporating road networks (management units).
Comparison between with and without incorporating OpenStreetMap on unit size frequency distribution
Management units under different management approaches
Road density (Figure S2) has been proposed as a broad index of roads' ecological effects in a landscape (Forman and Alexander 1998). The magnitude of average road densities ranges from 0.63 to 2.2 km∙km− 2 in all managed forests in SEUS. Preservation forest holds the lowest average road density of 0.63 km∙km− 2, and the passively managed forest has the highest average road density of 2.2 km∙km− 2. Ecological forest and production forest have the average road density of 1.86 and 1.29 km∙km− 2, respectively. One reason is the building of forest service roads and private logging roads, which obviously increase the road density in production forestland. Road density is a predictor of forest management intensity (Wendland et al. 2011), and the indicator of human interactions with forests (Forman et al. 2003). We compared the size-frequency distributions of management units with a map of different kinds of management (production, ecological, preservation, and passive management) derived independently. Preservation and production management had the largest patches, with means of 109.6 and 82.6 ha, respectively. Ecological and passively managed units averaged about half as large as 73.8 and 73.0 ha, respectively (Table 2).
Table 2 Average management unit size in each forest management type
We incorporated Warren's Index to assess quantitatively the geographic overlap between forest management units under different functional forest management types. For ecological management, the specific practice is designed to emulate the outcome of natural disturbance, which is to create an uneven-age stand structure to manage competition between and within multi-cohort stands. The distribution of ecological management units shows spatial heterogeneity with structurally complex stands. For passive management forest lands, as the passively managed forests mostly adopt many irregular shapes with blurred boundaries, and rupture of connectivity. For preservation management forest: mostly large government-managed land for multiple-purpose including watershed, wildlife, recreation and wilderness aspects. Accordingly, various practices may be applied to it such as harvest, cutting, retention cutting, thinning and prescribed fire. As shown in Table 3, Warren's Index represents the overlap between forest management units under different management types with the corresponding forest disturbance area.
Table 3 Warren's Index between different managed units and corresponding disturbances overlap with stacked LANDFIRE disturbance
The 10 km × 10 km based spatial grid analysis of Warren's Index is shown in Fig. 5 and Figure S6, S7, S8 and S9. Among all four forest management types, production forestry showed the highest probability of road-delineated management units with I = 0.50, and the passive managed forests showed the lowest probability of road-delineated management units I = 0.33. As the dominant forest: production forests, the average size of plantation management units tends to be large, have a uniform composition (Figure S9), are internally homogeneous and involve practice such as clearcutting and thinning. For preservation/wilderness, across the whole SEUS, the probability of road-delineated management unit is 0.44, with the standard deviation of 0.29, because the large areas of wildfires happened but with road network setting as firebreaks. Roads provide access and firebreaks, as the use of prescribed fires is widespread in SEUS and much of the prescribed fires are on private lands (Haines et al. 2001). In this study, we used a threshold of 50% of the similarity score although many useful criteria for establishing such thresholds have been proposed (Jimenez-Valverde and Lobo 2007). In the analysis of Table 3, the criteria were just used to show binary predictions of four differently managed forests.
Ownership representation of forest management unit
Forest management activities are important links between human and environmental factors, especially at regional scale. Forest ownership patterns also explain different types of land management practices and trajectories of land cover change (Turner et al. 1996). The aim of this part of the research is to link regional land ownership to management. We produced the SEUS ownership database (Figure S4; Table S1) by collecting the data from different sources, where the ownership was divided as Private and Public. Based on our understanding of the SEUS forest ownership, we reclassified the forest owner types into public and private with 10 sub-classes (Table 4): 1) Public: Federal protected, federal, state protected, state, military, local and NGO lands; and 2) Private: Private, family and corporate forests. Our ownership data indicate that 18.7% of the landowners are under public forest and 81.3% of private forest landowners, which covers 41.5% and 58.5% total forestland.
Table 4 Forest mean unit sizes based on different ownership types in SEUS
We can see that the type of land ownership affects units' distribution (Table 4). The special characteristics of SEUS forest ownership patterns can result in strong contrasts in management unit distribution. The major ownership types in the region are family, corporate and state, which have different management objectives. In addition, as the road networks increased, the number of small-sized parcels also shows a substantial increase.
The mean forest management units under different ownerships range from 73.2 (privately owned forests) to 115.9 ha (state protected forests) with the standard deviation of 33.4 ha. By comparing with Table 2, we found that the average unit size of military land is 74.9 ha, which is the closest to the average management unit size of ecological forestry at 73.8 ha. The land with the ownership of state protected represents the largest average management unit at 115.9 ha.
Quantifying forest management units under different management approaches is a key step to ensure that appropriate management practices and policies are in place to maintain the array of forest ecosystem services. Regionally, forest harvesting operations are conducted within road-defined boundaries. Forest management practices in SEUS are specifically the activities primarily dictated by forest harvest and the needs of management for recreation and sustainability. Roads networks in SEUS are often used for setting firebreaks and timber harvest boundaries, so there should be a spatial coincidence between road-delineated management units and disturbance. In this study, we assessed the forest management from stands to regional scale, by incorporating road networks and multi-temporal disturbance remote sensing database.
A number of conclusions can be drawn from the analysis in this study.
Road networks play a role in delineating forests from local to regional scale. By defining the individual forest clusters delineated by road networks as "forest management units" and the clusters that directly derived from forest extent map as "management patches", we mapped the forest extent map of both "units" and "patches" and compared them with treating "patches" map as background. There were 17 times more "units" than "patches" over the whole SEUS. And we also summarized the size distribution road delineated units, with units smaller than 0.5 ha comprised 64% of the counts of units, these small units altogether covered only 0.98% of the total forest area.
We quantitatively tested the probability distribution patterns by using Warren's Index of road-delineated management units and the corresponding forest disturbances area. The average probability of road-delineated management units is 0.44, and we also visualized the probabilities by setting a 10 km × 10 km grid. In SEUS, the high equivalency between the road-delineated units and the corresponding areas were found at most production forests, and large-size preserved areas (e.g. Okefenokee National Wildlife Preserve and St. Marks National Wildlife Refuge).
The combination of remote sensing data and OpenStreetMap constitutes a useful tool to monitor, characterize and quantify land cover and management unit distributions at macrosystems scale. By using the NLCD as the forest reference data and OpenStreetMap as road networks dataset, we produced the OpenStreetMap refined management unit pattern map and analyzed the spatial size distribution of forest patterns. In addition, by incorporating the OpenStreetMap, the roads are shown to play an important role in causing fragmentation of the remnant forestlands. The size frequency distribution tells us that all of the 64% of management units are small management units (< 0.5 ha) making up just 0.98% forestlands.
Our land ownership product indicates that 18.7% of public forest and 81.3% of private and industrial forestland owners, cover 41.5% and 58.5% area of total forestland, respectively. Management practices affected units are represented not only at the stand or local scale, but also will change the forest pattern dramatically at the regional scale. We provided substantial evidence that road networks occupy a substantial proportion of forest. From a forest management perspective, more road landscape area leads to less available land for trees at the macrosystems scale. On the other hand, logging roads and trails are an efficient way to manage forests, which we can see the relatively high road network densities in ecological and production forestry.
This study represents benefits so society in that future management decisions can be evaluated cross scales, taking account of both climate and disturbance regimes. More information on the effects of land ownership and forest management, combined with the detailed road network and a continental coverage land cover maps can aid in thwarting further forest fragmentation by promoting more reasonable road planning by land planners and decision makers.
The dataset and code used during the current study are available at: https://doi.org/10.6084/m9.figshare.11406612.v1
GEE:
LANDFIRE:
Landscape fire and resource management planning tools
MODIS:
Moderate resolution imaging spectroradiometer
NLCD:
National land cover database
NLM:
Neutral landscape model
OSM:
SEUS:
Southeastern U.S. Coastal Plain and Piedmont
VGI:
Volunteered geographic information
Bailey RG (2004) Identifying ecoregion boundaries. Environ Manag 34(1):S14–S26
Battaglia M, Sands PJ (1998) Process-based forest productivity models and their application in forest management. Forest Ecol Manag 102(1):13–32
Becknell JM, Desai AR, Dietze MC, Schultz CA, Starr G, Duffy PA, Franklin JF, Pourmokhtarian A, Hall J, Stoy PC, Binford MW, Boring LR, Staudhammer CL (2015) Assessing interactions among changing climate, management, and disturbance in forests: a macrosystems approach. BioScience 65(3):263–274
Bennett AF (1991) Roads, roadsides and wildlife conservation: a review. http://worldcat.org/isbn/0949324353. Accessed 22 May 2020
Breiman L (2001) Random forests. Mach Learn 45(1):5–32
Ciepluch B, Mooney P, Jacob R, Winstanley AC (2009) Using OpenStreetMap to deliver location-based environmental information in Ireland. SIGSPATIAL Special 1(3):7–22
Coffin AW (2007) From roadkill to road ecology: a review of the ecological effects of roads. J Transp Geogr 15(5):396–406
Coghlan G, Sowa R (1997) National forest road system and use. Draft rep. US Department of Agriculture, Forest Service, Engineering Staff, Washington, DC
Davis DE, Colten CE, Nelson MK, Saikku M, Allen BL (2006) Southern United States: an environmental history. ABC-CLIO
Demir M, Hasdemir M (2005) Functional planning criterion of forest road network systems according to recent forestry development and suggestion in Turkey. Am J Environ Sci 1(1):22–28
Douglass JE (1983) The potential for water yield augmentation from forest management in the eastern United States. J Am Water Res Assoc 19(3):351–358
Drever CR, Peterson G, Messier C, Bergeron Y, Flannigan M (2006) Can forest management based on natural disturbances maintain ecological resilience? Can J For Res 36(9):2285–2299
Eker M, Coban HO (2010) Impact of road network on the structure of a multifunctional forest landscape unit in southern Turkey. J Environ Biol 31(1–2):157–168
Faccoli M, Bernardinelli I (2014) Composition and elevation of spruce forests affect susceptibility to bark beetle attacks: implications for forest management. Forests 5(1):88–102
FHWA (Federal Highway Administration) (2013). Highway statistics 2013.
Forman RT, Sperling D, Bissonette JA, Clevenger AP, Cutshall CD, Dale VH, Fahrig L, France RL, Goldman CR, Heanue K, Jones J, Swanson F, Turrentine T, Winter TC (2003) Road ecology: science and solutions. Island Press, Washington
Forman RTT, Alexander LE (1998) Roads and their major ecological effects. Ann Rev Ecol Syst 29:207–231
Forman RTT, Deblinger RD (2000) The ecological road-effect zone of a Massachusetts (U.S.a.) suburban highway. Conserv Biol 14(1):36–46
Fox TR, Jokela EJ, Allen HL (2007) The development of pine plantation silviculture in the southern United States. J For 105(7):337–347
Franklin JF, Forman RTT (1987) Creating landscape patterns by forest cutting: ecological consequences and principles. Landsc Ecol 1(1):5–18
Franklin JF, Mitchell RJ, Palik BJ (2007) Natural disturbance and stand development principles for ecological forestry. Gen tech rep NRS-19. Newtown Square, PA: U.S. Department of Agriculture, Forest Service, Northern Research Station
Haines TK, Busby RL, Cleaves DA (2001) Prescribed burning in the south: trends, purpose, and barriers. South J Appl For 25(4):149–153
Hanson C (2010) Southern forests for the future. https://www.wri.org/publication/southern-forests-future. Accessed 22 May 2020
Haynes RW (2002) Forest management in the 21st century: changing numbers, changing context. J For 100(2):38–43
Hijmans RJ, Phillips S, Leathwick J, Elith J, Hijmans MRJ (2015) Package 'dismo'. https://cran.r-project.org/web/packages/dismo/dismo.pdf. Accessed 22 May 2020
Hou Z, Xu Q, Nuutinen T, Tokola T (2013) Extraction of remote sensing-based forest management units in tropical forests. Remote Sens Environ 130:1–10
Jiménez-Valverde A, Lobo J M (2007) Threshold criteria for conversion of probability of species presence to either–or presence–absence. Acta Ecol 31(3):361–369
Jin S, Yang L, Danielson P, Homer C, Fry J, Xian G (2013) A comprehensive change detection method for updating the National Land Cover Database to circa 2011. Remote Sens Environ 132:159–175
Josephson H (1989) A history of forestry research in the southern United States. Notes 1462:78 http://www.srs.fs.usda.gov/pubs/3175. Accessed 22 May 2020
Kurz WA, Dymond CC, Stinson G, Rampley GJ, Neilson ET, Carroll AL, Ebata T, Safranyik L (2008) Mountain pine beetle and forest carbon feedback to climate change. Nature 452(7190):987–990
Marsik M, Staub C, Kleindl W, Fu CS, Hall J, Yang D, Stevens FR, Binford M (2017) PANGAEA. doi: https://doi.org/10.1594/PANGAEA.880304. Accessed 22 May 2020
McGarigal K, Romme WH, Crist M, Roworth E (2001) Cumulative effects of roads and logging on landscape structure in the San Juan Mountains, Colorado (USA). Landsc Ecol 16:327–349
Mitchener LJ, Parker AJ (2005) Climate, lightning, and wildfire in the national forests of the southeastern United States: 1989-1998. Phys Geogr 26(2):147–162
Mooney P, Corcoran P, Winstanley AC (2010) Towards quality metrics for OpenStreetMap. Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems. ACM, San Jose 514–517
Nelson JD, Finn ST (1991) The influence of cut-block size and adjacency rules on harvest levels and road networks. Can J For Res 21(5):595–600
Netherer S, Nopp-Mayr U (2005) Predisposition assessment systems (PAS) as supportive tools in forest management—rating of site and stand-related hazards of bark beetle infestation in the high Tatra Mountains as an example for system application and verification. Forest Ecol Manag 207(1):99–107
Niemelä J (1999) Management in relation to disturbance in the boreal forest. Forest Ecol Manag 115(2):127–134
O'Neill RV, Hunsaker CT, Timmins SP, Jackson BL, Jones KB, Riitters KH, Wickham JD (1996) Scale problems in reporting landscape pattern at the regional scale. Landsc Ecol 11(3):169–180
Ordway-Swisher Biological Station (2015) OSBS User Guide V. http://www.ordway-swisher.ufl.edu/docs/OSBS_User_Guide.pdf. Accessed 22 May 2020
Oswalt SN, Smith WB, Miles PD et al (2014) Forest resources of the United States, 2012: a technical document supporting the Forest Service 2010 update of the RPA assessment. Gen tech rep WO-91. Washington, DC: U.S. Department of Agriculture, Forest Service, Washington Office, p 218
Outcalt KW (2000) The longleaf pine ecosystem of the south. Native Plants J 1(1):42–53
Ries L, Fletcher RJ Jr, Battin J, Sisk TD (2004) Ecological responses to habitat edges: mechanisms, models, and variability explained. Ann Rev Ecol Evol Syst 35:491–522
Riitters KH, Wickham JD, Coulston JW (2004) Use of road maps in national assessments of forest fragmentation in the United States. Ecol Soc 9(2):13
Robertson KM, Ostertag TE(2007) Effects of land use on fuel characteristics and fire behavior in pinelands of Southwest Georgia. Proceedings of the 23rd tall timbers fire ecology conference: fire in Grassland & Shrubland Ecosystems, San Diego, California
Rollins MG (2009) LANDFIRE: a nationally consistent vegetation, wildland fire, and fuel assessment. Int J Wildland Fire 18(3):235–249
Saura S, Martinez-Millán J (2000) Landscape patterns simulation with a modified random clusters method. Landsc Ecol 15(7):661–678
Sciaini M, Fritsch M, Scherer C, Simpkins CE (2018) NLMR and landscapetools: An integrated environment for simulating and modifying neutral landscape models in R bioRxiv 307306
Siry JP (2002) Intensive timber management practices. Southern Forest Res Assess 14:327–340
Stanturf JA, Kellison RC, Broerman FS, Jones SB (2003). Productivity of southern pine plantations: where are we and how did we get here?. J Fores 101(3):26–31
Stephens SL, McIver JD, Boerner REJ, Fettig CJ, Fontaine JB, Hartsough BR, Kennedy PL, Schwilk DW (2012) The effects of forest fuel-reduction treatments in the United States. BioScience 62(6):549–560
Turner MG, Wear DN, Flamm RO (1996) Land ownership and land-cover change in the southern Appalachian highlands and the Olympic peninsula. Ecol Appl 6(4):1150–1172
TIGER Cartographic Boundary – Urban Areas, prepared by the U.S. Census Bureau (2015) Available from https://www.census.gov/geo/mapsdatadata/kml/kml_ua.html.
United States Department of Agriculture Forest Service (1999) Final environmental impact statement for the land and resource management plans for National Forests in Florida. https://www.fs.usda.gov/Internet/FSE_DOCUMENTS/fseprd500375.pdf. Accessed 22 May 2020
US Fish, Wildlife Service (2006) Great Dismal Swamp National Wildlife Refuge and Nansemond National Wildlife Refuge Final Comprehensive Conservation Plan. http://www.fws.gov/uploadedFiles/Region_5/NWRS/South_Zone/Great_Dismal_Swamp_Complex/Great_Dismal_Swamp/FinalCCP_GDS.pdf. Accessed 22 May 2020
Van der Vaart AW (1998) Asymptotic statistics (Vol. 3). Cambridge University Press, UK
Verbesselt J, Hyndman R, Zeileis A, Culvenor D (2010) Phenological change detection while accounting for abrupt and gradual trends in satellite image time series. Remote Sensing Environ 114(12):2970–2980
Waldrop TA, White DL, Jones SM (1992) Fire regimes for pine-grassland communities in the southeastern United States. Forest Ecol Manag 47(1–4):195–210
Warren DL, Glor RE, Turelli M (2008) Environmental niche equivalency versus conservatism: quantitative approaches to niche evolution. Evolution 62(11):2868–2883
Wear DN, Greis J (2002) Southern forest resource assessment: summary of findings. J For 100(7):6–14
Wear DN, Greis JG (2012) The southern forest futures project: summary report. Gen tech rep SRS-GTR-168. USDA-Forest Service, Southern Research Station, Asheville, p 54
Wear DN, Greis JG (2013) The southern forest futures project: technical report. Gen tech rep SRS-GTR-178. USDA-Forest Service, Southern Research Station, Asheville, p 542
Wendland KJ, Lewis DJ, Alix-Garcia J, Ozdogan M, Baumann M, Radeloff VC (2011) Regional-and district-level drivers of timber harvesting in European Russia after the collapse of the Soviet Union. Glob Environ Chang 21(4):1290–1300
Zaccarelli N, Li BL, Petrosillo I, Zurlini G (2013) Order and disorder in ecological time-series: introducing normalized spectral entropy. Ecol Indic 28:22–30
Zhang D, Butler BJ, Nagubadi RV (2012) Institutional timberland ownership in the US south: magnitude, location, dynamics, and management. J For 110(7):355–361
Zielstra D, Hochmair HH (2011) Digital street data: Free versus proprietary. GIM Int 25(7):29–33
Acknowledgement is made of the assistance of Dr. Michael Binford and Dr. Peter Waylen, Department of Geography, University of Florida, for English writing and reviewing, for suggestions and discussion. Funding for this research was provided by the National Science Foundation Macrosystems Biology Program Grant EF #1241860.
We acknowledge funding from the Macrosystems Biology Program Grant EF #1241860 from United States National Science Foundation (NSF).
Wyoming Geographic Information Science Center, University of Wyoming, 1000E. University Ave, Laramie, Wyoming, 82071, USA
Di Yang
Department of Geography, University of Florida, 330 Newell Dr, Gainesville, Florida, 32611, USA
Di Yang & Chiung-Shiuan Fu
Chiung-Shiuan Fu
YD and FC conceptualized the idea for the study, FC generated the SEUS forest ownership database; YD performed data analysis and led the writing of the manuscript; FC critically reviewed the data analysis, and contributed substantially to the writing. Both authors read and approved the final manuscript.
Correspondence to Di Yang.
40663_2021_289_MOESM1_ESM.pdf
Yang, D., Fu, CS. Mapping regional forest management units: a road-based framework in Southeastern Coastal Plain and Piedmont. For. Ecosyst. 8, 17 (2021). https://doi.org/10.1186/s40663-021-00289-w
Forest management unit
Warren's index
Neutral landscape
Road ecology | CommonCrawl |
Searches for relativistic magnetic monopoles in IceCube
Regular Article - Experimental Physics
M. G. Aartsen3,
K. Abraham33,
M. Ackermann49,
J. Adams16,
J. A. Aguilar13,
M. Ahlers30,
M. Ahrens40,
D. Altmann24,
T. Anderson46,
I. Ansseau13,
M. Archinger31,
C. Arguelles30,
T. C. Arlen46,
J. Auffenberg1,
X. Bai38,
S. W. Barwick27,
V. Baum31,
R. Bay8,
J. J. Beatty18,19,
J. Becker Tjus11,
K.-H. Becker48,
E. Beiser30,
M. L. Benabderrahmane2,
P. Berghaus49,
D. Berley17,
E. Bernardini49,
A. Bernhard33,
D. Z. Besson28,
G. Binder8,9,
D. Bindig48,
M. Bissok1,
E. Blaufuss17,
J. Blumenthal1,
D. J. Boersma47,
C. Bohm40,
M. Börner21,
F. Bos11,
D. Bose42,
S. Böser31,
O. Botner47,
J. Braun30,
L. Brayeur14,
H.-P. Bretz49,
N. Buzinsky23,
J. Casey6,
M. Casier14,
E. Cheung17,
D. Chirkin30,
A. Christov25,
K. Clark43,
L. Classen24,
S. Coenders33,
D. F. Cowen45,46,
A. H. Cruz Silva49,
J. Daughhetee6,
J. C. Davis18,
M. Day30,
J. P. A. M. de André22,
C. De Clercq14,
E. del Pino Rosendo31,
H. Dembinski34,
S. De Ridder26,
P. Desiati30,
K. D. de Vries14,
G. de Wasseige14,
M. de With10,
T. DeYoung22,
J. C. Díaz-Vélez30,
V. di Lorenzo31,
J. P. Dumm40,
M. Dunkman46,
B. Eberhardt31,
T. Ehrhardt31,
B. Eichmann11,
S. Euler47,
P. A. Evenson34,
S. Fahey30,
A. R. Fazely7,
J. Feintzeig30,
J. Felde17,
K. Filimonov8,
C. Finley40,
T. Fischer-Wasels48,
S. Flis40,
C.-C. Fösig31,
T. Fuchs21,
T. K. Gaisser34,
R. Gaior15,
J. Gallagher29,
L. Gerhardt8,9,
K. Ghorbani30,
D. Gier1,
L. Gladstone30,
M. Glagla1,
T. Glüsenkamp49,
A. Goldschmidt9,
G. Golup14,
J. G. Gonzalez34,
D. Góra49,
D. Grant23,
Z. Griffith30,
A. Groß33,
C. Ha8,9,
C. Haack1,
A. Haj Ismail26,
A. Hallgren47,
F. Halzen30,
E. Hansen20,
B. Hansmann1,
K. Hanson30,
D. Hebecker10,
D. Heereman13,
K. Helbing48,
R. Hellauer17,
S. Hickford48,
J. Hignight22,
G. C. Hill3,
K. D. Hoffman17,
R. Hoffmann48,
K. Holzapfel33,
A. Homeier12,
K. Hoshina30,
F. Huang46,
M. Huber33,
W. Huelsnitz17,
P. O. Hulth40,
K. Hultqvist40,
S. In42,
A. Ishihara15,
E. Jacobi49,
G. S. Japaridze5,
M. Jeong42,
K. Jero30,
M. Jurkovic33,
A. Kappes24,
T. Karg49,
A. Karle30,
M. Kauer30,35,
A. Keivani46,
J. L. Kelley30,
J. Kemp1,
A. Kheirandish30,
J. Kiryluk41,
J. Kläs48,
S. R. Klein8,9,
G. Kohnen32,
R. Koirala34,
H. Kolanoski10,
R. Konietz1,
L. Köpke31,
C. Kopper23,
S. Kopper48,
D. J. Koskinen20,
M. Kowalski10,49,
K. Krings33,
G. Kroll31,
M. Kroll11,
G. Krückl31,
J. Kunnen14,
N. Kurahashi37,
T. Kuwabara15,
M. Labare26,
J. L. Lanfranchi46,
M. J. Larson20,
M. Lesiak-Bzdak41,
M. Leuermann1,
J. Leuner1,
L. Lu15,
J. Lünemann14,
J. Madsen39,
G. Maggi14,
K. B. M. Mahn22,
M. Mandelartz11,
R. Maruyama35,
K. Mase15,
H. S. Matis9,
R. Maunu17,
F. McNally30,
K. Meagher13,
M. Medici20,
A. Meli26,
T. Menne21,
G. Merino30,
T. Meures13,
S. Miarecki8,9,
E. Middell49,
L. Mohrmann49,
T. Montaruli25,
R. Morse30,
R. Nahnhauer49,
U. Naumann48,
G. Neer22,
H. Niederhausen41,
S. C. Nowicki23,
D. R. Nygren9,
A. Obertacke Pollmann48,
A. Olivas17,
A. Omairat48,
A. O'Murchadha13,
T. Palczewski44,
H. Pandya34,
D. V. Pankova46,
L. Paul1,
J. A. Pepper44,
C. Pérez de los Heros47,
C. Pfendner18,
D. Pieloth21,
E. Pinat13,
J. Posselt48,
P. B. Price8,
G. T. Przybylski9,
J. Pütz1,
M. Quinnan46,
C. Raab13,
L. Rädel1,
M. Rameez25,
K. Rawlins4,
R. Reimann1,
M. Relich15,
E. Resconi33,
W. Rhode21,
M. Richman37,
S. Richter30,
B. Riedel23,
S. Robertson3,
M. Rongen1,
C. Rott42,
T. Ruhe21,
D. Ryckbosch26,
L. Sabbatini30,
H.-G. Sander31,
A. Sandrock21,
J. Sandroos31,
S. Sarkar20,36,
K. Schatto31,
F. Scheriau21,
M. Schimp1,
T. Schmidt17,
M. Schmitz21,
S. Schoenen1,
S. Schöneberg11,
A. Schönwald49,
L. Schulte12,
L. Schumacher1,
D. Seckel34,
S. Seunarine39,
D. Soldin48,
M. Song17,
G. M. Spiczak39,
C. Spiering49,
M. Stahlberg1,
M. Stamatikos18,
T. Stanev34,
A. Stasik49,
A. Steuer31,
T. Stezelberger9,
R. G. Stokstad9,
A. Stößl49,
R. Ström47,
N. L. Strotjohann49,
G. W. Sullivan17,
M. Sutherland18,
H. Taavola47,
I. Taboada6,
J. Tatar8,9,
S. Ter-Antonyan7,
A. Terliuk49,
G. Tešić46,
S. Tilav34,
P. A. Toale44,
M. N. Tobin30,
S. Toscano14,
D. Tosi30,
M. Tselengidou24,
A. Turcati33,
E. Unger47,
M. Usner49,
S. Vallecorsa25,
J. Vandenbroucke30,
N. van Eijndhoven14,
S. Vanheule26,
J. van Santen49,
J. Veenkamp33,
M. Vehring1,
M. Voge12,
M. Vraeghe26,
C. Walck40,
A. Wallace3,
M. Wallraff1,
N. Wandkowsky30,
Ch. Weaver23,
C. Wendt30,
S. Westerhoff30,
B. J. Whelan3,
K. Wiebe31,
C. H. Wiebusch1,
L. Wille30,
D. R. Williams44,
H. Wissing17,
M. Wolf40,
T. R. Wood23,
K. Woschnagg8,
D. L. Xu30,
X. W. Xu7,
Y. Xu41,
J. P. Yanez49,
G. Yodh27,
S. Yoshida15 &
M. Zoll40
The European Physical Journal C volume 76, Article number: 133 (2016) Cite this article
A preprint version of the article is available at arXiv.
Various extensions of the Standard Model motivate the existence of stable magnetic monopoles that could have been created during an early high-energy epoch of the Universe. These primordial magnetic monopoles would be gradually accelerated by cosmic magnetic fields and could reach high velocities that make them visible in Cherenkov detectors such as IceCube. Equivalently to electrically charged particles, magnetic monopoles produce direct and indirect Cherenkov light while traversing through matter at relativistic velocities. This paper describes searches for relativistic (\(v\ge 0.76\;c\)) and mildly relativistic (\(v\ge 0.51\;c\)) monopoles, each using one year of data taken in 2008/2009 and 2011/2012, respectively. No monopole candidate was detected. For a velocity above \(0.51 \; c\) the monopole flux is constrained down to a level of \(1.55 \times 10^{-18} \; \text {cm}^{-2}\; \text {s}^{-1} \text {sr}^{-1}\). This is an improvement of almost two orders of magnitude over previous limits.
In Grand Unified Theories (GUTs) the existence of magnetic monopoles follows from general principles [1, 2]. Such a theory is defined by a non-abelian gauge group that is spontaneously broken at a high energy to the Standard Model of particle physics [3]. The condition that the broken symmetry contains the electromagnetic gauge group \(\mathrm {U(1)_\mathrm{EM}}\) is sufficient for the existence of magnetic monopoles [4]. Under these conditions the monopole is predicted to carry a magnetic charge g governed by Dirac's quantization condition [5]
$$\begin{aligned} g=n \cdot g_D = n \cdot \frac{e}{2\alpha } \end{aligned}$$
where n is an integer, \(g_D\) is the elemental magnetic charge or Dirac charge, \(\alpha \) is the fine structure constant, and e is the elemental electric charge.
In a given GUT model the monopole mass can be estimated by the unification scale \(\Lambda _{\text {GUT}}\) and the corresponding value of the running coupling constant \(\alpha _{\text {GUT}}\) as \(M c^2 \gtrsim {\Lambda _{\text {GUT}}}/{\alpha _{\text {GUT}}}\). Depending on details of the GUT model, the monopole mass can range from 10\(^7\) to \(10^{17}\, \text {GeV}/c^2\) [6, 7]. In any case, GUT monopoles are too heavy to be produced in any existing or foreseeable accelerator.
After production in the very early hot universe, their relic abundance is expected to have been exponentially diluted during inflation. However, monopoles associated with the breaking of intermediate scale gauge symmetries may have been produced in the late stages of inflation and reheating in some models [8, 9]. There is thus no robust theoretical prediction of monopole parameters such as mass and flux, nevertheless an experimental detection of a monopole today would be of fundamental significance.
In this paper we present results for monopole searches with the IceCube Neutrino telescope covering a large velocity range. Due to the different light-emitting mechanisms at play, we present two analyses, each optimized according to their velocity range: highly relativistic monopoles with \(v\ge 0.76\,c\) and mildly relativistic monopoles with \(v\ge 0.4\,c\). The highly relativistic monopole analysis was performed with IceCube in its 40-string configuration while the mildly relativistic monopole analysis uses the complete 86-string detector.
The paper is organized as follows. In Sect. 2 we introduce the neutrino detector IceCube and describe in Sect. 3 the methods to detect magnetic monopoles with Cherenkov telescopes. We describe the simulation of magnetic monopoles in Sect. 4. The analyses for highly and mildly relativistic monopoles use different analysis schemes which are described in Sects. 5 and 6. The result of both analyses and an outlook is finally shown in Sects. 7–9.
A top view of the IceCube array. The IC40 configuration consists of all strings in the upper gray shaded area. After completion in the end of 2010, IceCube consists of all 86 strings, called the IC86 configuration. DeepCore strings were excluded in the presented analyses
The IceCube Neutrino Observatory is located at the geographic South Pole and consists of an in-ice array, IceCube [10], and a surface air shower array, IceTop [11], dedicated to neutrino and cosmic ray research, respectively. An aerial sketch of the detector layout is shown in Fig. 1.
IceCube consists of 86 strings with 60 digital optical modules (DOMs) each, deployed at depths between 1450 and \(2450\,\text {m}\), instrumenting a total volume of one cubic kilometer. Each DOM contains a \(25\;\text {cm}\) Hamamatsu photomultiplier tube (PMT) and electronics to read out and digitize the analog signal from the PMT [12]. The strings form a hexagonal grid with typical inter-string separation of \(125\,\text {m}\) and vertical DOM separation of \(17\,\text {m}\), except for six strings in the middle of the array that are more densely instrumented (with higher efficiency PMTs) and deployed closer together. These strings constitute the inner detector, DeepCore [13]. Construction of the IceCube detector started in December 2004 and was finished in December 2010, but the detector took data during construction. Specifically in this paper, we present results from two analyses, one performed with one year of data taken during 2008/2009, when the detector consisted of 40 strings, called IC40, and another analysis with data taken during 2011/2012 using the complete detector, called IC86.
IceCube uses natural ice both as target and as radiator. The analysis in the IC40 configuration of highly relativistic monopoles uses a six-parameter ice model [14] which describes the depth-dependent extrapolation of measurements of scattering and absorption valid for a wavelength of \(400\,\text {nm}\). The IC86 analysis of mildly relativistic monopoles uses an improved ice model which is based on additional measurements and accounts for different wavelengths [15].
Each DOM transmitted digitized PMT waveforms to the surface. The number of photons and their arrival times were then extracted from these waveforms. The detector is triggered when a DOM and its next or next-to-nearest DOMs record a hit within a \(1\, \upmu \text {s}\) window. Then all hits in the detector within a window of \(10\, \upmu \text {s}\) will be read-out and combined into one event [16]. A series of data filters are run on-site in order to select potentially interesting events for further analysis, reducing at the same time the amount of data to be transferred via satellite. For both analyses presented here, a filter selecting events with a high number of photo-electrons (\(>\)650 in the highly relativistic analysis and \(>\)1000 in the mildly relativistic analysis) were used. In addition filters selecting up-going track like events are used in the mildly relativistic analysis.
After the events have been sent to the IceCube's computer farm, they undergo some standard processing, such as the removal of hits which are likely caused by noise and basic reconstruction of single particle tracks via the LineFit algorithm [17]. This reconstruction is based on a 4-dimensional (position plus time) least-square fit which yields an estimated direction and velocity for an event.
The analyses are performed in a blind way by optimizing the cuts to select a possible monopole signal on simulation and one tenth of the data sample (the burn sample). The remaining data is kept untouched until the analysis procedure is fixed [18]. In the highly relativistic analysis the burn sample consists of all events recorded in August of 2008. In the mildly relativistic analysis the burn sample consists of every 10th 8-h-run in 2011/2012.
Monopole signatures
Magnetic monopoles can gain kinetic energy through acceleration in magnetic fields. This acceleration follows from a generalized Lorentz force law [20] and is analogous to the acceleration of electric charges in electric fields. The kinetic energy gained by a monopole of charge \(g_D\) traversing a magnetic field \(B\) with coherence length \(L\) is \(E \sim g_D BL\,\) [7]. This gives a gain of up to \(10^{14}\,\text {GeV}\) of kinetic energy in intergalactic magnetic fields to reach relativistic velocities. At such high kinetic energies magnetic monopoles can pass through the Earth while still having relativistic velocities when reaching the IceCube detector.
In the monopole velocity range considered in these analyses, \(v \ge 0.4\,c\) at the detector, three processes generate detectable light: direct Cherenkov emission by the monopole itself, indirect Cherenkov emission from ejected \(\delta \)-electrons and luminescence. Stochastical energy losses, such as pair production and photonuclear reactions, are neglected because they just occur at ultra-relativistic velocities.
An electric charge e induces the production of Cherenkov light when its velocity v exceeds the Cherenkov threshold \(v_C=c/n_P\approx 0.76\,c\) where \(n_P\) is the refraction index of ice. A magnetic charge g moving with a velocity \(\beta =v/c\) produces an electrical field whose strength is proportional to the particle's velocity and charge. At velocities above \(v_C\), Cherenkov light is produced analogous to the production by electrical charges [21] in an angle \(\theta \) of
$$\begin{aligned} \cos \theta = \frac{1}{n_P\,\beta } \end{aligned}$$
The number of Cherenkov photons per unit path length x and wavelength \(\lambda \) emitted by a monopole with one magnetic charge \(g=g_D\) can be described by the usual Frank-Tamm formula [21] for a particle with effective charge \(Ze \rightarrow g_D n_P\) [22]
$$\begin{aligned} \frac{d^2 N_{\gamma }}{dx d\lambda } = \frac{2 \pi \alpha }{\lambda ^2} \left( \frac{g_D n_P}{e} \right) ^2 \left( 1-\frac{1}{\beta ^2 n_P^2} \right) \end{aligned}$$
Thus, a minimally charged monopole generates \((g_D n_P/e)^2\approx 8200\) times more Cherenkov radiation in ice compared to an electrically charged particle with the same velocity. This is shown in Fig. 2.
In addition to this effect, a (mildly) relativistic monopole knocks electrons off their binding with an atom. These high-energy \(\delta \)-electrons can have velocities above the Cherenkov threshold. For the production of \(\delta \)-electrons the differential cross-section of Kasama, Yang and Goldhaber (KYG) is used that allows to calculate the energy transfer of the monopole to the \(\delta \)-electrons and therefore the resulting output of indirect Cherenkov light [23, 24]. The KYG cross section was calculated using QED, particularly dealing with the monopole's vector potential and its singularity [23]. Cross sections derived prior to KYG, such as the so-called Mott cross section [25–27], are only semi-classical approximations because the mathematical tools had not been developed by then. Thus, in this work the state-of-the-art KYG cross section is used to derive the light yield. The number of photons derived with the KYG and Mott cross section are shown in Fig. 2. Above the Cherenkov threshold indirect Cherenkov light is negligible for the total light yield.
Number of photons per cm produced by a muon (black), a monopole by direct Cherenkov light (blue), and monopoles by \(\delta \)-electrons. The photon yield per indirect Cherenkov light is shown using the KYG (red solid) and, for comparison, the Mott (red dotted) cross section, used in one earlier monopole analysis [19]. Light of wavelengths from 300 to \(600\,\text {nm}\) is considered here, covering the DOM acceptance of IceCube [15]
Using the KYG cross section the energy loss of magnetic monopoles per unit path length dE / dx can be calculated [28]
$$\begin{aligned} \frac{dE}{dx}= & {} \frac{4\pi N_e g_D^2 e^2}{m_e c^2} \left[ \ln {\frac{2 m_e c^2 \beta ^2 \gamma ^2}{I}}+\frac{K(g_D)}{2}\right. \nonumber \\&\left. -\frac{\delta +1}{2}-B(g_D) \right] \end{aligned}$$
where \(N_e\) is the electron density, \(m_e\) is the electron mass, \(\gamma \) is the Lorentz factor of the monopole, I is the mean ionization potential, \(K(g_D)\) is the QED correction derived from the KYG cross section, \(B(g_D)\) is the Bloch correction and \(\delta \) is the density-effect correction [29].
Luminescence is the third process which may be considered in the velocity range. It has been shown that pure ice exposed to ionizing radiation emits luminescence light [30, 31]. The measured time distribution of luminescence light is fit well by several overlapping decay times which hints at several different excitation and de-excitation mechanisms [32]. The most prominent wavelength peaks are within the DOM acceptance of about 300–600 nm [15, 32]. The mechanisms are highly dependent on temperature and ice structure. Extrapolating the latest measurements of luminescence light \(dN_{\gamma }/dE\) [32, 33], the brightness \( dN_\gamma / dx \)
$$\begin{aligned} \frac{dN_\gamma }{dx}=\frac{dN_\gamma }{dE} \cdot \frac{dE}{dx} \end{aligned}$$
could be at the edge of IceCube's sensitivity where the energy loss is calculated with Eq. 4. This means that it would not be dominant above \(0.5\, c\). The resulting brightness is almost constant for a wide velocity range from 0.1 to \(0.95\, c\). Depending on the actual brightness, luminescence light could be a promising method to detect monopoles with lower velocities. Since measurements of \(dN_\gamma /dE\) are still to be done for the parameters given in IceCube, luminescence has to be neglected in the presented analyses which is a conservative approach leading to lower limits.
The simulation of an IceCube event comprises several steps. First, a particle is generated, i.e. given its start position, direction and velocity. Then it is propagated, taking into account decay and interaction probabilities, and propagating all secondary particles as well. When the particle is close to the detector, the Cherenkov light is generated and the photons are propagated through the ice accounting for its properties. Finally the response of the PMT and DOM electronics is simulated including the generation of noise and the triggering and filtering of an event (see Sect. 2). From the photon propagation onwards, the simulation is handled identically for background and monopole signal. However the photon propagation is treated differently in the two analyses presented below due to improved ice description and photon propagation software available for the latter analysis.
Background generation and propagation
The background of a monopole search consists of all other known particles which are detectable by IceCube. The most abundant background are muons or muon bundles produced in air showers caused by cosmic rays. These were modeled using the cosmic ray models Polygonato [34] for the highly relativistic and GaisserH3a [35] for the mildly relativistic analysis.
The majority of neutrino induced events are caused by neutrinos created in the atmosphere. Conventional atmospheric neutrinos, produced by the decay of charged pions and kaons, are dominating the neutrino rate from the GeV to the TeV range [36]. Prompt neutrinos, which originate from the decay of heavier mesons, i.e. containing a charm quark, are strongly suppressed at these energies [37].
Astrophysical neutrinos, which are the primary objective of IceCube, have only recently been found [38, 39]. For this reason they are only taken into account as a background in the mildly relativistic analysis, using the fit result for the astrophysical flux from Ref. [39].
Coincidences of all background signatures are also taken into account.
Signal generation and propagation
Since the theoretical mass range for magnetic monopoles is broad (see Sect. 1), and the Cherenkov emission is independent of the mass, signal simulation is focused simply on a benchmark monopole mass of \(10^{11} \; \text {GeV}\) without limiting generality. Just the ability to reach the detector after passing through the Earth depends on the mass predicted by a monopole model. The parameter range for monopoles producing a recordable light emission inside IceCube is governed by the velocities needed to produce (indirect) Cherenkov light.
The starting points of the simulated monopole tracks are generated uniformly distributed around the center of the completed detector and pointing towards the detector. For the highly relativistic analysis the simulation could be run at specific monopole velocities only and so the characteristic velocities 0.76, 0.8, 0.9 and \(0.995\, c\), were chosen.
Due to new software, described in the next sub-section, in the simulation for the mildly relativistic analysis the monopoles can be given an arbitrary characteristic velocity v below \(0.99\, c\). The light yield from indirect Cherenkov light fades out below \(0.5\, c\). To account for the smallest detectable velocities the lower velocity limit was set to \(0.4\, c\) in simulation.
The simulation also accounts for monopole deceleration via energy loss. This information is needed to simulate the light output.
Light propagation
In the highly relativistic analysis the photons from direct Cherenkov light are propagated using Photonics [40]. A more recent and GPU-enabled software propagating light in IceCube is PPC [15] which is used in the mildly relativistic analysis. The generation of direct Cherenkov light, following Eq. 3, was implemented into PPC in addition to the variable Cherenkov cone angle (Eq. 2). For indirect Cherenkov light a parametrization of the distribution in Fig. 2 is used.
Both simulation procedures are consistent with each other and deliver a signal with the following topology: through-going tracks, originating from all directions, with constant velocities and brightness inside the detector volume, see Fig. 3. All these properties are used to discriminate the monopole signal from the background in IceCube.
Highly relativistic analysis
This analysis covers the velocities above the Cherenkov threshold \(v_C\approx 0.76\,c\) and it is based on the IC40 data recorded from May 2008 to May 2009. This comprises about 346 days of live-time or 316 days without the burn sample. The live-time is the recording time for clean data. The analysis for the IC40 data follows the same conceptual design as a previous analysis developed for the IC22 data [41], focusing on a simple and easy to interpret set of variables.
Event view of a simulated magnetic monopole with a velocity of \(0.83\, c\) using both direct and indirect Cherenkov light. The monopole track is created with a zenith angle of about \(170^{\circ }\) in upward direction. The position of the IceCube DOMs are shown with gray spheres. Hit DOMs are visualized with colored spheres. Their size is scaled with the number of recorded photons. The color denotes the time development from red to blue. The red line shows the reconstructed track which agrees with the true direction
The highly relativistic analysis uses spatial and timing information from the following sources: all DOMs, fulfilling the next or next-to-nearest neighbor condition (described in Sect. 2), and DOMs that fall into the topmost 10 % of the collected-charge distribution for that event which are supposed to record less scattered photons. This selection allows definition of variables that benefit from either large statistics or precise timing information.
The relative brightness after the first two cuts on \(n_{\text {DOM}}\) and \(n_{\text {NPE}}/n_{\text {DOM}}\). The expected distributions from monopoles (MP) of different velocities is shown for comparison
Event selection
The IC40 analysis selects events based on their relative brightness, arrival direction, and velocity. Some additional variables are used to identify and reject events with poor track reconstruction quality. The relative brightness is defined as the average number of photo-electrons per DOM contributing to the event. This variable has more dynamic range compared with the number of hit DOMs. The distribution of this variable after applying the first two quality cuts, described in Table 3, is shown in Fig. 4. Each event selection step up to the final level is optimized to minimize the background passing rate while keeping high signal efficiency, see Table 3.
The final event selection level aims to remove the bulk of the remaining background, mostly consisting of downward going atmospheric muon bundles. However, the dataset is first split in two mutually exclusive subsets with low and high brightness. This is done in order to isolate a well known discrepancy between experimental and simulated data in the direction distribution near the horizon which is caused by deficiencies in simulating air shower muons at high inclinations [42].
Since attenuation is stronger at large zenith angles \(\theta _z\), the brightness of the resulting events is reduced and the discrepancy is dominantly located in the low brightness subset. Only simulated monopoles with \(v = 0.76\;c\) significantly populate this subset. The final selection criterion for the low brightness subset is \(\cos \theta _z < -0.2\) where \(\theta _z\) is the reconstructed arrival angle with respect to the zenith. For the high brightness subset a 2-dimensional selection criterion is used as shown in Fig. 5. The two variables are the relative brightness described above and the cosine of the arrival angle. Above the horizon (\(\cos \theta _z > 0\)), where most of the background is located, the selection threshold increases linearly with increasing \(\cos \theta _z\). Below the horizon the selection has no directional dependence and values of both ranges coincide at \(\cos \theta _z = 0\). The optimization method applied here is the model rejection potential (MRP) method described in [41].
Comparison of signal distribution (top) vs. atmospheric muon background (bottom) for the final cut. The signal is the composed out of the sum of monopoles with \(\beta = 0.995, 0.9, 0.8\)
Uncertainties and flux calculation
Analogous to the optimization of the final event selection level, limits on the monopole flux are calculated using a MRP method. Due to the blind approach of the analysis these are derived from Monte Carlo simulations, which contain three types of uncertainties: (1) Theoretical uncertainties in the simulated models, (2) Uncertainties in the detector response, and (3) Statistical uncertainties.
For a given monopole-velocity the limit then follows from
$$\begin{aligned} \Phi _{\alpha } = \mathrm {MRP} \cdot \Phi _0 = \frac{\bar{\mu }_{\alpha }(n_{\mathrm {obs}})}{\bar{n}_{\mathrm {s}}} \Phi _0 \end{aligned}$$
where \(\bar{\mu }_{\alpha }\) is an average Feldman-Cousins (FC) upper limit with confidence \(\alpha \), which depends on the number of observed events \(n_{\mathrm {obs}}\). Similarly, though derived from simulation, \(\bar{n}_{\mathrm {s}}\) is the average expected number of observed signal events assuming a flux \(\Phi _0\) of magnetic monopoles. Since \(\bar{n}_{\mathrm {s}}\) is proportional to \(\Phi _0\) the final result is independent of whichever initial flux is chosen.
The averages can be independently expressed as weighted sums over values of \(\mu _{\alpha }(n_{\mathrm {obs}}, n_{\mathrm {bg}})\) and \(n_{\mathrm {s}}\) respectively with the FC upper limit here also depending on the number of expected background events \(n_{\mathrm {bg}}\) obtained from simulation. The weights are then the probabilities for observing a particular value for \(n_{\mathrm {bg}}\) or \(n_{\mathrm {s}}\). In the absence of uncertainties this probability has a Poisson distribution with the mean set to the expected number of events \(\lambda \) derived from simulations. However, in order to extend the FC approach to account for uncertainties, the distribution
$$\begin{aligned} \mathrm {PDF}(n|\lambda ,\sigma ) = \int \frac{(\lambda +x)^{n}\;e^{-\lambda -x}}{n!} \cdot w(x|\sigma ) \;dx \end{aligned}$$
is used instead to derive \(n_{\mathrm {bg}}\) and \(n_{\mathrm {s}}\). This is the weighted average of Poisson distributions where the mean value varies around the central value \(\lambda \) and the variance \(\sigma ^2\) is the quadratic sum of all individual uncertainties. Under the assumption that individual contributions to the uncertainty are symmetric and independent, the weighting function \(w(x|\sigma )\) is a normal distribution with mean 0 and variance \(\sigma ^2\). However, the Poisson distribution is only defined for positive mean values. Therefore a truncated normal distribution with the boundaries \(-\lambda \) and \(+\infty \) is used as the weighting function instead.
Mildly relativistic analysis
This analysis uses the data recorded from May 2011 to May 2012. It comprises about 342 days (311 days without the burn sample) of live-time. The signal simulation covers the velocity range of 0.4–\(0.99\, c\). The optimization of cuts and machine learning is done on a limited velocity range \(<\)0.76c to focus on lower velocities where indirect Cherenkov light dominates.
Following the filters, described in Sect. 2, further processing of the events is done by splitting coincident events into sub-events using a time-clustering algorithm. This is useful to reject hits caused by PMT after-pulses which appear several microseconds later than signal hits.
For quality reasons events are required to have 6 DOMs on 2 strings hit, see Table 4. The remaining events are handled as tracks reconstructed with an improved version [17] of the LineFit algorithm, mentioned in Sect. 2. Since the main background in IceCube are muons from air showers which cause a down-going track signature, a cut on the reconstructed zenith angle below \(86^{\circ }\) removes most of this background.
Estimated velocity after event reconstruction. In this plot only monopoles with a simulated true velocity below \(0.76\, c\) are shown and a cut on the reconstructed velocity at \(0.83\, c\). These restrictions were only used for training to focus on this range and released for sensitivity calculation and unblinding. Superluminal velocity values occur because of the simplicity of the chosen reconstruction algorithm which may lead to mis-reconstructed events that can be discarded. The air shower background is divided into high (HE) and low energy (LE) primary particle energy at \(100\,\text {TeV}\). The recorded signals differ significantly and are therefore treated with different variables and cuts
Figure 6 shows the reconstructed particle velocity at this level. The rate for atmospheric muon events has its maximum at low velocities. This is due to mostly coincident events remaining in this sample. The muon neutrino event rate consists mainly of track-like signatures and is peaked at the velocity of light. Dim events or events traversing only part of the detector are reconstructed with lower velocities which leads to the smearing of the peak rate for muon neutrinos and monopole simulations. Electron neutrinos usually produce a cascade of particles (and light) when interacting which is easy to separate from a track signature. The velocity reconstruction for these events results mainly in low velocities which can also be used for separation from signal.
In contrast to the highly relativistic analysis, machine learning was used. A boosted decision tree (BDT) [43] was chosen to account for limited background statistics. The multivariate method was embedded in a re-sampling method. This was combined with additional cuts to reduce the background rate and prepare the samples for an optimal training result. Besides that, these straight cuts reduce cascades, coincident events, events consisting of pure noise, improve reconstruction quality, and remove short tracks which hit the detector at the edges. See a list of all cuts in Table 4. To train the BDT on lower velocities an additional cut on the maximal velocity \(0.82\, c\) is used only during training which is shown in Fig. 6. Finally a cut on the penetration depth of a track, measured from the bottom of the detector, is performed. This is done to lead the BDT training to a suppression of air shower events underneath the neutrino rate near the signal region, as can be seen in Fig. 8.
Out of a the large number of variables provided by standard and monopole reconstructions 15 variables were chosen for the BDT using a tool called mRMR (Minimum Redundancy Maximum Relevance) [44]. These 15 variables are described in Table 5. With regard to the next step it was important to choose variables which show a good data – simulation agreement so that the BDT would not be trained on unknown differences between simulation and recorded data. The resulting BDT score distribution in Fig. 7 shows a good signal vs. background separation with reasonable simulation – data agreement. The rate of atmospheric muons and electron neutrinos induced events is suppressed sufficiently compared to the muon neutrino rate near the signal region. The main background is muon neutrinos from air showers.
Distribution of one BDT trained on 10 % of the burn sample. The cut value which is chosen using Fig. 8 is shown with the orange line. Statistical errors per bin are drawn
Background expectation
To calculate the background expectation a method inspired by bootstrapping is used [45], called pull-validation [46]. Bootstrapping is usually used to smooth a distribution by resampling the limited available statistics. Here, the goal is to smooth especially the tail near the signal region in Fig. 7.
Usually 50 % of the available data is chosen to train a BDT which is done here just for the signal simulation. Then the other 50 % is used for testing. Here, 10 % of the burn sample are chosen randomly, to be able to consider the variability in the tails of the background.
Average of 200 BDTs. An example of one contributing BDT is shown in Fig. 7. In each bin the mean bin height in 200 BDTs is shown with the standard deviation as error bar. Based on this distribution the MRF is calculated and minimized to choose the cut value
Testing the BDT on the other 90 % of the burn sample leads to an extrapolation of the tail into the signal region. This re-sampling and BDT training/testing is repeated 200 times, each time choosing a random 10 % sample. In Fig. 8 the bin-wise average and standard deviation of 200 BDT score distributions are shown.
By BDT testing, 200 different BDT scores are assigned to each single event. The event is then transformed into a probability density distribution. When cutting on the BDT score distribution in Fig. 8 a single event i is neither completely discarded nor kept, but it is kept with a certain probability \(p_i\) which is calculated as a weight. The event is then weighted in total with \(W_i=p_i \cdot w_i\) using its survival probability and the weight \(w_i\) from the chosen flux spectrum. Therefore, many more events contribute to the cut region compared to a single BDT which reduces the uncertainty of the background expectation.
Table 1 Uncertainties in both analyses. For the mildly relativistic analysis the average for the whole velocity range is shown. See Fig. 10 for the velocity dependence
To keep the error of this statistical method low, the cut on the averaged BDT score distribution is chosen near the value where statistics in a single BDT score distribution vanishes.
The developed re-sampling method gives the expected background rate including an uncertainty for each of the single BDTs. Therefore one BDT was chosen randomly for the unblinding of the data.
The uncertainties of the re-sampling method were investigated thoroughly. The Poissonian error per bin is negligible because of the averaging of 200 BDTs. Instead, there are 370 partially remaining events which contribute to the statistical error. This uncertainty \(\Delta _{\text {contr}}\) is estimated by considering the effect of omitting individual events i of the 370 events from statistics
$$\begin{aligned} \Delta _{\text {contr}} = \max _i \left( \frac{w_i p_i}{\sum _i w_i p_i} \right) \end{aligned}$$
Datasets with different simulation parameters for the detector properties are used to calculate the according uncertainties. The values of all calculated uncertainties are shown in Table 1.
The robustness of the re-sampling method was verified additionally by varying all parameters and cut values of the analysis. Several fake unblindings were done by training the analysis on a 10 % sample of the burn sample, optimizing the last cut and then applying this event selection on the other 90 % of the burn sample. This proves reliability by showing that the previously calculated background expectation is actually received with increase of statistics by one order of magnitude. The results were mostly near the mean neutrino rate, only few attempts gave a higher rate, but no attempt exceeded the calculated confidence interval.
The rate of the background events has a variability in all 200 BDTs of up to 5 times the mean value of 0.55 events per live-time (311 days) when applying the final cut on the BDT score. This contribution is dominating the total uncertainties. Therefore not a normal distribution but the real distribution is used for further calculations. This distribution is used as a probability mass function in an extended Feldman Cousin approach to calculate the 90 % confidence interval, as described in Sect. 5.3. The final cut at BDT score 0.47 is chosen near the minimum of the model rejection factor (MRF) [47]. To reduce the influence of uncertainties it was shifted to a slightly lower value. The sensitivity for many different velocities is calculated as described in Sect. 5.3 and shown in Fig. 9. This gives an 90 % confidence upper limit of 3.61 background events. The improvement of sensitivity compared to recent limits by ANTARES [19] and MACRO [48] reaches from one to almost two orders of magnitude which reflects a huge detection potential.
Sensitivities (magenta) and final limits (red) of both analysis at certain characteristic velocities compared to other limits. The lines are only drawn to guide the eyes. Other limits are from BAIKAL [33], ANTARES [19], IceCube 22 [41], MACRO [48]. Also shown is the Parker limit described in the text [49]
After optimizing the two analyses on the burn samples, the event selection was adhered to and the remaining 90 % of the experimental data were processed ("unblinded"). The corresponding burn samples were not included while calculating the final limits.
Result of the highly relativistic analysis
In the analysis based on the IC40 detector configuration three events remain, one in the low brightness subset and two in the high brightness subset. The low brightness event is consistent with a background- only observation with 2.2 expected background events. The event itself shows characteristics typical for a neutrino induced muon. For the high brightness subset, with an expected background of 0.1 events, the observation of two events apparently contradicts the background-only hypothesis. However, a closer analysis of the two events reveals that they are unlikely to be caused by monopoles. These very bright events do not have a track like signature but a spheric development only partly contained in the detector. A possible explanation is the now established flux of cosmic neutrinos which was not included in the background expectation for this analysis. IceCube's unblinding policy prevents any claims on these events or reanalysis with changed cuts as have been employed with IC22 [41]. Instead they are treated as an upward fluctuation of the background weakening the limit. The final limits outperform previous limits and are shown in Table 2 and Fig. 9. These limits can also be used as a conservative limit for \(v>0.995\,c\) without optimization for high values of Lorentz factor \(\gamma \) as the expected monopole signal is even brighter due to stochastic energy losses which are not considered here.
Result of the mildly relativistic analysis
In the mildly relativistic analysis three events remain after all cuts which is within the confidence interval of up to 3.6 events and therefore consistent with a background only observation. All events have reconstructed velocities above the training region of 0.76c . This is compared to the expectation from simulation in Fig. 10. Two of the events show a signature which is clearly incompatible with a monopole signature when investigated by eye because they are stopping within the detector volume. The third event, shown in Fig. 11, may have a mis-reconstructed velocity due to the large string spacing of IceCube. However, its signature is comparable with a monopole signature with a reduced light yield than described in Sect. 3. According to simulations, a monopole of this reconstructed velocity would emit about 6 times the observed light.
Signal and background rates per characteristic monopole velocity which are used to calculate the final limits. Reconstructed velocity is used for background and true simulated velocity for signal. The lower part of the plot shows the velocity dependence of the uncertainties including the re-sampling uncertainty which dominates. The different contributions to the uncertainties are listed in Table 1
One of the three events which were selected in the mildly relativistic analysis with a BDT Score of 0.53. The reconstructed parameters of this event are the same as in Fig. 3. In this event, 110 DOMs were hit on 8 strings. It has a brightness of \(595 \; \text {NPE}\) and causes an after-pulse. The position of the IceCube DOMs are shown with small gray spheres. Hit DOMs are visualized with colored spheres. Their size is scaled with the brightness of the hit. The color denotes the time development from red to blue. The red line shows the reconstructed track
To be comparable to the other limits shown in Fig. 9 the final result of this analysis is calculated for different characteristic monopole velocities at the detector. The bin width of the velocity distribution in Fig. 10 is chosen to reflect the error on the velocity reconstruction. Then, the limit in each bin is calculated and normalized which gives a step function. To avoid the bias on a histogram by choosing different histogram origins, five different starting points are chosen for the distribution in Fig. 10 and the final step functions are averaged [50].
The final limit is shown in Fig. 9 and Table 2 together with the limits from the highly relativistic analysis and other recent limits.
The resulting limits are placed into context by considering indirect theoretical limits and previous experimental results. The flux \(\Phi \) of magnetic monopoles can be constrained model independently by astrophysical arguments to \(\Phi _{\text {P}} \le 10^{-15} \; \text {cm}^{-2}\; \text {s}^{-1}\; \text {sr}^{-1}\) for a monopole mass below \(10^{17} \; \text {GeV}/c^2\). This value is the so-called Parker bound [49] which has already been surpassed by several experiments as shown in Fig. 9. The most comprehensive search for monopoles, regarding the velocity range, was done by the MACRO collaboration using different detection methods [48].
More stringent flux limits have been imposed by using larger detector volumes, provided by high-energy neutrino telescopes, such as ANTARES [19], BAIKAL [33], AMANDA [51], and IceCube [41]. The current best limits for non-relativistic velocities (\(\le \)0.1 c) have been established by IceCube, constraining the flux down to a level of \(\Phi _{\text {90~\%}}\ge 10^{-18} \; \text {cm}^{-2}\; \text {s}^{-1}\; \text {sr}^{-1}\) [52]. These limits hold for the proposal that monopoles catalyze proton decay. The analysis by ANTARES is the only one covering the mildly relativistic velocity range (\(\ge \)0.625 c) using a neutrino detector, to date. However, using the KYG cross section for the \(\delta \)-electron production would extend their limits to lower velocities. The Baksan collaboration has also produced limits on a monopole flux [53], both at slow and relativistic velocities, although due to its smaller size their results are not competitive with the results shown in Fig. 9.
We have described two searches using IceCube for cosmic magnetic monopoles for velocities \(>\)0.51 c. One analysis focused on high monopole velocities at the detector \(v>0.76\,c\) where the monopole produces Cherenkov light and the resulting detector signal is extremely bright. The other analysis considers lower velocities \(>\)0.51 c where the monopole induces the emission of Cherenkov light in an indirect way and the brightness of the final signal is decreasing largely with lower velocity. Both analyses use geometrical information in addition to the velocity and brightness of signals to suppress background. The remaining events after all cuts were identified as background. Finally the analyses bound the monopole flux to nearly two orders of magnitude below previous limits. Further details of these analyses are given in Refs. [42, 54].
Comparable sensitivities are expected from the future KM3NeT instrumentation based on scaling the latest ANTARES limit to a larger effective volume [55]. Also an ongoing ANTARES analysis plans to use six years of data and estimates competitive sensitivities for highly relativistic velocities [56].
Even better sensitivities are expected from further years of data taking with IceCube, or from proposed volume extensions of the detector [57]. A promising way to extend the search to slower monopoles with \(v \le 0.5\,c\) is to investigate the luminescence they would generate in ice which may be detectable using the proposed low energy infill array PINGU [58].
G. 't Hooft, Nucl. Phys. B 79, 276 (1974)
A.M. Polyakov, JETP Lett. 20, 194 (1974)
ADS Google Scholar
A.H. Guth, S.H.H. Tye, Phys. Rev. Lett. 44(10), 631 (1980)
Article ADS Google Scholar
J. Polchinski, Int. J. Mod. Phys. A 19, 145 (2004). doi:10.1142/S0217751X0401866X
MathSciNet Article ADS Google Scholar
P. Dirac, Proc. R. Soc. A 133, 60 (1931)
J.P. Preskill, Ann. Rev. Nucl. Part. Sci. 34, 461 (1984)
S.D. Wick, T.W. Kephart, T.J. Weiler, P.L. Biermann, Astropart. Phys. 18(6), 663 (2003)
S. Dar, Q. Shafi, A. Sil, Phys. Rev. D 74, 035013 (2006)
M. Sakellariadou, Lect. Notes Phys. 738, 359 (2008)
A. Achterberg et al., Astropart. Phys. 26, 155 (2006). doi:10.1016/j.astropartphys.2006.06.007
R. Abbasi et al., Nucl. Instrum. Methods A 700, 188 (2014)
R. Abbasi et al., Nucl. Instrum. Methods A 618(1–3), 139 (2010)
R. Abbasi et al., Astropart. Phys. 35(10), 615 (2012)
M. Ackermann, et al., J. Geophys. Res. 111(D13) (2006)
M.G. Aartsen, et al., Nucl. Instrum. Methods A 711, 73 (2013).
R. Abbasi et al., Nucl. Instrum. Methods A 601(3), 2994 (2009)
M.G. Aartsen, et al., Nucl. Instrum. Methods A 736, 143 (2014).
A. Roodman, in Proceedings of the conference on Statistical Problems in Particle Physics, Astrophysics, and Cosmology (2003), p. 166. arXiv:physics/0312102
S. Adrián-Martínez, et al., Astropart. Phys. 35, 634 (2012). doi:10.1016/j.astropartphys.2012.02.007
F. Moulin, Il Nuovo Cimento B 116, 869 (2001)
MathSciNet ADS Google Scholar
E. Tamm, M. Frank, Dokl. Akad. Nauk SSSR (Akad. of Science of the USSR), 14, 107 (1937)
D.R. Tompkins, Phys. Rev. 138(1B) (1964)
T.T. Wu, C.N. Yang, Nucl. Phys. B 107, 365 (1976)
Y. Kazama, C.N. Yang, A.S. Goldhaber, Phys. Rev. D 15, 2287 (1977)
E. Bauer, Math. Proc. Camb. Philos. Soc. 47(04), 777 (1951). doi:10.1017/S0305004100027225
H.J.D. Cole, Math. Proc. Camb. Philos. Soc. 47(01), 196 (1951)
S.P. Ahlen, Phys. Rev. D 14, 2935 (1975)
S.P. Ahlen, Phys. Rev. D 17(1), 229 (1978)
R.M. Sternheimer, At. Data Nucl. Data Tables 30(2), 261 (1984)
L.I. Grossweiner, M.S. Matheson, J. Chem. Phys. 20(10), 1654 (1952). doi:10.1063/1.1700246
L.I. Grossweiner, M.S. Matheson, J. Chem. Phys. 22(9), 1514 (1954). doi:10.1063/1.1740451
T.I. Quickenden, S.M. Trotman, D.F. Sangster, J. Chem. Phys. 77, 3790 (1982). doi:10.1063/1.444352
V. Aynutdinov et al., Astrophys. J. 29, 366 (2008)
J.R. Hoerandel, Astropart. Phys. 19(2), 193 (2003). doi:10.1016/S0927-6505(02)00198-6
T.K. Gaisser, Astropart. Phys. 35(12), 801 (2012)
M. Honda, T. Kajita, K. Kasahara, S. Midorikawa, T. Sanuki, Phys. Rev. D 75(4), 043006 (2007)
R. Enberg, M.H. Reno, I. Sarcevic, Phys. Rev. D 78(4), 043005 (2008)
M.G. Aartsen, et al., Science 342(6161) (2013). doi:10.1126/science.1242856
M.G. Aartsen et al., Phys. Rev. Lett. 113(10), 101101 (2014)
J. Lundberg et al., Nucl. Instrum. Methods A 581, 619 (2007)
R. Abbasi, et al., Phys. Rev. D 87, 022001 (2013)
J. Posselt, Search for Relativistic Magnetic Monopoles with the IceCube 40-String Detector. Ph.D. thesis, University of Wuppertal (2013)
Y. Freund, Inform. Comput. 121(2), 256 (1995). doi:10.1006/inco.1995.1136
H. Peng, I.E.E.E. Trans, Pattern Anal. Mach. Intell. 27(8), 1226 (2005). doi:10.1109/TPAMI.2005.159
B. Efron, Ann. Stat. 7(1), 1 (1979)
MathSciNet Article Google Scholar
J. Kunnen, J. Luenemann, A. Obertacke Pollmann, F. Scheriau for the IceCube Collaboration, in proceedings of the 34th International Cosmic Ray Conference (2015), p. 361. arXiv:1510.05226
G.J. Feldman, R.D. Cousins, Phys. Rev. D 57(7), 3873 (1998)
M. Ambrosio et al., Eur. Phys. J. C 25, 511 (2002)
E.N. Parker, Astrophys. J. 160, 383 (1970)
W. Haerdle, Z. Hlavka, Multivariate Statistics (Springer New York, 2007). doi:10.1007/978-0-387-73508-5
R. Abbasi et al., Eur. Phys. J. C 69, 361 (2010)
M.G. Aartsen et al., Eur. Phys. J. C 74, 2938 (2014)
Y.F. Novoseltsev, M.M. Boliev, A.V. Butkevich, S.P. Mikheev, V.B. Petkov, Nucl. Phys. B, Proc. Suppl. 151, 337 (2006). doi:DOIurl10.1016/j.nuclphysbps.2005.07.048
A. Pollmann, Search for mildly relativistic Magnetic Monopoles with IceCube. Ph.D. thesis, University of Wuppertal (Submitted)
S. Adrian-Martinez, et al. The prototype detection unit of the KM3NeT detector (2014). arXiv:1510.01561
I.E. Bojaddaini, G.E. Pavalas, in Proceedings of the 34th International Cosmic Ray Conference (2015), p. 1097
M.G. Aartsen, et al., IceCube-Gen2: a vision for the future of neutrino astronomy in Antarctica (2014). arXiv:1412.5106
M.G. Aartsen, et al., Letter of intent: the precision icecube next generation upgrade (PINGU) (2014). arXiv:1401.2046
We acknowledge the support from the following agencies: U.S. National Science Foundation-Office of Polar Programs, U.S. National Science Foundation-Physics Division, University of Wisconsin Alumni Research Foundation, the Grid Laboratory Of Wisconsin (GLOW) grid infrastructure at the University of Wisconsin - Madison, the Open Science Grid (OSG) grid infrastructure; U.S. Department of Energy, and National Energy Research Scientific Computing Center, the Louisiana Optical Network Initiative (LONI) grid computing resources; Natural Sciences and Engineering Research Council of Canada, WestGrid and Compute/Calcul Canada; Swedish Research Council, Swedish Polar Research Secretariat, Swedish National Infrastructure for Computing (SNIC), and Knut and Alice Wallenberg Foundation, Sweden; German Ministry for Education and Research (BMBF), Deutsche Forschungsgemeinschaft (DFG), Helmholtz Alliance for Astroparticle Physics (HAP), Research Department of Plasmas with Complex Interactions (Bochum), Germany; Fund for Scientific Research (FNRS-FWO), FWO Odysseus programme, Flanders Institute to encourage scientific and technological research in industry (IWT), Belgian Federal Science Policy Office (Belspo); University of Oxford, United Kingdom; Marsden Fund, New Zealand; Australian Research Council; Japan Society for Promotion of Science (JSPS); the Swiss National Science Foundation (SNSF), Switzerland; National Research Foundation of Korea (NRF); Danish National Research Foundation, Denmark (DNRF).
III. Physikalisches Institut, RWTH Aachen University, 52056, Aachen, Germany
J. Auffenberg, M. Bissok, J. Blumenthal, D. Gier, M. Glagla, C. Haack, B. Hansmann, J. Kemp, R. Konietz, M. Leuermann, J. Leuner, L. Paul, J. Pütz, L. Rädel, R. Reimann, M. Rongen, M. Schimp, S. Schoenen, L. Schumacher, M. Stahlberg, M. Vehring, M. Wallraff & C. H. Wiebusch
New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
M. L. Benabderrahmane
Department of Physics, University of Adelaide, Adelaide, 5005, Australia
M. G. Aartsen, G. C. Hill, S. Robertson, A. Wallace & B. J. Whelan
Department of Physics and Astronomy, University of Alaska Anchorage, 3211 Providence Dr., Anchorage, AK, 99508, USA
K. Rawlins
CTSPS, Clark-Atlanta University, Atlanta, GA, 30314, USA
G. S. Japaridze
School of Physics and Center for Relativistic Astrophysics, Georgia Institute of Technology, Atlanta, GA, 30332, USA
J. Casey, J. Daughhetee & I. Taboada
Department of Physics, Southern University, Baton Rouge, LA, 70813, USA
A. R. Fazely, S. Ter-Antonyan & X. W. Xu
Department of Physics, University of California, Berkeley, CA, 94720, USA
R. Bay, G. Binder, K. Filimonov, L. Gerhardt, C. Ha, S. R. Klein, S. Miarecki, P. B. Price, J. Tatar & K. Woschnagg
Lawrence Berkeley National Laboratory, Berkeley, CA, 94720, USA
G. Binder, L. Gerhardt, A. Goldschmidt, C. Ha, S. R. Klein, H. S. Matis, S. Miarecki, D. R. Nygren, G. T. Przybylski, T. Stezelberger, R. G. Stokstad & J. Tatar
Institut für Physik, Humboldt-Universität zu Berlin, 12489, Berlin, Germany
M. de With, D. Hebecker, H. Kolanoski & M. Kowalski
Fakultät für Physik & Astronomie, Ruhr-Universität Bochum, 44780, Bochum, Germany
J. Becker Tjus, F. Bos, B. Eichmann, M. Kroll, M. Mandelartz & S. Schöneberg
Physikalisches Institut, Universität Bonn, Nussallee 12, 53115, Bonn, Germany
A. Homeier, L. Schulte & M. Voge
Université Libre de Bruxelles, Science Faculty CP230, 1050, Brussels, Belgium
J. A. Aguilar, I. Ansseau, D. Heereman, K. Meagher, T. Meures, A. O'Murchadha, E. Pinat & C. Raab
Vrije Universiteit Brussel, Dienst ELEM, 1050, Brussels, Belgium
L. Brayeur, M. Casier, C. De Clercq, K. D. de Vries, G. de Wasseige, G. Golup, J. Kunnen, J. Lünemann, G. Maggi, S. Toscano & N. van Eijndhoven
Department of Physics, Chiba University, Chiba, 263-8522, Japan
R. Gaior, A. Ishihara, T. Kuwabara, L. Lu, K. Mase, M. Relich & S. Yoshida
Department of Physics and Astronomy, University of Canterbury, Private Bag 4800, Christchurch, New Zealand
J. Adams
Department of Physics, University of Maryland, College Park, MD, 20742, USA
D. Berley, E. Blaufuss, E. Cheung, J. Felde, R. Hellauer, K. D. Hoffman, W. Huelsnitz, R. Maunu, A. Olivas, T. Schmidt, M. Song, G. W. Sullivan & H. Wissing
Department of Physics and Center for Cosmology and Astro-Particle Physics, Ohio State University, Columbus, OH, 43210, USA
J. J. Beatty, J. C. Davis, C. Pfendner, M. Stamatikos & M. Sutherland
Department of Astronomy, Ohio State University, Columbus, OH, 43210, USA
J. J. Beatty
Niels Bohr Institute, University of Copenhagen, 2100, Copenhagen, Denmark
E. Hansen, D. J. Koskinen, M. J. Larson, M. Medici & S. Sarkar
Department of Physics, TU Dortmund University, 44221, Dortmund, Germany
M. Börner, T. Fuchs, T. Menne, D. Pieloth, W. Rhode, T. Ruhe, A. Sandrock, F. Scheriau & M. Schmitz
Department of Physics and Astronomy, Michigan State University, East Lansing, MI, 48824, USA
J. P. A. M. de André, T. DeYoung, J. Hignight, K. B. M. Mahn & G. Neer
Department of Physics, University of Alberta, Edmonton, Alberta, Canada, T6G 2E1
N. Buzinsky, D. Grant, C. Kopper, S. C. Nowicki, B. Riedel, Ch. Weaver & T. R. Wood
Erlangen Centre for Astroparticle Physics, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
D. Altmann, L. Classen, A. Kappes & M. Tselengidou
Département de physique nucléaire et corpusculaire, Université de Genève, 1211, Geneva, Switzerland
A. Christov, T. Montaruli, M. Rameez & S. Vallecorsa
Department of Physics and Astronomy, University of Gent, 9000, Gent, Belgium
S. De Ridder, A. Haj Ismail, M. Labare, A. Meli, D. Ryckbosch, S. Vanheule & M. Vraeghe
Department of Physics and Astronomy, University of California, Irvine, CA, 92697, USA
S. W. Barwick & G. Yodh
Department of Physics and Astronomy, University of Kansas, Lawrence, KS, 66045, USA
D. Z. Besson
Department of Astronomy, University of Wisconsin, Madison, WI, 53706, USA
J. Gallagher
Department of Physics and Wisconsin IceCube Particle Astrophysics Center, University of Wisconsin, Madison, WI, 53706, USA
M. Ahlers, C. Arguelles, E. Beiser, J. Braun, D. Chirkin, M. Day, P. Desiati, J. C. Díaz-Vélez, S. Fahey, J. Feintzeig, K. Ghorbani, L. Gladstone, Z. Griffith, F. Halzen, K. Hanson, K. Hoshina, K. Jero, A. Karle, M. Kauer, J. L. Kelley, A. Kheirandish, F. McNally, G. Merino, R. Morse, S. Richter, L. Sabbatini, M. N. Tobin, D. Tosi, J. Vandenbroucke, N. Wandkowsky, C. Wendt, S. Westerhoff, L. Wille & D. L. Xu
Institute of Physics, University of Mainz, Staudinger Weg 7, 55099, Mainz, Germany
M. Archinger, V. Baum, S. Böser, E. del Pino Rosendo, V. di Lorenzo, B. Eberhardt, T. Ehrhardt, C.-C. Fösig, L. Köpke, G. Kroll, G. Krückl, H.-G. Sander, J. Sandroos, K. Schatto, A. Steuer & K. Wiebe
Université de Mons, 7000, Mons, Belgium
G. Kohnen
Technische Universität München, 85748, Garching, Germany
K. Abraham, A. Bernhard, S. Coenders, A. Groß, K. Holzapfel, M. Huber, M. Jurkovic, K. Krings, E. Resconi, A. Turcati & J. Veenkamp
Department of Physics and Astronomy, Bartol Research Institute, University of Delaware, Newark, DE, 19716, USA
H. Dembinski, P. A. Evenson, T. K. Gaisser, J. G. Gonzalez, R. Koirala, H. Pandya, D. Seckel, T. Stanev & S. Tilav
Department of Physics, Yale University, New Haven, CT, 06520, USA
M. Kauer & R. Maruyama
Department of Physics, University of Oxford, 1 Keble Road, Oxford, OX1 3NP, UK
S. Sarkar
Department of Physics, Drexel University, 3141 Chestnut Street, Philadelphia, PA, 19104, USA
N. Kurahashi & M. Richman
Physics Department, South Dakota School of Mines and Technology, Rapid City, SD, 57701, USA
X. Bai
Department of Physics, University of Wisconsin, River Falls, WI, 54022, USA
J. Madsen, S. Seunarine & G. M. Spiczak
Department of Physics, Oskar Klein Centre, Stockholm University, 10691, Stockholm, Sweden
M. Ahrens, C. Bohm, J. P. Dumm, C. Finley, S. Flis, P. O. Hulth, K. Hultqvist, C. Walck, M. Wolf & M. Zoll
Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY, 11794-3800, USA
J. Kiryluk, M. Lesiak-Bzdak, H. Niederhausen & Y. Xu
Department of Physics, Sungkyunkwan University, Suwon, 440-746, Korea
D. Bose, S. In, M. Jeong & C. Rott
Department of Physics, University of Toronto, Toronto, ON, Canada, M5S 1A7
K. Clark
Department of Physics and Astronomy, University of Alabama, Tuscaloosa, AL 35487, USA
T. Palczewski, J. A. Pepper, P. A. Toale & D. R. Williams
Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA, 16802, USA
D. F. Cowen
Department of Physics, Pennsylvania State University, University Park, PA, 16802, USA
T. Anderson, T. C. Arlen, D. F. Cowen, M. Dunkman, F. Huang, A. Keivani, J. L. Lanfranchi, D. V. Pankova, M. Quinnan & G. Tešić
Department of Physics and Astronomy, Uppsala University, Box 516, 75120, Uppsala, Sweden
D. J. Boersma, O. Botner, S. Euler, A. Hallgren, C. Pérez de los Heros, R. Ström, H. Taavola & E. Unger
Department of Physics, University of Wuppertal, 42119, Wuppertal, Germany
K.-H. Becker, D. Bindig, T. Fischer-Wasels, K. Helbing, S. Hickford, R. Hoffmann, J. Kläs, S. Kopper, U. Naumann, A. Obertacke Pollmann, A. Omairat, J. Posselt & D. Soldin
DESY, 15735, Zeuthen, Germany
M. Ackermann, P. Berghaus, E. Bernardini, H.-P. Bretz, A. H. Cruz Silva, T. Glüsenkamp, D. Góra, E. Jacobi, T. Karg, M. Kowalski, E. Middell, L. Mohrmann, R. Nahnhauer, A. Schönwald, C. Spiering, A. Stasik, A. Stößl, N. L. Strotjohann, A. Terliuk, M. Usner, J. van Santen & J. P. Yanez
M. G. Aartsen
K. Abraham
M. Ackermann
J. A. Aguilar
M. Ahlers
M. Ahrens
D. Altmann
T. Anderson
I. Ansseau
M. Archinger
C. Arguelles
T. C. Arlen
J. Auffenberg
S. W. Barwick
V. Baum
R. Bay
J. Becker Tjus
K.-H. Becker
E. Beiser
P. Berghaus
D. Berley
E. Bernardini
A. Bernhard
G. Binder
D. Bindig
M. Bissok
E. Blaufuss
J. Blumenthal
D. J. Boersma
C. Bohm
M. Börner
F. Bos
D. Bose
S. Böser
O. Botner
L. Brayeur
H.-P. Bretz
N. Buzinsky
J. Casey
M. Casier
E. Cheung
D. Chirkin
A. Christov
L. Classen
S. Coenders
A. H. Cruz Silva
J. Daughhetee
J. C. Davis
M. Day
J. P. A. M. de André
C. De Clercq
E. del Pino Rosendo
H. Dembinski
S. De Ridder
P. Desiati
K. D. de Vries
G. de Wasseige
M. de With
T. DeYoung
J. C. Díaz-Vélez
V. di Lorenzo
J. P. Dumm
M. Dunkman
B. Eberhardt
T. Ehrhardt
B. Eichmann
S. Euler
P. A. Evenson
S. Fahey
A. R. Fazely
J. Feintzeig
J. Felde
K. Filimonov
C. Finley
T. Fischer-Wasels
S. Flis
C.-C. Fösig
T. Fuchs
T. K. Gaisser
R. Gaior
L. Gerhardt
K. Ghorbani
D. Gier
L. Gladstone
M. Glagla
T. Glüsenkamp
A. Goldschmidt
G. Golup
J. G. Gonzalez
D. Góra
D. Grant
Z. Griffith
A. Groß
C. Ha
C. Haack
A. Haj Ismail
A. Hallgren
F. Halzen
E. Hansen
B. Hansmann
K. Hanson
D. Hebecker
D. Heereman
K. Helbing
R. Hellauer
S. Hickford
J. Hignight
G. C. Hill
K. D. Hoffman
R. Hoffmann
K. Holzapfel
A. Homeier
K. Hoshina
F. Huang
M. Huber
W. Huelsnitz
P. O. Hulth
K. Hultqvist
S. In
A. Ishihara
E. Jacobi
M. Jeong
K. Jero
M. Jurkovic
A. Kappes
T. Karg
A. Karle
M. Kauer
A. Keivani
J. L. Kelley
J. Kemp
A. Kheirandish
J. Kiryluk
J. Kläs
S. R. Klein
R. Koirala
H. Kolanoski
R. Konietz
L. Köpke
C. Kopper
S. Kopper
D. J. Koskinen
M. Kowalski
K. Krings
G. Kroll
M. Kroll
G. Krückl
J. Kunnen
N. Kurahashi
T. Kuwabara
M. Labare
J. L. Lanfranchi
M. J. Larson
M. Lesiak-Bzdak
M. Leuermann
J. Leuner
L. Lu
J. Lünemann
J. Madsen
G. Maggi
K. B. M. Mahn
M. Mandelartz
R. Maruyama
K. Mase
H. S. Matis
R. Maunu
F. McNally
K. Meagher
M. Medici
A. Meli
T. Menne
G. Merino
T. Meures
S. Miarecki
E. Middell
L. Mohrmann
T. Montaruli
R. Morse
R. Nahnhauer
U. Naumann
G. Neer
H. Niederhausen
S. C. Nowicki
D. R. Nygren
A. Obertacke Pollmann
A. Olivas
A. Omairat
A. O'Murchadha
T. Palczewski
H. Pandya
D. V. Pankova
L. Paul
J. A. Pepper
C. Pérez de los Heros
C. Pfendner
D. Pieloth
E. Pinat
J. Posselt
P. B. Price
G. T. Przybylski
J. Pütz
M. Quinnan
C. Raab
L. Rädel
M. Rameez
R. Reimann
M. Relich
E. Resconi
W. Rhode
M. Richman
S. Richter
B. Riedel
S. Robertson
M. Rongen
C. Rott
T. Ruhe
D. Ryckbosch
L. Sabbatini
H.-G. Sander
A. Sandrock
J. Sandroos
K. Schatto
F. Scheriau
M. Schimp
T. Schmidt
M. Schmitz
S. Schoenen
S. Schöneberg
A. Schönwald
L. Schulte
L. Schumacher
D. Seckel
S. Seunarine
D. Soldin
M. Song
G. M. Spiczak
C. Spiering
M. Stahlberg
M. Stamatikos
T. Stanev
A. Stasik
A. Steuer
T. Stezelberger
R. G. Stokstad
A. Stößl
R. Ström
N. L. Strotjohann
G. W. Sullivan
M. Sutherland
H. Taavola
I. Taboada
J. Tatar
S. Ter-Antonyan
A. Terliuk
G. Tešić
S. Tilav
P. A. Toale
M. N. Tobin
S. Toscano
D. Tosi
M. Tselengidou
A. Turcati
E. Unger
M. Usner
S. Vallecorsa
J. Vandenbroucke
N. van Eijndhoven
S. Vanheule
J. van Santen
J. Veenkamp
M. Vehring
M. Voge
M. Vraeghe
C. Walck
A. Wallace
M. Wallraff
N. Wandkowsky
Ch. Weaver
C. Wendt
S. Westerhoff
B. J. Whelan
K. Wiebe
C. H. Wiebusch
L. Wille
D. R. Williams
H. Wissing
M. Wolf
T. R. Wood
K. Woschnagg
D. L. Xu
X. W. Xu
Y. Xu
J. P. Yanez
G. Yodh
S. Yoshida
M. Zoll
Correspondence to A. Obertacke Pollmann or J. Posselt.
In Table 1 the uncertainties of both analyses are shown. Table 2 gives the numeric values of the derived limits of both analyses. Tables 3, 4 and 5 show the event selection of both analyses in detail which illustrates how magnetic monopoles can be separated from background signals in IceCube.
Table 2 Values of final limits of both analyses
Table 3 Description of all cuts in the highly relativistic analysis. For some cuts only the 10% of the DOMs with the highest charge (HC) were chosen
Table 4 Description of all cuts in the mildly relativistic analysis and the according event rate
Table 5 Description of the variables used in the BDTs of the mildly relativistic analysis
Funded by SCOAP3
Aartsen, M.G., Abraham, K., Ackermann, M. et al. Searches for relativistic magnetic monopoles in IceCube. Eur. Phys. J. C 76, 133 (2016). https://doi.org/10.1140/epjc/s10052-016-3953-8
DOI: https://doi.org/10.1140/epjc/s10052-016-3953-8
Relativistic Analysis
Magnetic Monopole
Muon Neutrino
Boost Decision Tree
Cherenkov Light | CommonCrawl |
Search Results: 1 - 10 of 100 matches for " "
Page 1 /100
Groups of Flagged Homotopies and Higher Gauge Theory [PDF]
Valery V. Dolotin
Mathematics , 1999,
Abstract: Groups $\Pi_k(X;\sigma)$ of "flagged homotopies" are introduced of which the usual (abelian for $k>1$) homotopy groups $\pi_k(X;p)$ is the limit case for flags $\sigma$ contracted to a point $p$. Calculus of exterior forms with values in algebra $A$ is developped of which the limit cases are differential forms calculus (for $A=\bb R$) and gauge theory (for 1-forms). Moduli space of integrable forms with respect to higher gauge transforms (cohomology with coefficients in $A$) is introduced with elements giving representations of $\Pi_k$ in $G=exp A$.
$K$-theory and homotopies of 2-cocycles on higher-rank graphs [PDF]
Elizabeth Gillaspy
Abstract: This paper continues our investigation into the question of when a homotopy $\omega = \{\omega_t\}_{t \in [0,1]}$ of 2-cocycles on a locally compact Hausdorff groupoid $\mathcal{G}$ gives rise to an isomorphism of the $K$-theory groups of the twisted groupoid $C^*$-algebras: $K_*(C^*(\mathcal{G}, \omega_0)) \cong K_*(C^*(\mathcal{G}, \omega_1)).$ In particular, we build on work by Kumjian, Pask, and Sims to show that if $\mathcal{G} = \mathcal{G}_\Lambda$ is the infinite path groupoid associated to a row-finite higher-rank graph $\Lambda$ with no sources, and $\{c_t\}_{t \in [0,1]}$ is a homotopy of 2-cocycles on $\Lambda$, then $K_*(C^*(\mathcal{G}_\Lambda, \sigma_{c_0})) \cong K_*(C^*(\mathcal{G}_\Lambda, \sigma_{c_1})),$ where $\sigma_{c_t}$ denotes the 2-cocycle on $\mathcal{G}_\Lambda$ associated to the 2-cocycle $c_t$ on $\Lambda$. We also prove a technical result (Theorem 3.3), namely that a homotopy of 2-cocycles on a locally compact Hausdorff groupoid $\mathcal{G}$ gives rise to an upper semi-continuous $C^*$-bundle.
Higher homotopies and Golod rings [PDF]
Abstract: We study the homological algebra of an R = Q/I module M using A-infinity structures on Q-projective resolutions of R and M. We use these higher homotopies to construct an R-projective bar resolution of M, Q-projective resolutions for all R-syzygies of M, and describe the differentials in the Avramov spectral sequence for M. These techniques apply particularly well to Golod modules over local rings. We characterize R-modules that are Golod over Q as those with minimal A-infinity structures. This gives a construction of the minimal resolution of every module over a Golod ring, and it also follows that if the inequality traditionally used to define Golod modules is an equality in the first dim Q+1 degrees, then the module is Golod, where no bound was previously known. We also relate A-infinity structures on resolutions to Avramov's obstructions to the existence of a dg-module structure. Along the way we give new, shorter, proofs of several classical results about Golod modules.
Higher Homotopies in a Hierarchy of Univalent Universes [PDF]
Nicolai Kraus,Christian Sattler
Mathematics , 2013, DOI: 10.1145/2729979
Abstract: For Martin-Lof type theory with a hierarchy U(0): U(1): U(2): ... of univalent universes, we show that U(n) is not an n-type. Our construction also solves the problem of finding a type that strictly has some high truncation level without using higher inductive types. In particular, U(n) is such a type if we restrict it to n-types. We have fully formalized and verified our results within the dependently typed language and proof assistant Agda.
The Origins of Lattice Gauge Theory [PDF]
Kenneth G. Wilson
Physics , 2004, DOI: 10.1016/j.nuclphysbps.2004.11.271
Abstract: An anecdotal account of the author's role in the origins of lattice gauge theory, prepared for delivery on the thirtieth anniversary of the publication of "Confinement of Quarks" [Phys. Rev. D10 (1974) 2445].
Higher homotopies and Maurer-Cartan algebras: Quasi-Lie-Rinehart, Gerstenhaber, and Batalin-Vilkovisky algebras [PDF]
Johannes Huebschmann
Abstract: Higher homotopy generalizations of Lie-Rinehart algebras, Gerstenhaber-, and Batalin-Vilkovisky algebras are explored. These are defined in terms of various antisymmetric bilinear operations satisfying weakened versions of the Jacobi identity, as well as in terms of operations involving more than two variables of the Lie triple systems kind. A basic tool is the Maurer-Cartan algebra-the algebra of alternating forms on a vector space so that Lie brackets correspond to square zero derivations of this algebra-and multialgebra generalizations thereof. The higher homotopies are phrased in terms of these multialgebras. Applications to foliations are discussed: objects which serve as replacements for the Lie algebra of vector fields on the "space of leaves" and for the algebra of multivector fields are developed, and the spectral sequence of a foliation is shown to arise as a special case of a more general spectral sequence including as well the Hodge-de Rham spectral sequence.
On the general theory of the origins of retroviruses
Misaki Wayengera
Theoretical Biology and Medical Modelling , 2010, DOI: 10.1186/1742-4682-7-5
Abstract: On the basis of an arbitrarily non-Euclidian geometrical "thought experiment" involving the cross-species transmission of simian foamy virus (sfv) from a non-primate species Xy to Homo sapiens (Hs), initially excluding all social factors, the following was derived. At the port of exit from Xy (where the species barrier, SB, is defined by the Index of Origin, IO), sfv shedding is (1) enhanced by two transmitting tensors (Tt), (i) virus-specific immunity (VSI) and (ii) evolutionary defenses such as APOBEC, RNA interference pathways, and (when present) expedited therapeutics (denoted e2D); and (2) opposed by the five accepting scalars (At): (a) genomic integration hot spots, gIHS, (b) nuclear envelope transit (NMt) vectors, (c) virus-specific cellular biochemistry, VSCB, (d) virus-specific cellular receptor repertoire, VSCR, and (e) pH-mediated cell membrane transit, (↓pH CMat). Assuming As and Tt to be independent variables, IO = Tt/As. The same forces acting in an opposing manner determine SB at the port of sfv entry (defined here by the Index of Entry, IE = As/Tt). Overall, If sfv encounters no unforeseen effects on transit between Xy and Hs, then the square root of the combined index of sfv transmissibility (√|RTI|) is proportional to the product IO* IE (or ~Vm* Ha* ∑Tt*∑As*Ω), where Ω is the retrovirological constant and ∑ is a function of the ratio Tt/As or As/Tt for sfv transmission from Xy to Hs.I present a mathematical formalism encapsulating the general theory of the origins of retroviruses. It summarizes the choreography for the intertwined interplay of factors influencing the probability of retroviral cross-species transmission: Vm, Ha, Tt, As, and Ω.The order Retroviridae constitutes a collection of non-icosahedral, enveloped viruses with two copies of a single-stranded RNA genome [1-5]. Retroviruses are known to infect avians [1] and murine [2], non-primate [3] and primate [4,5] mammals. Viruses of the order Retroviridae are unique in the sense that they
String Theory Origins of Supersymmetry [PDF]
John H. Schwarz
Physics , 2000, DOI: 10.1016/S0920-5632(01)01492-X
Abstract: The string theory introduced in early 1971 by Ramond, Neveu, and myself has two-dimensional world-sheet supersymmetry. This theory, developed at about the same time that Golfand and Likhtman constructed the four-dimensional super-Poincar\'e algebra, motivated Wess and Zumino to construct supersymmetric field theories in four dimensions. Gliozzi, Scherk, and Olive conjectured the spacetime supersymmetry of the string theory in 1976, a fact that was proved five years later by Green and myself.
Higher-order models versus direct hierarchical models: g as superordinate or breadth factor? [PDF]
GILLES E. GIGNAC
Psychology Science Quarterly , 2008,
Abstract: Intelligence research appears to have overwhelmingly endorsed a superordinate (higher-order model) conceptualization of g, in comparison to the relatively less well-known breadth conceptualization of g, as represented by the direct hierarchical model. In this paper, several similarities and distinctions between the indirect and direct hierarchical models are delineated. Based on the re-analysis of five correlation matrices, it was demonstrated via CFA that the conventional conception of g as a higher-order superordinate factor was likely not as plausible as a first-order breadth factor. The results are discussed in light of theoretical advantages of conceptualizing g as a first-order factor. Further, because the associations between group-factors and g are constrained to zero within a direct hierarchical model, previous observations of isomorphic associations between a lower-order group factor and g are questioned.
$K$-theory and homotopies of 2-cocycles on group bundles [PDF]
Abstract: This paper continues the author's program to investigate the question of when a homotopy of 2-cocycles $\Omega = \{\omega_t\}_{t \in [0,1]}$ on a locally compact Hausdorff groupoid $\mathcal{G}$ induces an isomorphism of the $K$-theory groups of the twisted groupoid $C^*$-algebras: $K_*(C^*(\mathcal{G}, \omega_0)) \cong K_*(C^*(\mathcal{G}, \omega_1)).$ Building on our earlier work, we show that if $\pi: \mathcal{G} \to M$ is a locally trivial bundle of amenable groups over a locally compact Hausdorff space $M$, a homotopy $\Omega = \{\omega_t\}_{t \in [0,1]}$ of 2-cocycles on $\mathcal{G} $ gives rise to an isomorphism $K_*(C^*(\mathcal{G}, \omega_0)) \cong K_*(C^*(\mathcal{G}, \omega_1)).$ | CommonCrawl |
Math index
Physics index
Chemistry index
Biology index
Pressure defined
Kinetic molecular theory
Ideal gas law
Non-ideal gases
Kinetics with calculus
MATH CHEMISTRY PHYSICS BIOLOGY EDUCATION
xaktly | Chemistry | equilibrium
Colligative properties
Dissolved solutes can alter some of the properties of solvents.
Adding non-volatile salts to a solvent can change a few key properties of the solvent:
Vapor pressure can be reduced
The boiling point can be elevated, and
The freezing / melting point can be depressed.
This occurs for both ionic solutes like salts, and nonionic solutes, such as sugars. On this page we'll describe the results of adding solutes to liquid solvents, review some theores that can help us get quantitative about the increases and decreases of these properties, and discuss why these phenomena occur.
The ideas in this section apply to "ideal" solutions. We often have to begin forming our theories of physical phenomena with simplified systems like that, adding complications later as we better understand what's going on.
One definition of an ideal solution is one in which the interactions between solute and solvent particles is no different from the intereactions within pure solvent or pure solute.
There are other definitions, but for our purposes, we'll note that non-ideality is more likely to arise when a solution is more concentrated with solute, so we'll mostly stipulate that dilute solutions are closer to ideal than concentrated ones.
Vapor pressure reduction
The reason for vapor pressure reduction (often called "lowering," but I'm not a fan of that grammar), is really pretty simple. First, consider a beaker of pure solvent, such as water. Below is a highly schematic view. At the surface, occaisionally a water molecule has enough translational kinetic energy to escape the fluid and fly off as a gaseous molecule. The number of such events at a given temperature in a given time results in a force that can be detected over the liquid, the vapor pressure (pressure is force divided by area — in this case, the area of the liquid surface).
Now in an aqueous solution containing solute molecules (red in the figure below), some of the solute particles will be at the surface. This effectively crowds the solute molecules out, leaving a smaller surface from which they can escape. The presence of the solute particles in no way changes the rate at which water molecules leave the surface, it's just that they have access to less surface. This turns out to be an overly simplistic model, but it gets most of the explanation right.
Modeling vapor pressure reduction
The simplest model of vapor pressure reduction is called Raoult's law. Raoult proposed that the vapor pressure of the liquid containing the solute is directly (linearly) proportional to the vapor pressure of the pure solvent, and that the decrease is proportional to the mole fraction of solvent.
It looks like this:
$$P_S = \chi_S \cdot P_S^o$$
where PS is the vapor pressure of the solvent with solute, XS is the mole fraction of solvent, and PSo is the vapor pressure of the pure solvent at the temperature of interest. Recall that vapor pressure increases with temperature, and that when the vapor pressure is equal to the atmospheric pressure, the solvent is boiling.
Here's what Raoult's law looks like in the ideal case.
As the mole fraction of solute increases, the mole fraction of solvent decreases. Less surface is available for escape from the liquid, so the vapor pressure decreases, theoretically to zero when all of the solvent is gone.
Deviations from Raoult's law
It turns out that the identity of the solute and how it interacts — whether the intermolecular forces are attractive or repulsive — matters. When the solvent and solute share attractive forces, there is some "stickiness" for solvent particles leaving the surface near solute particles, and that reduces the vapor pressure.
Conversely, when the intermolecular forces are repulsive, solute particles near the surface weaken the attraction of solvent particles to it, and they can escape more easily. Thus the vapor pressure is increased.
This graph shows some of the deviations.
A positive deviation arises when repulsive solvent-solute interactions dominate.
A negative deviation arises from attractive forces keeping solvent particles in solution, and
When, as illustrated by the dashed lines, there is a clear maximum or minimum in the vapor pressure vs. mole-fraction curve, the two compounds form an azeotrope, a subject for another section.
One thing that leads to non-ideal behavior is highly-concentrated solutions. The lower the solute concentration, the closer to ideal behavior. High concentrations mean many more chances for solute-solute and solute-solvent interactions.
Calculate the change in vapor pressure of water after adding 117 g of solid NaCl to 1 L of pure water.
Solution: First we need to calculate the mole fraction of NaCl and water in the final solution. 58.45 g of NaCl is one mole, so we're adding two moles of NaCl.
Now for the number of moles of water, we recall that the density of water is about* 1g/ml, so 1L of water has a mass of 1000 g.
$$1000 \; g \; H_2O \left( \frac{1 \text{ mol }H_2O}{18 \; g \; H_2O} \right) = 55.6 \text{ mol } H_2O$$
Now the mole fraction of NaCl is
$$\chi_{NaCl} = \frac{2}{2 + 55.6} = 0.0347$$
* It varies with temperature, and is actually somewhat less (0.9982˚C) at 20˚C.
The mole fraction of solvent H2O can be calculated in the same way, but the two have to add to 1, so it's a little easier just to do this:
$$\chi_{H_2O} = 1 - \chi_{NaCl} = 0.9652$$
Now Raoult's law gives us
so the vapor pressure of the solvent is reduced by 0.7 torr, or about 3.5% of its value without solute.
$$P_{H_2O} = 19.3 \text{ torr}$$
Note that this is the vapor pressure calculated from a model assuming no interactions between solvent and solute particles. It's a good start, but we should always expect some deviation from the ideal.
Calculate the amount of vapor pressure reduction of water at 25˚C after addition of the following solutes to 500 ml of pure water. The vapor pressure of pure water at 25˚C is 23.8 torr.
22.4 g of LiOH
Find concentrations
$$ \begin{align} 22.4 \text{ g LiOH} &\left( \frac{\text{1 mol LiOH}}{\text{23.94 g LiOH}} \right) \\ &= 0.94 \text{ mol LiOH}\\ \\ 500 \; g \; H_2O &\left( \frac{1 \text{ mol } H_2O}{18 \; g \;H_2O} \right) \\ &= \text{27.78 mol } H_2O \end{align}$$
Now the mole fractions, χ
$$ \begin{align} \chi_{LiOH} &= \frac{0.94}{27.78 - 0.94} \\ &= 0.0326 \\ \\ \chi_{H_2O} &= 1 - \chi_{LiOH} \\ &= 1 - 0.0326 = 0.9647 \end{align}$$
Now the pressure:
$$ \begin{align} P_S &= \chi_S P_s^o \\ &= 0.9674(23.8 \; torr) \\ &= 23.02 \; torr \end{align}$$
$$\frac{23.8 - 23.02}{23.8} = 3.3 \text{% change}$$
18.0 g of Ca(NO3)2
See solution to #1 for method. The results are:
$$ \begin{align} \text{moles solute} &= 0.11 \\ \text{moles solvent} &= 27.78 \\ \chi_{Ca(NO_3)_2} &= 0.0039 \\ \chi_{H_2O} &= 0.9961 \\ \text{VP} &= 23.71 \text{ torr} \\ \text{% change} &= -0.4 \, % \end{align}$$
225 g of NaOH
$$ \begin{align} \text{moles solute} &= 5.63 \\ \text{moles solvent} &= 27.78 \\ \chi_{Ca(NO_3)_2} &= 0.1684 \\ \chi_{H_2O} &= 0.8316 \\ \text{VP} &= 19.79 \text{ torr} \\ \text{% change} &= -17 \, % \end{align}$$
100.0 g of PbSO4
$$ \begin{align} \text{moles solute} &= 0.33 \\ \text{moles solvent} &= 27.78 \\ \chi_{Ca(NO_3)_2} &= 0.0117 \\ \chi_{H_2O} &= 0.9883 \\ \text{VP} &= 23.52 \text{ torr} \\ \text{% change} &= -1.2 \,% \end{align}$$
Boiling point elevation
Boiling point elevation really just follows from vapor pressure reduction. Recall that the boiling point of a liquid is the temperature at which the vapor pressure is equal to the atmospheric pressure, or whatever external pressure is pushing on the liquid. That's why, when we place a beaker of water in a vacuum chamber and evacuate the air, the water will begin to boil — and by the definition, it's truly boiling, although the temperature can still be low.
When the vapor pressure of a liquid is reduced by addition of a solute (see above), that much more heat has to be put into it to get it to boil, so the boiling temperature has to go up.
There is a mathematical model for boiling point elevation. It's a little more complicated than Raoult's law, but working through it might give you some insights into how solutions work.
The boiling point of a liquid is the temperature at which the vapor pressure above the liquid surface is equal to the atmospheric pressure surrounding it. A liquid may be made to boil either by raising its temperature or by reducing the pressure above its surface, or both.
Modeling boiling point elevation
Making a mathematical model that predicts boiling point elevation is a little more complicated than Raoult's law, but it's still pretty straight forward. We begin by making a linear relationship between the boiling point of a liquid and the concentration of the solution: The change in boiling point, ΔTb, is proportional to the concentration of the solute, this time in molality (moles of solute per Kg of solvent). Molality is useful here because we don't have to worry about changes in solution volume on adding or removing solute, as we would with molarity.
The ebullioscopic constant is simply the constant of proportionality that we use to get the units right, a bridge between temperature units (K) and molality. The ebullioscopic constant is
Here is the dimensional analysis — just looking at the units to see if they make sense:
The resulting units Kg·K·mol-1 will cancel the Kg of the molality to give temperature units for ΔTb, just what we needed.
Now we have to be careful here, because molality of the solute doesn't really tell us how many ions have been introduced into the solution. Each of those ions will contribute to vapor pressure reduction and other colligative properties. For example, when NaCl dissociates we get
NaCl ⇄ Na+ + Cl-
which yields two moles of ions per mole of solute. Similarly, when Ca(OH)2 is dissolved we get
Ca(OH)2 ⇄ Ca2+ + 2 OH-
or three moles of ions per mole of solute.
We need to modify our equation just a bit to account for the number of moles of ions introduced into solution. Here's the result:
The van't Hoff factor is an average measure of the number of ions in solution, taking solubility of the solutes and any other ion-pairs into account. For NaCl, i = 1.9 - 2.0 (usually 2). In particular,
For soluble salts that yeild three ions, like Fe(NO3)2, i = 3, and so on. for the average three-ion salt, i = 2.3 - 3, lower if the salt is less soluble and higher if it's more soluble. For solutes that don't dissociate, like glucose, i = 1.
You're probably getting the impression that solution science is a little soft, and that's because you're right. Solutions are much more complicated than gases, with many more interactions to account for.
Now we're ready to do a couple of examples.
Calculate the boiling point change upon adding 58.45g of NaCl to 1 liter of pure water.
Solution: This is a common kitchen problem. The idea is that adding some salt to a pot of water raises its boiling temperature. Let's see how much.
First we calculate the molality of the resulting solution. 58.45 g of NaCl is 1 mole of NaCl, so the molality is
$$\frac{\text{1 mol NaCl}}{\text{1 Kg } H_2O} = \text{1 m NaCl}$$
Now Kb for pure water looks like this:
$$ \begin{align} K_b &= \frac{R T_b^2 m}{\Delta H_v} \\ \\ &= \frac{8.314 (373)2 (0.018)}{40650} \\ \\ &= 0.512 \text{ Kg·K} \end{align}$$
where 8.314 J·mol-1·K-1 is the gas constant in SI units, 373 is the boiling temperature of pure water in K, 0.018 is the molar mass of water in Kg, and 40,650 is the molar heat of vaporization in J/mol.
The van't Hoff number for NaCl is 2 because NaCl is soluble and we don't expect an appreciable concentration of solid NaCl in a 1m solution.
Then the boiling point elevation is
$$ \begin{align} \Delta T_b &= K_b \cdot b_{solution} \cdot i \\ \\ &= (0.512 \text{ K·Kg/mol})(1 \text{ mol/Kg})(2) \\ \\ &= 1.02 \; K \\ \\ &= 1.02˚C \text{ or } 1.84˚F \end{align}$$
So you'd have to add a whole mole of salt (58 g of table salt is a handful) to a liter of water in order to raise the boiling temperature by a little less than 2 degrees Fahrenheit. That's not a lot for that much salt. It's more likely that addition of salt to boiling water is done for taste than to raise the boiling temperature. Science!
How much Mg(NO3)2 needs to be added to 1 L of water in order to raise the boiling temperature by 4˚C ?
Solution: Mg(NO3)2 is very soluble in water, so we won't need to worry about that. In this problem, we know the temperature rise, so we can solve for the molality first, find that, then back out the number of grams of Mg(NO3)2 when we have it.
The molality of the solute is
$$ \begin{align} \Delta T_b &= K_b \cdot b_{solution} \cdot i \\ \\ b_{solution} &= \frac{\Delta T_b}{K_b \cdot i} \end{align}$$
We can use Kb of water as calculated in example 2 above, Kb = 0.512. And for i, because Mg(NO3)2 is soluble and splits into three separate ions, we'll use i = 3.
$$ \begin{align} b_{solution} &= \frac{4 \; K}{0.512 \; K\cdot mol^{-1} \cdot 3} \\ \\ &= 2.604 \; mol/Kg \; H_2O \end{align}$$
Converting that number of moles of solute to grams gives
$$ \begin{align} 2.604 \; mol \; Mg(NO_3)_2 &\left( \frac{148.32 \; g}{1 \; mol \; Mg(NO_3)_2} \right) \\ \\ &= 386 \; g \; Mg(NO_3)_2 \end{align}$$
That's a lot of salt! It can be done, though. You could actually dissolve more than 1 Kg of Mg(NO3)2 in a liter of water.
Calculate the change in boiling point of water after addition of the following solutes to 500 ml of pure water.
Roll over or tap the problem box for the solution.
An azeotrope is a mixture of two liquids which, because they are mixed, have a common boiling temperature.
Water and ethanol form an azeotropic pair, which is why all of the water can never be extracted from ethanol just by boiling and distillation.
xaktly.com by Dr. Jeff Cruzan is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. © 2012-2019, Jeff Cruzan. All text and images on this website not specifically attributed to another source were created by me and I reserve all rights as to their use. Any opinions expressed on this website are entirely mine, and do not necessarily reflect the views of any of my employers. Please feel free to send any questions or comments to [email protected]. | CommonCrawl |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.